## Counting minimal puzzles: subsets, supersets, etc.

Everything about Sudoku that doesn't fit in one of the other sections

### Re: Counting minimal puzzles: subsets, supersets, etc.

Thanks, ronk, I appreciate that.
Red Ed

Posts: 633
Joined: 06 June 2005

### subset method for the grey zone ?

Instead of harping on about the past, here is a much more interesting problem.

As I mentioned in the "Sudoku grey zone" thread, we know a lot about puzzles in T&E(1) and about puzzles with extreme SER, but we are currently lacking knowledge about puzzles in the grey zone (SER between 9.0 and 10.5).

I don't think this definition of the grey zone can be used as such for generating new puzzles because SER is too hard to compute. But testing if a puzzle is in T&E(1) can be very fast. So, here is my question/challenge:

Using the subsets/supersets method, can one generate puzzles outside T&E(1) with known bias?

I know this is beyond the practical reach of top-down or controlled-bias generators (the complement of T&E(1) contains less than 1 minimal in ~ 30,000,000).
denis_berthier
2010 Supporter

Posts: 1262
Joined: 19 June 2007
Location: Paris

### Re: Counting minimal puzzles: subsets, supersets, etc.

Hi, denis_berthier!
denis_berthier wrote:This is an obvious falsification of history. If the "subset method" had been described four years ago, how could anyone believe you didn't run it 10 minutes, instead of speaking only of the "superset method" and admitting that ...

denis_berthier wrote:If you think that the corrections you made at my request are only typography, that's hopeless.
...
From your sentence "...", I see that you have finally understood that you don't have to define the probability for all the m.

denis_berthier wrote:First version: "Pr(m), the probability that m is among the subset of things picked randomly by A" - totally meaningless.
Corrected version: "Pr(m) ≡ ΣA∋m Pr(A=A)"
If you consider these are the same things, I see there's still no way of having a serious discussion with you and your politician's tricks; I don't see any reason to lose more time with all this.

Etc.

The phrases cited above, to my mind, demonstrate a lack of respect for the other participants in the discussion. IMO such a style of discussion is absolutely unacceptable in public discussion places.

I think you should apologize to Red Ed and blue. Please change your discussion style; otherwise, in a short time you'll find nobody ready to discuss anything with you.

Serg
Serg
2018 Supporter

Posts: 627
Joined: 01 June 2010
Location: Russia

### Re: Counting minimal puzzles: subsets, supersets, etc.

Hi Serg,

Probably you should read the original "real distribution" thread in order to get a more balanced view of the "discussion styles" between Red Ed and me. I agree that this is not optimal.
As to blue, I can't see why you involve him in this affair.
denis_berthier
2010 Supporter

Posts: 1262
Joined: 19 June 2007
Location: Paris

### Re: Counting minimal puzzles: subsets, supersets, etc.

FWIW: I haven't felt personally offended.

The general atmosphere has been poor, and I find that offensive.
I think Red Ed has been exceedingly polite, given the circumstances.

Blue.
blue

Posts: 779
Joined: 11 March 2013

### Re: Counting minimal puzzles: subsets, supersets, etc.

I would like to think that we contribute here to share ideas and to benefit from each other's respective strengths, rather than to show off our superiority. I therefore respond better to those who post openly in the same spirit than to those who are guarded and evasive in their replies, and I won't respond at all to those who are downright disrespectful unless I feel that others may benefit from my reply.

I therefore find it extraordinary that this exchange has lasted as long as it has.
David P Bird
2010 Supporter

Posts: 1040
Joined: 16 September 2008
Location: Middle England

### Re: subset method for the grey zone ?

denis_berthier wrote:Using the subsets/supersets method, can one generate puzzles outside T&E(1) with known bias?
I know this is beyond the practical reach of top-down or controlled-bias generators (the complement of T&E(1) contains less than 1 minimal in ~ 30,000,000).

What's T&E(1)? What do the (known) puzzles outside of T&E(1) typically look like -- are they minimal, and what numbers of clues?

There's something else I want to try first, but then I might revisit the ~(T&E(1)) question.

(EDIT: or maybe not. I've looked at the "grey zone" thread now and don't see any sensible way of tackling the problem.)
Red Ed

Posts: 633
Joined: 06 June 2005

### Re: subset method for the grey zone ?

Red Ed wrote:
denis_berthier wrote:Using the subsets/supersets method, can one generate puzzles outside T&E(1) with known bias?
I know this is beyond the practical reach of top-down or controlled-bias generators (the complement of T&E(1) contains less than 1 minimal in ~ 30,000,000).

What's T&E(1)? What do the (known) puzzles outside of T&E(1) typically look like -- are they minimal, and what numbers of clues?

T&E(1): you must have seen it in the "grey zone" thread. Otherwise, you can see section 5.6 of my last book ("Pattern-Based Constraint Satisfaction").
non-T&E(1), or non-T&E for short, is the tail of the distribution wrt hardness. So, this is a hard problem anyway.
My vague estimate of 1 puzzle in ~30,000,000 relies on calculations by Blue, who found only 2 in ~60,000,000 by top-down generation (in computations that weren't specially tailored for this problem).
I think non-T&E is a good way of defining the tail of the distribution:
- unlike any definition based on SER bounds, it is stable wrt isos (and wrt "analogy", i.e. the restricted logical equivalence row/col block/square);
- it is (equivalent to) a pure logic definition (not solvable by braids);
- it is (relatively) easy to compute;
- it is broad enough to include all the known "hardest".

All the puzzles I speak of here are minimal. It'd be too easy to take any minimal and to delete clues until it is no longer in T&E (it may not work for all the minimals, but it should work often).
As for how non-T&E puzzles look (i.e. whether they have any peculiarities), I think no one can say for sure as of now. But it's unlikely that they have any exterior sign of non-T&E-ishness. [I haven't followed the work on UAs and I don't know if there's any relation between UAs and (any notion of) hardness, but if so, it might be a useful filter].
Number of clues: any (or, if this is too hard - any reachable number).

Why do I think this problem might be reachable by subset/superset?
We don't know how non-T&E puzzles are scattered among all the minimals, in particular whether all/most complete grids have any. If the latter were true, there would be so few per complete grid that controlled-bias would have no chance of reaching them; perhaps chances would be better for subset/superset, as it searches systematically for all the minimals related to an s-clue sub-grid (but, of course, everything also depends on how non-T&E puzzles are scattered among the s-clue sub-grids).
If they are more concentrated on a small part of the complete grids, chances are very low by either method.
denis_berthier
2010 Supporter

Posts: 1262
Joined: 19 June 2007
Location: Paris

### ~TE(1)

I've run a "subsets, size = 30" search for non-T&E(1) puzzles, for 20 hours, with dismal results.

Code:
```
513242719 samples
5225451 size 30 hits

~T&E(1) puzzles
+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 23 |         5 |  4.5652e+005 |   44.72%   |
| 24 |         2 |  1.5130e+006 |   70.71%   |
+----+-----------+--------------+------------+

minimal puzzles
+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 20 |     10388 |  3.1624e+006 |    1.71%   |
| 21 |    690493 |  1.2823e+009 |    0.40%   |
| 22 |  13018356 |  1.6117e+011 |    0.17%   |
| 23 |  75534932 |  6.8966e+012 |    0.10%   |
| 24 | 140772103 |  1.0650e+014 |    0.07%   |
| 25 |  86989379 |  6.2519e+014 |    0.06%   |
| 26 |  18456457 |  1.4856e+015 |    0.08%   |
| 27 |   1378911 |  1.5262e+015 |    0.17%   |
| 28 |     36564 |  7.2843e+014 |    0.71%   |
| 29 |       304 |  1.6049e+014 |    6.28%   |
| 30 |         1 |  2.7453e+013 |  100.00%   |
+----+-----------+--------------+------------+
```

The results are too crude to draw any serious conclusions about the total number of non-T&E(1) puzzles.
They (weakly) suggest one in ~15 million size 23 puzzles, and one in ~70 million size 24 puzzles.
Only 18,456,457 size 26 puzzles and 1,378,911 size 27 puzzles were actually tested, though -- and that is where the puzzle distribution peaks.

The other news isn't good either.
I missed out on saving the puzzles from a 4 hour "test" run -- lost 2 of 7.
The other 5 don't look very impressive (to SE):

Code:
```
size 23:
...4...89............1683......47..38..6....1.9.......3...856...52...1....9.....2 SER=8.3
.....3....1.4.....9...6..........6.9.2.7...454...9.18..3...4...8.......4..1...896 SER=9.1
....8.6...96..2...2....35...7....3.....84...1.1.7..9.............5...8.7.8.1..49. SER=8.4
.2...65........162..4.....97...............13...1.86.5...5.9..738..6.....5.2..... SER=3.6
size 24:
8...5...9....2.4.......4..16...8.1..5....9.6.3.......8.3.79.....5....93..6.4..5.. SER=9.2
```

The SER 8.3 puzzle is SER 9.6 if the uniqueness-based techniques are disabled.

The SER 3.6 puzzle gets to a point where a "pure" naked triple is needed -- candidates for all 3 digits in all 3 cells.
When it's "singles" T&E, that puts it in T&E(2).

Regards,
Blue.
blue

Posts: 779
Joined: 11 March 2013

### Re: ~TE(1)

blue wrote:I've run a "subsets, size = 30" search for non-T&E(1) puzzles, for 20 hours, with dismal results.

I would understand if you don't want to run longer computations, but the results are not so negative. Of course, the number of puzzles per hour is still worse than for minimals with controlled-bias, but that was to be expected, as the target is much narrower.

blue wrote:The other 5 don't look very impressive (to SE):
[...]
The SER 8.3 puzzle is SER 9.6 if the uniqueness-based techniques are disabled.
The SER 3.6 puzzle gets to a point where a "pure" naked triple is needed -- candidates for all 3 digits in all 3 cells.
When it's "singles" T&E, that puts it in T&E(2).

I find this interesting, because it gives an idea of the obstructions to being in T&E.
The pure NT puzzle is one of the rare NT cases not subsumed by whips.
I computed the BpB classification for the 5 puzzles:

Code:
```
size 23:
...4...89............1683......47..38..6....1.9.......3...856...52...1....9.....2 SER=8.3    in gT&E, i.e. p=1
.....3....1.4.....9...6..........6.9.2.7...454...9.18..3...4...8.......4..1...896 SER=9.1    in T&E(B2), i.e. p=2
....8.6...96..2...2....35...7....3.....84...1.1.7..9.............5...8.7.8.1..49. SER=8.4    in gT&E, i.e. p=1
.2...65........162..4.....97...............13...1.86.5...5.9..738..6.....5.2..... SER=3.6    in gT&E, i.e. p=1
size 24:
8...5...9....2.4.......4..16...8.1..5....9.6.3.......8.3.79.....5....93..6.4..5.. SER=9.2   in T&E(B2), i.e. p=2
```

Of course, this sample is much too small to draw any conclusions, but I'm surprised to find 2 p=2 for 3 p=1.
denis_berthier
2010 Supporter

Posts: 1262
Joined: 19 June 2007
Location: Paris

### Re: ~TE(1)

Hi Denis,

denis_berthier wrote:Of course, this sample is much too small to draw any conclusions, but I'm surprised to find 2 p=2 for 3 p=1.

Thanks for the analysis.
I've got the code running for another 20 hrs (2 x 10).
No time to comment on other aspects now ...

Blue.
blue

Posts: 779
Joined: 11 March 2013

### Re: ~TE(1)

blue wrote:I've run a "subsets, size = 30" search for non-T&E(1) puzzles, for 20 hours, with dismal results.
Yep, a pity that more ~T&E(1) puzzles didn't pop out ... but ... wow ... how are you finding minimals so quickly? Over 1 million 24s in ten minutes: my program's much slower than that (even allowing for the vintage hardware on which it's running). Could you share the source code?
Red Ed

Posts: 633
Joined: 06 June 2005

### Re: ~TE(1)

Hi Ed,

Now I wish I had looked at your source code at some point.
Is the search depth first, or breadth first ? (BFS for me).

When I ran my code for 1 minute, it said it handled 4,955,364 "valid but not necessarily minimal" puzzles with a total of 127,486,165 clues, but it only needed to check 5,195,113 of the clues for being redundant. That was against a backdrop of it testing 434,656 size 30 grids for being valid puzzles, and finding 4,430 valid cases.

Do you get anything similar to that (for ratios) ?

For the search code, when a valid puzzle, P0, can be produced from puzzles (P0 + c1),..,(P0 + cn), then before any tests are done with P0, it will have acquired a mask of known non-redundant clues that is the OR of the masks of actual non-redundant clues for (P0 + c1),...,(P0 + cn). Whether P0 is valid or not, if (P0 + c1),..,(P0 + cn) were valid, then P0 gets checked at most once for being valid ... as the redundancy check for c1 in (P0 + c1), say. For the other redundancy checks, the code does a binary search looking for P0 to have been produced from an earlier "input". I said "at most once", since if (P0 + c1) is the first valid P0 extension to come up in the testing, then the code can know, for some c0, that (P0 + c0) would have come up first if it had been valid, and it can still tell from the binary search for P0, whether c1 is redundant.

This is the source code, minus the solver code, RNG code, and random grid code.
(Read "non-redundant" where it says "forced" or "forced keeper").
If the difference is due to the solver code, please don't ask for that.
Code:
```cpp
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <process.h>
#include <windows.h>
#include "Random.h"
#include "Solver.h"
#include "rangrid.h"

Random RNG;
Solver solver;

#define TIMER_VAL 600000 // 10 min.
//#define DUMP_PUZZLES
//#define SHOW_COUNT_STATS

__int64 sampleCount;
__int64 subgridHits;
unsigned char solutionGrid[81];
int cellList[81];

enum {
    SubgridSize = 30,
    MinPuzzleSize = 17,
};

#if defined(DUMP_PUZZLES)
FILE *fp_dump;

void dump_subset_puzzle(__int64 mask)
{
    unsigned char puzzle[81];
    memset(puzzle, 0, sizeof(puzzle));
    for (i = SubgridSize; --i >= 0; ) {
        if (mask & ((__int64)1 << (SubgridSize - 1 - i)))
            continue;
        puzzle[cellList[i]] = solutionGrid[cellList[i]];
    }
    fprintf(fp_dump, "%011I64d,%2d:", sampleCount, workingSize);
    for (i = 0; i < 81; i++)
        fprintf(fp_dump, "%d", puzzle[i]);
    fprintf(fp_dump, "\n");
}
#endif

__int64 subsetSum[SubgridSize + 1];
__int64 subsetSum2[SubgridSize + 1];

struct subgrid_data {
    __int64 removedGivensMask;            // submask of '(1 << (SubgridSize)) - 1'
    __int64 knownForcedKeepersMask;
};

subgrid_data *data_ptr;
int allocCount;

void do_subset_test()
{
    int minimal[SubgridSize + 1];
    memset(minimal, 0, sizeof(minimal));
    int i;
    if (!allocCount)
    {
        allocCount = 65536;
        data_ptr = (subgrid_data *)malloc(allocCount * sizeof(subgrid_data));
        if (!data_ptr) {
            puts("no RAM");
            exit(-1);
        }
    }
    int used = 1;        // count of valid items in 'data_ptr[]'
    data_ptr[0].removedGivensMask = 0;
    data_ptr[0].knownForcedKeepersMask = 0;
    int firstPuzzle[SubgridSize + 1];
    int puzzleCount[SubgridSize + 1];
    firstPuzzle[SubgridSize] = 0;
    puzzleCount[SubgridSize] = 1;
    for (int workingSize = SubgridSize; workingSize >= MinPuzzleSize; --workingSize)
    {
        // top of pass to produce puzzles of size 'workingSize - 1',
        // from puzzles of size 'workingSize'
        if (!puzzleCount[workingSize])
            break;
        firstPuzzle[workingSize - 1] = used;
        puzzleCount[workingSize - 1] = 0;
        int end = firstPuzzle[workingSize] + puzzleCount[workingSize];
        for (int index = firstPuzzle[workingSize]; index < end; index++)
        {
            __int64 mask = data_ptr[index].removedGivensMask;
            __int64 dontTryMask = mask | data_ptr[index].knownForcedKeepersMask;
            if (dontTryMask == (((__int64)1 << SubgridSize) - 1))
            {
                // puzzle is minimal
                ++minimal[workingSize];
#if defined(DUMP_PUZZLES)
                dump_subset_puzzle(mask);
#endif
                continue;
            }
            if (workingSize < SubgridSize)
            {
                solver.Reset();
                for (i = SubgridSize; --i >= 0; ) {
                    if (mask & ((__int64)1 << (SubgridSize - 1 - i)))
                        continue;
                    solver.PushCellValue(cellList[i] / 9, cellList[i] % 9, solutionGrid[cellList[i]]);
                }
            }
            int found = 0;
            int endMark = used;
            int updateList[SubgridSize];
            for (i = 0; i < SubgridSize; i++)
            {
                if (dontTryMask & ((__int64)1 << (SubgridSize - 1 - i)))
                    continue;
                __int64 newRemovedGivensMask = mask | ((__int64)1 << (SubgridSize - 1 - i));
                if (((__int64)1 << (SubgridSize - 1 - i)) > (mask & ~(mask - 1)))
                {
                    // the proposed new puzzle, was already tested, in principle,
                    //    -- if the cellList[i] given can be safely removed, then
                    //       a data item for 'newRemovedGivensMask' will be found
                    //       in the list of (new) 'workingSize - 1' sized puzzles
                    //    -- the code below, does a binary search, to try to find it
                    int foundAt = -1;
                    int start = firstPuzzle[workingSize - 1] - 1;
                    int end = endMark;
                    while (end > start + 1)
                    {
                        int n = (start + end) >> 1;
                        __int64 res = newRemovedGivensMask - data_ptr[n].removedGivensMask;
                        if (!res) {
                            foundAt = n;
                            break;
                        }
                        if (res > 0)    // reverse order
                            end = n;
                        else
                            start = n;
                    }
                    if (foundAt < 0) {
                        // the given in 'cellList[i]' is "forced"
                        dontTryMask |= ((__int64)1 << (SubgridSize - 1 - i));
                        continue;
                    }
                    // the given was removable [ and the corresponding
                    // puzzle was produced earlier, from a different
                    // 'workingSize' puzzle ]
                    // remember where it is, so its 'knownForcedKeepersMask'
                    // can be updated when the data for this puzzle is complete
                    // note: it's a subpuzzle of this puzzle as well, so a given
                    // in this puzzle that's a "forced keeper", will be "forced"
                    // in it as well.
                    updateList[found++] = foundAt;    // "update later" list
                    continue;
                }
                // ... this is where new puzzle can potentially be produced
                solver.PullCellValue(cellList[i] / 9, cellList[i] % 9);
                solver.PushElimination(cellList[i] / 9, cellList[i] % 9, solutionGrid[cellList[i]]);
                solver.stopAfterOne = true;
                solver.Search();
                solver.stopAfterOne = false;
                solver.PopElimination();
                solver.PushCellValue(cellList[i] / 9, cellList[i] % 9, solutionGrid[cellList[i]]);
                if (solver.solutionCount)
                {
                    // the given in 'cellList[i]' is "forced"
                    dontTryMask |= ((__int64)1 << (SubgridSize - 1 - i));
                    continue;
                }
                // the given in 'cellList[i]' is redundant
                //  -- need to add a new puzzle for the next pass
                if (used + 1 > allocCount)
                {
                    void *p = data_ptr;
                    allocCount <<= 1;
                    data_ptr = (subgrid_data *)malloc(allocCount * sizeof(subgrid_data));
                    if (!data_ptr) {
                        puts("no RAM");
                        exit(-1);
                    }
                    memcpy(data_ptr, p, used * sizeof(data_ptr[0]));
                    free(p);
                }
                ++puzzleCount[workingSize - 1];
                updateList[found++] = used;    // "update later" list
                data_ptr[used].removedGivensMask = newRemovedGivensMask;
                data_ptr[used].knownForcedKeepersMask = 0;    // update this later
                ++used;
            }
            // note: 'dontTryMask' kept a record of (newly revealed) "forced" givens
            data_ptr[index].knownForcedKeepersMask = dontTryMask;
            if (found)
            {
                // the current input puzzle is not minimal
                // (it has 'found' valid subpuzzles, of size 'workingSize - 1')
                // update the "known forced keepers" masks for those puzzles
                for (i = 0; i < found; i++)
                    data_ptr[updateList[i]].knownForcedKeepersMask |= dontTryMask;
                continue;
            }
            // the current input puzzle, *is* minimal
            ++minimal[workingSize];
#if defined(DUMP_PUZZLES)
            dump_subset_puzzle(mask);
#endif
        }
    }
    for (i = MinPuzzleSize; i <= SubgridSize; i++) {
        int count = minimal[i];
        if (count) {
            subsetSum[i] += count;
            subsetSum2[i] += (__int64)count * count;
        }
    }
}

double choose(int n1, int n2)
{
    double x = 1.0;
    while (n2 > 0)
        x = x * (n1--) / (n2--);
    return x;
}

void show_subset_stats()
{
    int min = MinPuzzleSize - 1;
    int max = SubgridSize + 1;
    int i;
    while (++min < max) {
        if (subsetSum[min])
            break;
    }
    if (min == max) {
        puts("no hits ?");
        return;
    }
    while (--max >= min) {
        if (subsetSum[max])
            break;
    }
    printf("+----+----------+--------------+------------+\n");
    printf("| Cl |    Count |  E(nr/grid)  | E(rel err) |\n");
    printf("+----+----------+--------------+------------+\n");
    for (i = min; i <= max; i++)
    {
        double avg = (double)subsetSum[i] / sampleCount;
        double dev = sqrt((double)subsetSum2[i] / sampleCount - avg * avg);
        double est = avg * choose(81, i) / choose(SubgridSize, i);
        double err = dev  / sqrt((double)sampleCount) * choose(81, i) / choose(SubgridSize, i);
        printf("| %2d | %8I64d | %12.4e |  %6.2f%%   |\n",
            i, subsetSum[i], est, ((est > 0) ? 100.0 * err / est : 0.0));
    }
    printf("+----+----------+--------------+------------+\n");
#if defined(SHOW_COUNT_STATS)
    for (i = min; i <= max; i++)
        printf("| %2d | %8I64d | %12I64d |            |\n",
            i, subsetSum[i], subsetSum2[i]);
    printf("+----+----------+--------------+------------+\n");
#endif
}

bool stop;

void timer(void *)
{
    HANDLE h = CreateEvent(NULL, TRUE, FALSE, NULL);
    WaitForMultipleObjects(1, &h, FALSE, TIMER_VAL);
    stop = true;
}

void main()
{
    SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_IDLE);
    load_random_grid_data();
    RNG.SeedRandom();
    int i;
    for (i = 0; i < 81; i++)
        cellList[i] = i;
#if defined(DUMP_PUZZLES)
    fp_dump = fopen("dump.txt", "wt");
#endif
    _beginthread(timer, 0, 0);
    __int64 t0, t1, dummyt[3];
    __int64 t0k, t1k;
    GetProcessTimes(GetCurrentProcess(), (FILETIME *)&dummyt[0],
        (FILETIME *)&dummyt[1], (FILETIME *)&t0k, (FILETIME *)&t0);
    while (!stop)
    {
        choose_random_grid(solutionGrid);
        ++sampleCount;
        solver.Reset();
        for (i = 0; i < SubgridSize; i++)
        {
            int n = i + RNG.rand(81 - i);
            int cell = cellList[n];
            cellList[n] = cellList[i];
            cellList[i] = cell;
            solver.PushCellValue(cell / 9, cell % 9, solutionGrid[cell]);
        }
        solver.Search();
        if (solver.solutionCount > 1)
            continue;
        ++subgridHits;
        do_subset_test();
    }
    GetProcessTimes(GetCurrentProcess(), (FILETIME *)&dummyt[0],
        (FILETIME *)&dummyt[1], (FILETIME *)&t1k, (FILETIME *)&t1);
    double DT = (t1 - t0) / 10000000.0;
    double DTk = (t1k - t0k) / 10000000.0;
#if defined(DUMP_PUZZLES)
    fclose(fp_dump);
#endif
    printf("%I64d samples\n", sampleCount);
    printf("%I64d size %d hits\n", subgridHits, SubgridSize);
    printf("\n");
    show_subset_stats();
    printf("\n");
    printf("CPU time: %.2f sec\n", DT + DTk);
}
```

Regards,
Blue.
blue

Posts: 779
Joined: 11 March 2013

### Re: ~TE(1)

Hi Denis,

blue wrote:I've got the code running for another 20 hrs (2 x 10).
No time to comment on other aspects now ...

Bad news: only one new puzzle was produced in that run.
I'm running it for 24 hrs now (3 x 8), to check my sanity -- 3 hits after 9.75 hrs combined, so far.
I don't know what to think.

Here's the new puzzle from the last 20 hr run:

Code:
```
size 24:
8..76......5...8...7..4.1....9.37......1.43.........2.6..5....1..2.....5..1.76.3. SER=9.3
```

Regards,
Blue.

P.S. Pop Quiz for stats buffs:
1) Using maximum likelihood estimation, what's the estimated average hit rate, given 2 hits in 4 hours, 2 hits in 8 hours, 3 hits in 8 hours, 0 hits in 10 hours, 1 hit in 10 hours, and 3 hits in 10 hours?
2) Given the value from (1), what are the probabilities of getting a) >= 2 hits in 4 hours, b) <= 1 hit in 20 hours?

Edit: fixed two erroneous 10 hour times

I'm off to bed ...
Last edited by blue on Thu Jun 27, 2013 5:23 am, edited 1 time in total.
blue

Posts: 779
Joined: 11 March 2013

### Re: ~TE(1)

Hi Blue,

Thanks. Here is the BpB classification:

Code:
```
size 24:
8..76......5...8...7..4.1....9.37......1.43.........2.6..5....1..2.....5..1.76.3. SER=9.3  in gT&E, i.e. p=1
```

As for the apparently inconsistent stats, I don't know precisely all the conditions of your runs. But if the ~T&E puzzles are very unevenly distributed among the complete grids, the inconsistency may be due to the different choices of grids.

blue wrote:I'm off to bed ...

Don't dream of Sudoku !
denis_berthier
2010 Supporter

Posts: 1262
Joined: 19 June 2007
Location: Paris
