Counting minimal puzzles: subsets, supersets, etc.


20x17 #puzzles corrected

Postby dobrichev » Wed Jul 31, 2013 9:26 pm

These are the corrected counts for the grid with 20 17s
Code: Select all
        20 17
       294 18
     32748 19
  12141180 20
dobrichev
2016 Supporter
 
Posts: 1863
Joined: 24 May 2010

Re: Counting minimal puzzles: subsets, supersets, etc.

Postby coloin » Wed Jul 31, 2013 11:14 pm

Estimated counts for minimal puzzles in the 20-17 grid...........
Code: Select all
+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 17 |        24 |  2.1983e+001 |   39.97%   |
| 18 |        56 |  2.9844e+002 |   26.96%   |
| 19 |       963 |  3.2332e+004 |    6.60%   |
| 20 |     52345 |  1.2107e+007 |    0.83%   |
+----+-----------+--------------+------------+

I am very relieved that the results fit very easily within the rel. error ...... !

This now has implications for the total number of 18-clue puzzles - Afmob may well be right!

C
coloin
 
Posts: 2494
Joined: 05 May 2005
Location: Devon

Re: Comments on my computations

Postby denis_berthier » Thu Aug 01, 2013 1:39 am

Afmob wrote:As for the results, one can often find situations where one method is better than the other and vice versa.

For high numbers of clues, subset is "probably" better.
In all the other cases, I can't see how subset or superset could be better unless you erroneously discarded the sampling errors due to your small numbers of complete grids. Starting from only 100k (subset) or 10k (superset) grids sets a bound on the precision of any calculations you can do; the precision claimed for your error estimates is finer than this bound allows and is therefore not valid.
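
A toy simulation of this statistical point (purely illustrative numbers, not Sudoku data): re-using each of G distinct grids K times cannot push the standard error of a per-grid quantity below the between-grid floor sqrt(between-grid variance / G), no matter how large K is.
Code: Select all
import random, statistics

# Toy model: each "grid" has its own true mean (spread BETWEEN_SD); each
# re-use of a grid gives a noisy observation (spread WITHIN_SD).
random.seed(1)
G, BETWEEN_SD, WITHIN_SD = 500, 1.0, 3.0

def run(K):
    # Draw G grid-level means, re-use each grid K times, return the estimated global mean.
    grid_means = [random.gauss(0.0, BETWEEN_SD) for _ in range(G)]
    obs = [random.gauss(m, WITHIN_SD) for m in grid_means for _ in range(K)]
    return statistics.fmean(obs)

floor = BETWEEN_SD / G ** 0.5
for K in (1, 10, 100):
    estimates = [run(K) for _ in range(100)]
    print(f"K={K:3d}: observed standard error {statistics.stdev(estimates):.4f} (floor {floor:.4f})")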


Afmob wrote:For example, if I look at the number of 22-clue minimals, you got a relative error of 1.23% and blue got about 0.05%, which is a major difference. Note that I chose blue's results since they have no sample error and the computation time is not as large as mine was.

As I said in my previous post, if we concentrate on low clues, the controlled-bias generator can be made drastically faster by increasing the number of clues deleted from the start, without checking whether the resulting puzzle is minimal.
denis_berthier
2010 Supporter
 
Posts: 4213
Joined: 19 June 2007
Location: Paris

Re: Comments on my computations

Postby Afmob » Thu Aug 01, 2013 3:56 am

denis_berthier wrote:Starting from only 100k (subset) or 10k (superset) grids sets a bound on the precision of any calculations you can do.

Please read my post again. I did not use only 100k or 10k grids but about 60.2 billion (subset) and 2.1 billion (superset) different random unbiased grids; otherwise I wouldn't have made this large computation. The only sample error I introduce is that I use each grid 10 times, so that the generator takes up only a small part of the computation time.
Afmob
 
Posts: 132
Joined: 28 June 2011

Re: Comments on my computations

Postby blue » Thu Aug 01, 2013 4:02 am

Hi Denis,

denis_berthier wrote:
Afmob wrote:For example, if I look at the number of 22-clue minimals, you got a relative error of 1.23% and blue got about 0.05%, which is a major difference. Note that I chose blue's results since they have no sample error and the computation time is not as large as mine was.

As I said in my previous post, if we concentrate on low clues, the controlled-bias generator can be made drastically faster by increasing the number of clues deleted from the start, without checking whether the resulting puzzle is minimal.

I'm not sure how low you mean when you say "low clues", but if we focus on the size 22 estimate, the upper limit on the speed you could achieve using the controlled-bias method is the speed you would get by choosing random size 22 subgrids of random grids and checking them for being valid puzzles. Note: I'm ignoring the time to check whether the valid cases are also minimal, which is negligible in comparison, and also the time to generate the random samples.
From the number of minimal 22s per grid, you can calculate the average number of (grid, size 22 subgrid) samples that you need to test before finding a valid 22. It's choose(81,22)/(1.6 * 10^11), or about 230,000,000 samples. In order to get a relative error of 0.05% in the estimate (one part in 2000), you would need to run until you had 2000*2000 = 4,000,000 "hits". The required number of samples would then be around 9.2 * 10^14.

If you could check 120,000 22s per second for being valid puzzles, the total time required would be around 2.1 million CPU hours. With my "(size 30) subset method" code, it took 300 hours to get that degree of precision. If a 1% relative error were acceptable, then both times would drop by a factor of 400, but the ratio would remain the same.
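
The arithmetic above can be reproduced directly from the figures quoted in this post (roughly 1.6e11 minimal 22s per grid, a 0.05% relative-error target, and 120,000 validity checks per second); a quick check:
Code: Select all
from math import comb

minimal_22s_per_grid = 1.6e11                           # figure quoted above
samples_per_hit = comb(81, 22) / minimal_22s_per_grid   # ~2.3e8 (grid, subgrid) samples per valid 22
hits_needed = (1 / 0.0005) ** 2                         # 0.05% rel. error -> 2000^2 = 4,000,000 hits
total_samples = samples_per_hit * hits_needed           # ~9.2e14 samples
cpu_hours = total_samples / 120_000 / 3600              # at 120,000 validity checks per second

print(f"samples per hit : {samples_per_hit:.3g}")
print(f"total samples   : {total_samples:.3g}")
print(f"CPU hours       : {cpu_hours:.3g}")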

Regards,
Blue.
blue
 
Posts: 1045
Joined: 11 March 2013

Re: Comments on my computations

Postby denis_berthier » Thu Aug 01, 2013 4:13 am

Afmob wrote:
denis_berthier wrote:Starting from only 100k (subset) or 10k (superset) grids sets a bound on the precision of any calculations you can do.

Please read my post again. I did not use only 100k or 10k grids but about 60.2 billion (subset) and 2.1 billion (superset) different random unbiased grids; otherwise I wouldn't have made this large computation. The only sample error I introduce is that I use each grid 10 times, so that the generator takes up only a small part of the computation time.


OK, I had overlooked this part of your post.

However, my main objection, concerning the high inefficiency of the method for computing the means of other random variables, still holds.
denis_berthier
2010 Supporter
 
Posts: 4213
Joined: 19 June 2007
Location: Paris

Re: Comments on my computations

Postby denis_berthier » Thu Aug 01, 2013 4:45 am

Hi Blue,

blue wrote:
denis_berthier wrote:
Afmob wrote:For example, if I look at the number of 22-clue minimals, you got a relative error of 1.23% and blue got about 0.05%, which is a major difference. Note that I chose blue's results since they have no sample error and the computation time is not as large as mine was.

As I said in my previous post, if we concentrate on low clues, the controlled-bias generator can be made drastically faster by increasing the number of clues deleted from the start, without checking whether the resulting puzzle is minimal.

I'm not sure how low you mean when you say "low clues", but if we focus on the size 22 estimate,


I consider 22 to be extremely low: 4 standard deviations below the real mean. But your interpretation of "low" is understandable in the context of the above discussion.

In one of the last posts (p. 43 of the pdf) in the "real distribution" thread, I wrote that choosing a random grid (time ~0) and randomly deleting 81-n clues (again time ~0) would produce a minimal puzzle once in 2.2946e+08 cases for n=22 (which coincides with your estimate).
So, in this extreme case, you're probably right.


Indeed, I've never been very interested in the extreme values (*), the main purpose of my brief incursion into the domain of puzzle generation being to compute the distributions of various random variables - a goal for which the subset/superset method is totally unfit.
For me, as it is only very weakly correlated with the difficulty of a puzzle (by whichever method you measure it), the number of clues has never been a goal in itself.

(*) but I appreciate that we now have good results for them, thanks to you and Afmob.
denis_berthier
2010 Supporter
 
Posts: 4213
Joined: 19 June 2007
Location: Paris

Re: Comments on my computations

Postby Red Ed » Thu Aug 01, 2013 8:46 am

denis_berthier wrote:... to compute the distributions of various random variables - a goal for which the subset/superset method is totally unfit.
A fairer summary would be that subset/superset methods are advantageous in the computation of "cheap" random variables, whereas path probing (controlled bias generation) is advantageous for "expensive" random variables. Further, subset/superset methods are especially powerful in the tails of the distribution, whereas path probing is competitive (or better if the random variable is sufficiently expensive) around the centre or in computation of the overall mean.
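
A crude cost model may make the trade-off concrete (every number below is a hypothetical placeholder; only the shape of the comparison matters): total time is the number of samples a method needs times the per-sample cost of generating plus evaluating, so a many-cheap-samples method wins when the random variable is nearly free to evaluate, and a few-expensive-samples method wins when evaluation dominates.
Code: Select all
def total_hours(n_samples, t_generate, t_evaluate):
    # Total time = samples * (per-sample generation cost + per-sample evaluation cost).
    return n_samples * (t_generate + t_evaluate) / 3600.0

# Hypothetical figures only: one method needs far fewer samples but pays more to
# generate each one; the other needs ~500x more samples that are nearly free.
n_small, t_gen_small = 2e6, 1e-2   # few, expensive samples (e.g. path probing)
n_large, t_gen_large = 1e9, 1e-6   # many, cheap samples (e.g. a subset/superset catalogue)

for t_eval in (1e-6, 1e-3, 1e-1):  # "cheap" ... "expensive" random variable
    print(f"eval {t_eval:g}s/puzzle: "
          f"few-sample {total_hours(n_small, t_gen_small, t_eval):9.1f} h, "
          f"many-sample {total_hours(n_large, t_gen_large, t_eval):9.1f} h")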

I would recommend the continued use of subsets/supersets in this thread, given that the focus here is on the whole number-of-clues distribution.
Red Ed
 
Posts: 633
Joined: 06 June 2005

Re: Comments on my computations

Postby denis_berthier » Thu Aug 01, 2013 9:35 am

Red Ed wrote:
denis_berthier wrote:... to compute the distributions of various random variables - a goal for which the subset/superset method is totally unfit.
A fairer summary would be that subset/superset methods are advantageous in the computation of "cheap" random variables, whereas path probing (controlled bias generation) is advantageous for "expensive" random variables.

No. Once more, this is ignoring the difference between generating a sample and using it.
Generation is a one-shot business. And, considering the numbers of puzzles generated by each method, I'd say that this business is over (and indeed much beyond what's useful) as far as distributions of RVs are concerned.
Use has to be repeated for every RV.
The 500x computation time ratio is true of "cheap" random variables as well as of "expensive" ones. When you have to repeat a computation billions instead of millions of times, it is never "cheap".


Red Ed wrote:subset/superset methods are especially powerful in the tails of the distribution

OK. And it's great to have these results (even though they are useless for the previous purpose).
denis_berthier
2010 Supporter
 
Posts: 4213
Joined: 19 June 2007
Location: Paris

Re: Comments on my computations

Postby Red Ed » Thu Aug 01, 2013 10:54 am

denis_berthier wrote:Generation is a one-shot business. And, considering the numbers of puzzles generated by each method, I'd say that this business is over (and indeed much beyond what's useful) as far as distributions of RVs are concerned.

Fair enough, although to my taste it's not over because, for example, I'm interested in estimates for the total number of 18s and 35s+.
Red Ed
 
Posts: 633
Joined: 06 June 2005

Parallel superset computation

Postby Afmob » Fri Aug 02, 2013 10:53 am

I've started another large parallel computation which will run for 3 days. The goal is to find a better estimate for the number of 18-clue minimals. At first the plan was to follow eleven's idea and check some random grids for all 18-clue minimals, which is the same as the superset method starting with an empty subset. The problem with this approach is that we would only get a small number of samples (< 10,000) and that the time to analyze a sample can vary between minutes and hours.

But if the method is similar to the superset method, why not use a larger starting subset to increase the number of samples? So I did this, tested several starting sizes (between 2 and 5), and took the one with the best results after 12 hours, which was 3. We should get more than 200,000 samples.

To get some minimal UAs I did not compute the solutions of the subset; I just took the solution grid and looked for UAs by removing tuples of numbers and boxes. Also, this time I don't check for minimality at every level but only when a valid puzzle has been found. Furthermore, I stop adding numbers after 18 clues.

Superset results:
Code: Select all
Computation time: 31x72 hours

235,091 samples
235,091 size 3 subsets
    814 valid minimal puzzles

+----+-------+------------+----------------+
| Cl | Count | E(nr/grid) | E(rel err)*100 |
+----+-------+------------+----------------+
| 18 |   814 |  3.620e-01 |      5.813e+00 |
+----+-------+------------+----------------+
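
Assuming the usual seed-containment weighting for the superset method (each minimal 18-clue puzzle contains comb(18,3) of the comb(81,3) possible size-3 starting subsets of its solution grid), the E(nr/grid) figure in the table can be reproduced as follows:
Code: Select all
from math import comb

# Assumed seed-containment weighting: a minimal 18-clue puzzle contains
# comb(18, 3) of the comb(81, 3) possible size-3 starting subsets of its grid.
hits, samples = 814, 235_091
estimate = hits / samples * comb(81, 3) / comb(18, 3)
print(f"E(18-clue minimals per grid) ~ {estimate:.3e}")  # ~3.620e-01, matching the table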

I will update the table on a daily basis. I expect to get the relative error below 5% but I don't expect to find a 17 clue minimal.
Last edited by Afmob on Mon Aug 05, 2013 8:14 am, edited 7 times in total.
Afmob
 
Posts: 132
Joined: 28 June 2011

Super seeds supersede *sets subsets

Postby Red Ed » Fri Aug 02, 2013 4:34 pm

Can we change terminology a bit?

    Instead of picking a solution grid, an initial subset, then subsets/supersets of that initial subset
    can we please pick a solution grid, a seed pattern (or just "seed"), then subsets/supersets of that seed.
The more of these interesting posts I read, the more I realise that "(initial) subset" risks confusion with the subsets/supersets that are generated from it.
Red Ed
 
Posts: 633
Joined: 06 June 2005

Postby Afmob » Mon Aug 05, 2013 8:21 am

The 18 clue computation has just finished.

The number of 18 clue minimals is even smaller than I had imagined. Sadly, I didn't get the relative error below 5%, but that is due to some large clusters like this one:
...........8.6.4...97..3...6..41...............5....73.8...7...4.....1.....5...9.
.............6.4...97..3...6..41...............5....73.....7..54.2...1......8.2..
.......2.....6.4...97..3...6..41...............5....73....27..54....91...........
.......2.....6.4...97..3...6..41...............5....73....27..54.....1......8....
.......2.....6.4...97..3...6..41...............5.9..73.....7...4.....1.....5..2..
.......2.....6.4...97..3...6..41...............5....73.....7...4....91.....5..2..
.......2.....6.4...97..3...6..41...............5....73....27...4.....1......8...6
.............6.4...97..3...6..41...............5....73....27..54.....1......8.2..
.............6.4...97..3...6..41...............5....739....7..54.....1......8.2..
.............6.4...97..3...6..41....8..........5....73.....7..54.....1......8.2..
.............6.4...97..3...6..41......9........5....73.....7..54.....1......8.2..
.............6.4...97..3...6..41...............52...73.....7..54.....1......8.2..
.............6.4...97..3...6..41...............5.9..73.....7..54.....1......8.2..
.............6.4...97..3...6..41...............5....73.....7..54....91......8.2..
.............6.4...97..3...6..41...............5.9..73.....7..54.....1.....5..2..
.............6.4...97..3...6..41...............5....73.....7..54....91.....5..2..
........8....6.4...97..3...6..41...............5....73.....7...4....91.....5..2..
.............6.4...97..3...6..41...............5....7398...7...4.....1.....5...9.
.............6.4...97..3...6..41....8..........5....73.8...7...4.....1.....5...9.
.............6.4...97..3...6..41......9........5....73.8...7...4.....1.....5...9.
.............6.4...97..3...6..41..5............5.9..73.....7...4.....1.....5..2..
.............6.4...97..3...6..41...2...........5.9..73.....7...4.....1.....5..2..
.............6.4...97..3...6..41..5............5....73.....7...4....91.....5..2..
.............6.4...97..3...6..41...2...........5....73.....7...4....91.....5..2..
.............6.4...97..3...6..41...............5....73.....7...45...91.....5..2..
.............6.4...97..3...6..41...............5....73.....7...4....918....5..2..
.............6.4...97..3...6..41...............52...73.....7...4.....18....5..2..
....5..........4...97..3...6..41...............5.9..73.....7..54..6..1........2..
.......2.......4...97..3...6..41...............5.9..73.8...7..54..6..1...........

Doing this kind of computation to get an estimate for the number of 17 clue minimals won't be useful since we would only get 1 or 2 minimals (if any at all), so the relative error would be way too large.
Afmob
 
Posts: 132
Joined: 28 June 2011

Re: Counting minimal puzzles: subsets, supersets, etc.

Postby eleven » Mon Aug 05, 2013 11:51 am

Nice to see that my 2 billion estimate was that good :)
eleven
 
Posts: 3151
Joined: 10 February 2008

#clues vs difficulty

Postby denis_berthier » Tue Aug 06, 2013 5:35 am



Hi, Blue & Afmob

It is known that there is no relation between the number of clues and the difficulty of a puzzle, in the sense that the correlation coefficient between the number of clues and the SER or W rating is about 0.12 (which is generally considered to indicate no meaningful correlation). (See the pdf pages in the "real distribution" thread.)
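
For readers who want to reproduce this kind of figure, a minimal sketch of the correlation computation (with dummy data standing in for a real puzzle collection and its ratings):
Code: Select all
from statistics import correlation  # Python 3.10+

# Dummy (number_of_clues, rating) pairs -- placeholders, not real Sudoku data.
pairs = [(23, 2.6), (24, 7.1), (25, 2.0), (26, 8.4), (22, 4.5), (27, 6.6)]
clues = [c for c, _ in pairs]
ratings = [r for _, r in pairs]
print(f"Pearson correlation: {correlation(clues, ratings):.2f}")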

However, it is also known that:
- the 17s are generally easier than the mean,
- the mean number of clues for the hardest puzzles is less than the overall mean.
(This doesn't contradict the above, as none of these collections can play any role in computing the correlation coefficient: they are much too small with respect to the whole set of puzzles.)

What can you say about the mean SER for the 18s obtained from your computations?

More generally (but this may be a huge computation, due to correlation), how does the mean SER or the SER distribution vary with the number of clues far from the mean?
denis_berthier
2010 Supporter
 
Posts: 4213
Joined: 19 June 2007
Location: Paris
