Counting minimal puzzles: subsets, supersets, etc.

Everything about Sudoku that doesn't fit in one of the other sections

Re: ~TE(1)

Postby blue » Mon Jul 01, 2013 11:43 pm

eleven wrote:
blue wrote:These are the final numerical results for ~TE(1) and minimal puzzles ...

Thanks blue, for these investigations. They showed me that yet another sudoku conjecture I had was wrong, namely that ~TE(1) puzzles would have an average ER beyond 9.5. With hindsight that is easily explained: my "empirical results" were as biased as the sets I worked on.

I should mention that there's still an outside shot that you were right.
Only 3 puzzles with size >= 25 were produced, and none of size 27. Even though I didn't find many, the numbers were (very weakly) suggesting that most ~TE(1) puzzles have sizes in a range that I couldn't cover well (26,27,?,?). Size 27 was going to be a major problem. :)

Regards,
Blue.
blue
 
Posts: 1045
Joined: 11 March 2013

Re: ~TE(1)

Postby denis_berthier » Tue Jul 02, 2013 2:37 am

blue wrote:
eleven wrote:
blue wrote:These are the final numerical results for ~TE(1) and minimal puzzles ...

Thanks blue, for these investigations. They showed me that yet another sudoku conjecture I had was wrong, namely that ~TE(1) puzzles would have an average ER beyond 9.5. With hindsight that is easily explained: my "empirical results" were as biased as the sets I worked on.

I should mention that there's still an outside shot that you were right.
Only 3 puzzles with size >= 25 were produced, and none of size 27. Even though I didn't find many, the numbers were (very weakly) suggesting that most ~TE(1) puzzles have sizes in a range that I couldn't cover well (26,27,?,?). Size 27 was going to be a major problem. :)


I've added a post about the distribution of clues in the grey zone: http://forum.enjoysudoku.com/the-sudoku-grey-zone-t31143-29.html. This doesn't directly answer your questions, but it indicates that blue is probably right about the number of clues.

As for the SER: since it is based on rules that include some forms of uniqueness AND Subsets, it can't be a surprise that some puzzles with Subset cases not subsumed by whips or braids are not in T&E(1); what is a surprise for me is that Blue found so many such cases (in proportion). But again, this may be due to a strong bias in the number of clues.
denis_berthier
2010 Supporter
 
Posts: 4213
Joined: 19 June 2007
Location: Paris

Re: Class sampling

Postby Red Ed » Wed Jul 03, 2013 8:29 pm

Red Ed wrote:
I wrote:I've not tested this yet, so it might turn out to be a rubbish idea, but I thought I'd better write the theory down or I'd never get round to it.
Preliminary investigation suggests the idea will work in the tails of the distribution.

The benefit appears, sadly, to be rather small. The following applies to finding 21-clue minimals within 28-clue subgrids.

Let the "gnarliness" of a 28-clue subgrid be the sum, over units (boxes, rows and columns), of occupancy^2. Say gnarliness<297 is "very smooth", else <305 is "smooth", else <313 is "ordinary"; else <323 is "rough", else is "very rough". That's five classes. We can find the class probabilities to within a reasonable degree of accuracy by random trials (ignore solution grids, just generate 28-cell patterns): they are 16.91%, 20.33%, 22.25%, 21.48% and 19.03% respectively ... or near enough.

Now spend a bit of time running the (28,21)-subsets method with the 28-clue subgrid restricted to a single class. I left it long enough for the counts to hit the hundreds in order to get a reasonably realistic view of critical parameters like Rate and, particularly, Sigma. The results for each class are tabulated below.
Code: Select all
+----------------------+------+-----------+-------+------------+--------+
|    Class Name (Prob) | Time |    Trials | Count | E(nr/grid) |  E(RE) |
+----------------------+------+-----------+-------+------------+--------+
| VERY SMOOTH (0.1691) |  600 |   6356547 |  2528 | 4.5802e+09 |  5.22% |
|      SMOOTH (0.2033) |  600 |   8369783 |  1282 | 1.7640e+09 |  6.45% |
|    ORDINARY (0.2225) |  600 |  11622440 |   698 | 6.9165e+08 |  8.00% |
|       ROUGH (0.2148) | 1200 |  26806669 |   564 | 2.4231e+08 |  7.50% |
|  VERY ROUGH (0.1903) | 1800 |  42531861 |   259 | 7.0131e+07 | 12.20% |
+----------------------+------+-----------+-------+------------+--------+

From those results, one can work out the amount of time one would normally spend in each class (it will be more in the smoother classes because they yield more puzzles) and the optimum time one should spend in each class. Skipping the boring details of the calculation, we end up at:
Code: Select all
+----------------------+------------+------------+
|    Class Name (Prob) | Usual Time | Opt'm Time |
+----------------------+------------+------------+
| VERY SMOOTH (0.1691) |   30.97%   |   48.01%   |
|      SMOOTH (0.2033) |   23.52%   |   27.47%   |
|    ORDINARY (0.2225) |   16.94%   |   14.62%   |
|       ROUGH (0.2148) |   14.69%   |    6.56%   |
|  VERY ROUGH (0.1903) |   13.89%   |    3.35%   |
+----------------------+------------+------------+
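
Although the details of the calculation were skipped above, the "Opt'm Time" column can be reconstructed with the textbook stratified-sampling (Neyman) allocation with per-trial costs, taking each class's sigma from its E(RE), trial count and E(nr/grid). The sketch below (Python; not the original code) does exactly that, and it lands on essentially the published percentages and the 4.2551e+18 optimum variance. Whether this is the same route taken in the original calculation is an assumption.
Code: Select all
# Reconstruction of the "Opt'm Time" column under the Neyman-allocation-with-costs
# assumption: time_k proportional to W_k * sigma_k * sqrt(c_k).
import math

# (name, prob W_k, time, trials, E(nr/grid), E(RE))  -- values from the table above
classes = [
    ("VERY SMOOTH", 0.1691,  600,  6356547, 4.5802e9, 0.0522),
    ("SMOOTH",      0.2033,  600,  8369783, 1.7640e9, 0.0645),
    ("ORDINARY",    0.2225,  600, 11622440, 6.9165e8, 0.0800),
    ("ROUGH",       0.2148, 1200, 26806669, 2.4231e8, 0.0750),
    ("VERY ROUGH",  0.1903, 1800, 42531861, 7.0131e7, 0.1220),
]

weights = []
for name, W, seconds, trials, mu, rel_err in classes:
    c = seconds / trials                       # cost per trial in this class
    sigma = rel_err * math.sqrt(trials) * mu   # per-trial std dev, backed out of E(RE)
    weights.append((name, W * sigma * math.sqrt(c)))

total = sum(w for _, w in weights)
for name, w in weights:
    print(f"{name:12s} optimal time share {100*w/total:6.2f}%")

# With that allocation, the variance of the estimate after 1 second of work is
# (sum_k W_k * sigma_k * sqrt(c_k))^2 -- close to the 4.2551e+18 quoted below.
print(f"optimum 1-second variance ~ {total**2:.4e}")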

Finally, from the timings and experimental data, one can work out the expected variance of the E(nr/grid) estimate in the "usual" and "optimum" cases:
Code: Select all
+------------+
| Conclusion |
+------------+--------------------------+
|   Usual 1-second variance: 5.2275e+18 |
| Optimum 1-second variance: 4.2551e+18 |
|           Speed-up factor: 1.2285     |
+---------------------------------------+

We see, in conclusion, that class sampling gives a speed-up of about 23% compared to the usual version of the (28,21)-subsets method.

Oh well, it was an interesting diversion :roll:
Red Ed
 
Posts: 633
Joined: 06 June 2005

Re: Counting minimal puzzles: subsets, supersets, etc.

Postby blue » Thu Jul 04, 2013 1:56 am

Hi Red Ed,

Interesting stuff ... thanks for pursuing it to the end.
Thanks for publishing your random grid code too.
I feel less guilty about not publishing mine, now.

I've been extremely busy over the past week.
I'm finally finding time to review my supersets code ... something I hoped to have finished several days ago.

For Afmob,

Congratulations on finding ways to speed up your supersets code. I had questions to ask, and something to comment on (?) ... but it can wait. Maybe one question for now: it sounded like you were finding a way to merge the "UA hitting" idea with the BFS strategy. Can you give a few details about how you organized that?

Best Regards,
Blue.
blue
 
Posts: 1045
Joined: 11 March 2013

Superset BFS

Postby Afmob » Thu Jul 04, 2013 5:31 am

blue wrote: Maybe one question for now: it sounded like you were finding a way to merge the "UA hitting" idea with the BFS strategy. Can you give a few details about how you organized that?

I do not actually use BFS in the traditional sense in either of my implementations of the subset and superset methods. Let me describe my approach for the superset method:

Let U be an unhit, minimal UA at some recursion level, and let bPlace be a bitmask which tells you which cells can be added and which either lead to non-minimality or have already been dealt with at a lower recursion level (similar to the deadClues mask of McGuire). Now you go through all cells of U (ignoring cells which have been marked in bPlace) and check whether adding them violates minimality. Note that you do not add them at this point; adding a cell is what takes you into the next recursion level. After all cells of U have been checked, you add the allowed cells of U (only one cell at a time, of course) and go into the next recursion level. So technically this is still DFS, but by first checking all cells of U you can use the same bPlace mask at the next recursion level for all allowed clues of U; furthermore, cells which violate minimality do not have to be checked again at the next recursion level if they appear in another unhit UA.
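
A rough sketch of that recursion as described (Python; not Afmob's actual code). find_unhit_minimal_ua and stays_minimal are hypothetical placeholders for the UA search and the minimality test, and the handling of earlier sibling branches via bPlace is a guess at the "already been dealt with" part.
Code: Select all
def find_unhit_minimal_ua(puzzle):
    """Placeholder: return an unhit minimal UA (set of cells), or None if all UAs are hit."""
    raise NotImplementedError

def stays_minimal(puzzle, cell):
    """Placeholder: True iff puzzle + {cell} is still minimal."""
    raise NotImplementedError

def extend(puzzle, b_place, report):
    """puzzle: set of clue cells; b_place: set of cells that must not be added
    (non-minimal or already handled at a lower level), like McGuire's deadClues."""
    ua = find_unhit_minimal_ua(puzzle)      # an unhit, minimal UA of the current puzzle
    if ua is None:                          # no unhit UA left: unique solution, report it
        report(puzzle)
        return
    # Pass 1: test every candidate cell of the UA *without* recursing yet,
    # extending b_place so the knowledge is shared by all branches below.
    allowed = []
    for cell in ua:
        if cell in b_place:
            continue
        if stays_minimal(puzzle, cell):
            allowed.append(cell)
        else:
            b_place = b_place | {cell}      # never try this cell again below this level
    # Pass 2: branch (DFS) on the allowed cells, one at a time, reusing b_place.
    for i, cell in enumerate(allowed):
        extend(puzzle | {cell},
               b_place | set(allowed[:i]),  # don't revisit earlier siblings
               report)
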
Afmob
 
Posts: 132
Joined: 28 June 2011

Superset results

Postby Afmob » Mon Jul 08, 2013 9:27 am

Here are the results of the superset method, starting with 26-clue subsets and running for 70 hours. Note that I used 100,000 random grids from Red Ed's unbiased generator.
Code: Select all
589,237,027 samples
 33,164,281 minimal 26 hits
    888,133 valid minimal puzzles

+----+----------+--------------+------------+
| Cl |    Count |  E(nr/grid)  | E(rel err) |
+----+----------+--------------+------------+
| 26 |      728 |   1.399e+015 |    3.71%   |
| 27 |    21536 |   1.533e+015 |    0.93%   |
| 28 |   141725 |   7.204e+014 |    0.45%   |
| 29 |   316354 |   1.664e+014 |    0.35%   |
| 30 |   277249 |   1.944e+013 |    0.38%   |
| 31 |   107763 |   1.219e+012 |    0.57%   |
| 32 |    20637 |   4.376e+010 |    1.20%   |
| 33 |     2036 |   9.157e+008 |    3.27%   |
| 34 |       99 |   1.048e+007 |   12.82%   |
| 35 |        6 |   1.633e+005 |   40.83%   |
+----+----------+--------------+------------+

This confirms blue's results. I started with 26-clue subsets because, prior to this computation, I made three test runs starting with 25-, 26- and 27-clue subsets, and the 26-clue subsets yielded the best results (lowest error estimate) for 30+ clue valid puzzles after 16 hours.

Sadly, it didn't find a 36-clue valid puzzle. The main problem with the superset method remains the low number of valid puzzles you get; that is, most minimal subsets do not yield a valid minimal superset. I had a further look at this, and in some cases there are unhit UA where every additional clue destroys the minimality. Those cases are usually handled quite fast, but there are other cases which aren't as trivial as this one. We need another necessary condition besides minimality of the subset to speed the method up considerably.
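
The "every additional clue destroys the minimality" situation amounts to a simple prune: if some unhit UA has no addable cell left, no extension of the current puzzle can be both valid and minimal, so the whole branch dies at once. A sketch (Python, using the same hypothetical stays_minimal helper as in the earlier sketch; not Afmob's code):
Code: Select all
def branch_is_dead(puzzle, b_place, unhit_uas):
    """unhit_uas: iterable of unhit UA sets for the current puzzle."""
    for ua in unhit_uas:
        if all(cell in b_place or not stays_minimal(puzzle, cell) for cell in ua):
            return True      # this UA can never be hit without breaking minimality
    return False
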
On another note, I made some changes to JSolve which sped up the superset method by about 5%. If anyone is interested, I can go into details.
Last edited by Afmob on Tue Jul 09, 2013 6:28 am, edited 1 time in total.
Afmob
 
Posts: 132
Joined: 28 June 2011

Re: Superset results

Postby Red Ed » Mon Jul 08, 2013 6:35 pm

Afmob wrote:Sadly, it didn't find a 36 clue valid puzzle.

... but you found a few 35s, which is waaay better than I ever managed. Well done!

With E(rel err), are you treating this experiment as 100,000 independent random variables or as 589,237,027? It should be the former, with each r.v. being the mean of the results from ~5892 random subgrids of the same solution grid: the effect will be to worsen the E(rel err) results if you have not already made that correction. It would be interesting to know how much worse, though, as re-using solution grids is certainly attractive from the point of view of pure speed.
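
A sketch of that correction (Python; an illustration, not code from this thread): group the sample values by solution grid, average within each grid, and base the error bar on the spread of the ~100,000 per-grid means. The (grid_id, value) sample format is a hypothetical stand-in for however the raw results are actually stored.
Code: Select all
import math
from collections import defaultdict

def grouped_rel_err(samples):
    """samples: iterable of (grid_id, value) pairs, ~5892 values per solution grid."""
    per_grid = defaultdict(list)
    for grid_id, value in samples:
        per_grid[grid_id].append(value)
    means = [sum(v) / len(v) for v in per_grid.values()]   # one r.v. per grid
    g = len(means)
    overall = sum(means) / g
    var_of_means = sum((m - overall) ** 2 for m in means) / (g - 1)
    # standard error of the overall estimate, relative to it
    return math.sqrt(var_of_means / g) / overall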

(Yet another edit ...) One other thing: have you experimented with ZSolver at all?
Red Ed
 
Posts: 633
Joined: 06 June 2005

Postby Afmob » Mon Jul 08, 2013 7:56 pm

Red Ed wrote:With E(rel err), are you treating this experiment as 100,000 independent random variables or as 589,237,027? It should be the former, with each r.v. being the mean of the results from ~5892 random subgrids of the same solution grid: the effect will be to worsen the E(rel err) results if you have not already made that correction. It would be interesting to know how much worse, though, as re-using solution grids is certainly attractive from the point of view of pure speed.

Damn, I hadn't thought about that. Let me just see if I've got the math right. I calculate the mean and variance every time I've checked 100,000 samples. After the computation I calculate the mean of each "100,000 samples" mean and variance. Then I use the relative error formula with n being 100,000. Is this right? If so, E(nr/grid) would not be affected, but the relative error would be bounded by 1/sqrt(100,000) * sqrt(Var_t(V))/E_t(V), where Var_t and E_t are the true variance and mean. Hence it does not converge to zero if I use those 100,000 grids for an unlimited time. Of course it can't be zero, as I haven't used all possible grids, but the bias of the grids seems to have a small impact on the superset method, so I would guess that this formula is too crude. One more thought: the relative error I've computed would be correct if I wanted to estimate the mean number of minimals for those 100,000 grids, right?

I use ZhouSolve for the subset method, as I only need to check the minimality of valid puzzles, whereas the superset method requires checking the minimality of multiple-solution puzzles. Because of this I need to be able to directly manipulate the candidates of a cell, which is easy with JSolve; I haven't figured out how to do this with ZhouSolve, as the code is rather obscure in comparison to JSolve.

Edit: I think my description of calculating the proper error estimate is off. Instead of using the mean after every 100,000 samples, I need to calculate the mean for each of the 100,000 grids, and the mean of the squares of those per-grid means, to calculate the variance.
Afmob
 
Posts: 132
Joined: 28 June 2011

Re:

Postby Red Ed » Mon Jul 08, 2013 9:44 pm

Afmob wrote:Edit: I think my description of calculating the proper error estimate is off. Instead of using the mean after every 100,000 samples I need to calculate the mean for each of those 100,000 samples and the mean of the squares to calculate the variance.

Yes, something like that. I started typing a detailed response but messed it up. I'll return to it tomorrow evening when I've had less wine ... :oops:
Red Ed
 
Posts: 633
Joined: 06 June 2005

Re: Superset results

Postby blue » Tue Jul 09, 2013 6:19 am

Hi Afmob,

Afmob wrote:
Code: Select all
589,237,027 samples
 33,164,281 minimal 26 hits
    888,133 valid minimal puzzles

+----+----------+--------------+------------+
| Cl |    Count |  E(nr/grid)  | E(rel err) |
+----+----------+--------------+------------+
| 26 |      728 |  1.5094e+015 |    3.71%   |
| 27 |    21536 |  1.5166e+015 |    0.93%   |
| 28 |   141725 |  7.1639e+014 |    0.45%   |
| 29 |   316354 |  1.6510e+014 |    0.35%   |
| 30 |   277249 |  1.9411e+013 |    0.38%   |
| 31 |   107763 |  1.2278e+012 |    0.57%   |
| 32 |    20637 |  4.1835e+010 |    1.20%   |
| 33 |     2036 |  8.8412e+008 |    3.27%   |
| 34 |       99 |  1.8003e+007 |   12.82%   |
| 35 |        6 |  3.2147e+005 |   40.83%   |
+----+----------+--------------+------------+

This confirms blue's results.

A little too closely maybe :!:

Your numbers look to be these:

Code: Select all
+----+----------+--------------+
| Cl |    Count |  E(nr/grid)  |
+----+----------+--------------+
| 26 |      728 |  1.3988e+015 |
| 27 |    21536 |  1.5326e+015 |
| 28 |   141725 |  7.2041e+014 |
| 29 |   316354 |  1.6635e+014 |
| 30 |   277249 |  1.9439e+013 |
| 31 |   107763 |  1.2186e+012 |
| 32 |    20637 |  4.3757e+010 |
| 33 |     2036 |  9.1572e+008 |
| 34 |       99 |  1.0477e+007 |
| 35 |        6 |  1.6328e+005 |
+----+----------+--------------+

They do agree.

Cheers,
Blue.

P.S. I've been running about the same number of samples, across 3 cores, with all random grids, since seeing your post.
It should finish in ~2 hours.
Your code looks to be even faster now :!: ... good job.
blue
 
Posts: 1045
Joined: 11 March 2013

Postby Afmob » Tue Jul 09, 2013 6:32 am

Hi blue,

thanks for noting this mistake! I usually copy & paste former tables and then change the numbers accordingly, but it seems that this time I forgot one column. The main speed-up is definitely removing the biased generator and using a collection of given (unbiased) grids.
Afmob
 
Posts: 132
Joined: 28 June 2011

Re:

Postby Red Ed » Tue Jul 09, 2013 7:03 am

Afmob wrote:The main speed-up is definitely removing the biased generator and using a collection of given (unbiased) grids.

And this is where we need to be careful. As soon as the collection of grids is a fixed subset of the whole, your estimator (considered as a random variable) is biased. We don't know how biased ... at 100K grids there's probably hardly anything to worry about, but the fact is we just don't know. Asymptotically, your results will converge to the number-of-minimals distribution of those 100K grids, which is different (but who knows by how much) from that of the whole set of 6.67e21.

The same's true of my ten-minute tests, but there I was only trying to illustrate the performance of E(rel err). Here, it seems you're interested in improving the existing results on the number-of-clues distribution, so bias matters.

So, to amortise the cost of grid generation, let n, the number of independent solution grids, grow without limit, but keep r, the number of repeats on a single grid, fixed (r=10? 20? 100?). This will have worse variance than the usual r=1 case by a factor like 1 + (r-1)*epsilon, where epsilon is something small related to correlation coefficients. You have to trade that, experimentally, against the speed improvement you get for larger r.

To get a handle on epsilon, you might try (n=1 million, r=100) versus (n=100 million, r=1). I've no idea what the result will be, btw, so very interested to see how this goes.
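
A sketch of how that comparison could be turned into numbers (Python; an illustration, with a purely hypothetical epsilon value): assuming Var_repeats = Var_r1 * (1 + (r-1)*epsilon) when the two runs use the same total number of samples, the pair of measured variances gives epsilon directly, and from it the variance inflation and the effective number of independent samples for any other r.
Code: Select all
def epsilon_from_runs(var_r1, var_repeats, r):
    """var_r1: variance of the estimate from the (r=1) run;
    var_repeats: variance from the run with r repeats per grid (same total samples)."""
    return (var_repeats / var_r1 - 1.0) / (r - 1)

def design_effect(r, epsilon):
    """Variance inflation factor for r repeats per grid."""
    return 1.0 + (r - 1) * epsilon

def effective_samples(total_samples, r, epsilon):
    return total_samples / design_effect(r, epsilon)

# e.g. with a hypothetical epsilon of 0.002, re-using each grid r=100 times
# costs a factor of about 1.2 in variance:
print(design_effect(100, 0.002))                        # 1.198
print(effective_samples(1_000_000 * 100, 100, 0.002))   # ~83.5 million effective samples
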
Red Ed
 
Posts: 633
Joined: 06 June 2005

Re: Counting minimal puzzles: subsets, supersets, etc.

Postby blue » Tue Jul 09, 2013 8:31 am

Here are the results from my run with all random grids.
It looks like they're in fine agreement -- 34 is a bit "iffy".
That's probably to be expected though, with sqrt(1/100000) = 0.32%

Code: Select all
588590741 samples
33120688 size 26 hits

+----+----------+--------------+------------+
| Cl |    Count |  E(nr/grid)  | E(rel err) |
+----+----------+--------------+------------+
| 26 |      765 |  1.4715e+015 |    3.62%   |
| 27 |    21417 |  1.5258e+015 |    0.92%   |
| 28 |   141767 |  7.2141e+014 |    0.46%   |
| 29 |   315614 |  1.6615e+014 |    0.35%   |
| 30 |   278390 |  1.9540e+013 |    0.39%   |
| 31 |   108532 |  1.2287e+012 |    0.59%   |
| 32 |    20432 |  4.3370e+010 |    1.19%   |
| 33 |     2159 |  9.7211e+008 |    3.40%   |
| 34 |      148 |  1.5680e+007 |   11.02%   |
| 35 |        5 |  1.3621e+005 |   52.92%   |
+----+----------+--------------+------------+

Regards,
Blue.
blue
 
Posts: 1045
Joined: 11 March 2013

Postby Afmob » Tue Jul 09, 2013 9:39 am

I started programming Red Ed's suggestion, that is, using the per-grid means (and their squares), and the first results show that this doesn't change the overall average (to be expected) or the variance (to be expected?). Hence, the relative error is just "E_rel_f * sqrt(#samples/#grids)", where E_rel_f is the false relative error I computed previously. For small computations you might get a relative error larger than 1, so I am not entirely sure that my calculations are correct. Of course, if #samples = #grids I get identical results.

blue, I noticed the "bumps" in the 34 (and maybe 33?) estimates too but nevertheless our results are in agreement. :D

Edit: Maybe the results aren't so surprising, as usually sqrt(Var(V))/E(V) > 1 for the superset method. For instance, for 29 clues I've got sqrt(Var(V))/E(V) = 83.3.
Afmob
 
Posts: 132
Joined: 28 June 2011

Re:

Postby blue » Tue Jul 09, 2013 4:58 pm

Hi Afmob,

Afmob wrote:Edit: Maybe the results aren't so surprising as usually sqrt(Var(V))/E(V) > 1 for the superset method. For instance for 29 clues I've got sqrt(Var(V))/E(V) = 83.3.

The important (missing) numbers, I think, are the variations in the total number of puzzles at a given size, as the grids vary. If you had perfect estimates for those, for each of the 100000 grids, then the error estimate for the averages over all grids would be sqrt(Var(count) / 100000). When I commented about sqrt(1/100000) being ~0.3%, I was thinking that the standard deviation in the counts (across grids) is probably less than half of the average count. With that, the error due to the 100000-grid subset would be < 0.15%, and difficult to detect, given the sample size.

Red Ed: I'm curious to see what your analysis reveals. It's a nasty issue, given the unknowns :(

Regards,
Blue.
blue
 
Posts: 1045
Joined: 11 March 2013
