Counting minimal puzzles: subsets, supersets, etc.


Re: ~TE(1)

Postby Red Ed » Wed Jun 26, 2013 9:24 pm

blue wrote:Do you get anything similar to that (for ratios) ?
That's a good test; I'll check. Might be a few days, as I want to finish off something slightly tedious first. But many thanks for sharing the code and, perhaps more importantly, the methodology. (Mine was DFS, btw, also with redundancy tracking.)

Re: ~TE(1)

Postby blue » Thu Jun 27, 2013 2:59 am

Hi Ed,

You're entirely welcome.

Red Ed wrote:(Mine was DFS, btw, also with redundancy tracking.)

I should have remembered that it was DFS from this post.

Best Regards,
Blue.

Re: ~TE(1)

Postby blue » Thu Jun 27, 2013 7:00 am

blue wrote:P.S. Pop Quiz for stats buffs:
1) using maximum likelihood estimation, what's the estimated average hit rate, given 2 hits in 4 hours, 2 hits in 8 hours, 3 hits in 8 hours, 0 hits in 10 hours, 1 hit in 10 hours, and 3 hits in 10 hours?
2) given the value from (1), what are the probabilities of getting a) >= 2 hits in 4 hours, b) <= 1 hit in 20 hours ?

Edit: fixed two erroneous 10-hour times

The answers to the (proper/fixed) questions were:
1) one hit every 4.5 hours
2a) one in 4.5
2b) one in 15.1
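
For anyone who wants to check the arithmetic: under the assumption that hits follow a homogeneous Poisson process, the maximum-likelihood rate is simply total hits over total observation time, and the two probabilities follow from the Poisson distribution. A small illustrative program (not blue's code):

Code: Select all
// Sketch of the Poisson reasoning behind the quiz answers (illustrative only).
#include <cstdio>
#include <cmath>

int main() {
    double hits  = 2 + 2 + 3 + 0 + 1 + 3;      // 11 hits ...
    double hours = 4 + 8 + 8 + 10 + 10 + 10;   // ... in 50 hours
    double rate  = hits / hours;               // MLE: hits per hour

    std::printf("1)  one hit every %.1f hours\n", 1.0 / rate);

    // 2a) P(at least 2 hits in 4 hours) = 1 - e^(-mu)(1 + mu), mu = 4*rate
    double mu = 4 * rate;
    std::printf("2a) one in %.1f\n", 1.0 / (1.0 - std::exp(-mu) * (1.0 + mu)));

    // 2b) P(at most 1 hit in 20 hours) = e^(-mu)(1 + mu), mu = 20*rate
    mu = 20 * rate;
    std::printf("2b) one in %.1f\n", 1.0 / (std::exp(-mu) * (1.0 + mu)));
    return 0;
}
// prints 4.5, 4.5 and 15.1 -- matching the answers above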

After another 48 hours (2 x 3 x 8: 3 cores, 8 hours, twice), the answers are looking like:
1) one "subgrid with non-T&E(1) puzzles" hit every 5.9 hours
2a) one in 6.7
2b) one in 6.9

The numbers are looking like this now:

Code: Select all
cpu time = ~88 hrs

2178445807 samples
22163963 size 30 hits

~T&E(1)
+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 22 |         1 |  2.9168e+003 |  100.00%   |
| 23 |        10 |  2.1511e+005 |   40.00%   |
| 24 |         8 |  1.4259e+006 |   39.53%   |
+----+-----------+--------------+------------+

minimal puzzles
+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 20 |     44739 |  3.2089e+006 |    0.84%   |
| 21 |   2937562 |  1.2852e+009 |    0.19%   |
| 22 |  55294670 |  1.6128e+011 |    0.08%   |
| 23 | 320696375 |  6.8986e+012 |    0.05%   |
| 24 | 597162205 |  1.0644e+014 |    0.03%   |
| 25 | 368951444 |  6.2472e+014 |    0.03%   |
| 26 |  78258189 |  1.4841e+015 |    0.04%   |
| 27 |   5847559 |  1.5248e+015 |    0.08%   |
| 28 |    154019 |  7.2291e+014 |    0.34%   |
| 29 |      1331 |  1.6555e+014 |    3.00%   |
| 30 |         2 |  1.2936e+013 |   70.71%   |
+----+-----------+--------------+------------+

I've got one more 3x12 hour run going. It has produced a pair of 26's from the same solution grid, and one 25.
I've figured out that I can probably get better results for sizes 25 & 26, by stopping the BFS early -- at 25 for a while, and then at 26 -- and sticking to a size 30 initial subgrid. Getting results for size 27 or higher, unfortunately, looks like a long-term proposition.

denis_berthier wrote:As for the apparently inconsistent stats, I don't know precisely all the conditions of your runs. But if the ~T&E puzzles are very unevenly distributed among the complete grids, it may be due to the different choices of these.

It would be interesting to test a few of the grids where results were found, against a few random grids -- with a fair amount of time spent on each grid. In 20 hours, the number of grid samples is ~1/4 the number of ED grids. It seems like the counts would need to be wildly different for different grids, for that to be the reason. On the other hand, getting results that are highly improbable in early runs, seems to be the story of my life, when it comes to this kind of thing.

I don't know if you're still interested in these, but here are the puzzles from the last 48 (cpu) hours:

Code: Select all
(22)
.62.........4.7....1..8.93.5...7......3..........5..79.....92658...4.3..........8 SER=9.3

(23)
1.32...8..4.7.9.....7.8....7..3.....8.195...6..6...8..........2......9.......3.64 SER=4.0
37.....2...2.....71......8..3.45.........8.4..4..6....6....53.....9..75.....86..2 SER=7.3
37.....2...2.....71......8..3.45.2.......8.4..4..6....6....53.....9..75.....86... SER=4.0
37.........2.....71......8..3.45.2.......8.4..4..6....6....53.....9..75.....86..2 SER=4.0
.1..9.8....6.3.......7..4.9..9....2....5....1.6....9.32.59.1.........2..3....6.5. SER=9.0

(24)
....7.21.....1...6...6.5.......8..7...9..6..8.36.....4.95..2..7..37.....7...5.4.. SER=9.4
8..2715....3.8.......5...8..5.......9.6.3..1..7.12............3...85.12...2....7. SER=9.3
.....2..88.3...1.......6.4.9..2....6..15......8..4..7.1..7....5.4968......2.....4 SER=9.6
.....2..8..3...1.....8.6.4.9..2....6..15......8..4..7.1..7....5.4968......2.....4 SER=9.6
.1..9.8.2..6.3......37..4.9..9....2....5....1.6....9.32.59.1.........2.......6.5. SER=9.0

There are a lot of "multiple hits in the same subgrid" puzzles there -- a 3x23, a 2x24, and a 23+24.
That's new.

Regards,
Blue.

Re: ~TE(1)

Postby denis_berthier » Thu Jun 27, 2013 10:02 am

blue wrote:
denis_berthier wrote:As for the apparently inconsistent stats, I don't know precisely all the conditions of your runs. But if the ~T&E puzzles are very unevenly distributed among the complete grids, it may be due to the different choices of these.

It would be interesting to test a few of the grids where results were found, against a few random grids -- with a fair amount of time spent on each grid. In 20 hours, the number of grid samples is ~1/4 the number of ED grids. It seems like the counts would need to be wildly different for different grids, for that to be the reason.

But the counts could indeed be wildly different. It's not only between different grids but also between different 30-clue sub-grids.

blue wrote:I don't know if you're still interested in these

I am. All this is puzzling. It's certainly worth digging a little more.


Here are the BpB classifications of the new ones.
Code: Select all
(22)
.62.........4.7....1..8.93.5...7......3..........5..79.....92658...4.3..........8 SER=9.3     in gT&E, i.e. p=1

(23)
1.32...8..4.7.9.....7.8....7..3.....8.195...6..6...8..........2......9.......3.64 SER=4.0     in gT&E, i.e. p=1    - solvable by NT + HT
37.....2...2.....71......8..3.45.........8.4..4..6....6....53.....9..75.....86..2 SER=7.3     in gT&E, i.e. p=1    - solvable by HT + Whip[4]
37.....2...2.....71......8..3.45.2.......8.4..4..6....6....53.....9..75.....86... SER=4.0     in T&E(B2), i.e. p=2    - solvable by NT + HT
37.........2.....71......8..3.45.2.......8.4..4..6....6....53.....9..75.....86..2 SER=4.0     in gT&E, i.e. p=1    - solvable by HT
.1..9.8....6.3.......7..4.9..9....2....5....1.6....9.32.59.1.........2..3....6.5. SER=9.0     in T&E(B2), i.e. p=2    - solvable by HT + Whip[4]

(24)
....7.21.....1...6...6.5.......8..7...9..6..8.36.....4.95..2..7..37.....7...5.4.. SER=9.4     in gT&E, i.e. p=1
8..2715....3.8.......5...8..5.......9.6.3..1..7.12............3...85.12...2....7. SER=9.3     in T&E(B2), i.e. p=2
.....2..88.3...1.......6.4.9..2....6..15......8..4..7.1..7....5.4968......2.....4 SER=9.6     in gT&E, i.e. p=1
.....2..8..3...1.....8.6.4.9..2....6..15......8..4..7.1..7....5.4968......2.....4 SER=9.6     in gT&E, i.e. p=1
.1..9.8.2..6.3......37..4.9..9....2....5....1.6....9.32.59.1.........2.......6.5. SER=9.0     in T&E(B2), i.e. p=2


As before, the obstruction to T&E for the "easiest" ones is due to Triplets (I didn't check the other ones). It's interesting to see that this kind of obstruction appears among the first ones.
Seeing puzzles in gT&E was what I expected. But I'm still surprised by the proportion of T&E(B2):
- 4 p=2 for 6 p=1
- if we join these with the previous ones, we get 6 p=2 for 10 p=1


blue wrote:There are a lot of "multiple hits in the same subgrid" puzzles there -- a 3x23, a 2x24, and a 23+24.
That's new.

Yes, and it seems to suggest that different 30-clue grids can have very different non-T&E stats.

Superset method

Postby Afmob » Thu Jun 27, 2013 4:46 pm

I had a go at the superset method, and there are two things to note here. First, not every minimal subset has a minimal valid superset, so we won't get as many puzzles as with the subset method. Furthermore, you should use Unavoidable Sets (UAs) in the implementation; otherwise the method is far too slow. I used an algorithm similar to Gary McGuire's famous one to hit the UAs. The main difference is that when all UAs have been hit, I check whether the puzzle has a unique solution and, if so, whether it's minimal. If it has multiple solutions, I add another clue.
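
To make the structure concrete, here is a rough sketch of such a UA-hitting recursion. This is not Afmob's actual code: the solver hooks countSolutions and isMinimalValid are hypothetical placeholders, and the bookkeeping McGuire uses to avoid enumerating the same clue set twice is omitted.

Code: Select all
#include <bitset>
#include <cstddef>
#include <vector>

using Mask = std::bitset<81>;   // one bit per grid cell

// Hypothetical solver hooks (assumed, not Afmob's actual interfaces):
// countSolutions returns the number of solutions (capped at 2) and, when a
// second solution exists, fills 'diff' with the cells where it differs from
// the designated solution; isMinimalValid tests uniqueness plus minimality.
int  countSolutions(const Mask& clues, Mask& diff);
bool isMinimalValid(const Mask& clues);

// Every unavoidable set (UA) must contain at least one clue: branch over the
// cells of the first unhit UA, and call the solver once all known UAs are hit.
void extend(Mask clues, const std::vector<Mask>& uas, std::size_t i,
            std::size_t maxClues, std::vector<Mask>& found)
{
    if (clues.count() > maxClues) return;
    while (i < uas.size() && (uas[i] & clues).any()) ++i;   // skip hit UAs
    if (i < uas.size()) {
        for (int c = 0; c < 81; ++c)                        // hit uas[i]
            if (uas[i][c]) {
                Mask next = clues; next.set(c);
                extend(next, uas, i + 1, maxClues, found);
            }
        return;
    }
    Mask diff;
    if (countSolutions(clues, diff) == 1) {                 // unique solution
        if (isMinimalValid(clues)) found.push_back(clues);
    } else {                                                // multiple: 'diff'
        for (int c = 0; c < 81; ++c)                        // is a fresh UA
            if (diff[c]) {
                Mask next = clues; next.set(c);
                extend(next, uas, i, maxClues, found);
            }
    }
}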

Here are my results after 10 minutes, starting with size 27 subsets and trying to find minimals up to size 31:
Code: Select all
1526247 samples
28263   minimal 27 hits

+----+-----------+--------------+
| Cl |     Count |  E(nr/grid)  |
+----+-----------+--------------+
| 27 |         0 |            0 |
| 28 |        16 |  8.6347e+014 |
| 29 |        41 |  1.5260e+014 |
| 30 |        57 |  2.1215e+013 |
| 31 |        27 |  1.2967e+012 |
+----+-----------+--------------+

Note that I haven't implemented the error estimate yet, but the numbers of 30 and 31 minimals look quite promising. I started another run using minimal 28 (sub)puzzles, trying to find minimals up to size 33. It will run for about 16 hours, so I'll report the results tomorrow.

Edit: I made another 10-minute run, this time with estimates:
Code: Select all
1613063 samples
29984   minimal 27 hits

+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 27 |         1 |   1.430e+015 |  100.00%   |
| 28 |        26 |   1.328e+015 |   24.33%   |
| 29 |        51 |   1.796e+014 |   17.43%   |
| 30 |        56 |   1.972e+013 |   17.86%   |
| 31 |        31 |   1.409e+012 |   25.19%   |
+----+-----------+--------------+------------+

Supersets

Postby Afmob » Fri Jun 28, 2013 6:13 am

Here are the results after 16 hours:
Code: Select all
274232530 samples
1120846   minimal 28 hits

+----+-----------+--------------+
| Cl |     Count |  E(nr/grid)  |
+----+-----------+--------------+
| 28 |        47 |  7.6230e+014 |
| 29 |       344 |  1.9239e+014 |
| 30 |       585 |  2.1812e+013 |
| 31 |       359 |  1.2954e+012 |
| 32 |        98 |  4.4201e+010 |
| 33 |        13 |  8.8840e+008 |
+----+-----------+--------------+

There are not as many counts as I'd hoped, and the error estimate is missing, so don't take those numbers too seriously. I'll repeat this computation later with the error estimate.

Re: Supersets

Postby Red Ed » Fri Jun 28, 2013 6:51 am

Afmob wrote:There are not as many counts as I'd hoped, and the error estimate is missing, so don't take those numbers too seriously. I'll repeat this computation later with the error estimate.

On the face of it, you have a much quicker implementation than I had: I'm not sure I ever found a 33. It would be really interesting to see what the error estimates say. Much to learn here ... efficient algorithms for exploring a subset or superset could be a topic in its own right.

Just one thing to check: what does "274232530 samples, 1120846 minimal 28 hits" mean? Is it that, from 274M initial 28-clue subgrids, you found that 1.12M had the property that no clue is implied by the others (but usually not that it was a valid puzzle)?

Postby Afmob » Fri Jun 28, 2013 7:10 am

Red Ed wrote:Is it that, from 274M initial 28-clue subgrids, you found that 1.12M had the property that no clue is implied by the others (but usually not that it was a valid puzzle)?

Yes: if a set is not minimal, then no superset of it is minimal either.
Red Ed wrote:On the face of it, you have a much quicker implementation than I had: I'm not sure I ever found a 33.

Thank you :). Like I said, the main idea is to use an algorithm to hit the UAs, which are determined by computing all (or up to 10,000) solutions of the minimal 28 puzzle. I'll start the computation today and it will run over the weekend, so we'll have better results on Monday.
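
The connection between solutions and UAs is direct: each alternative solution differs from the designated one on some set of cells, and a clue must land inside that set, or the alternative survives as a second solution. A minimal sketch of that extraction step (names invented for illustration):

Code: Select all
#include <array>
#include <bitset>
#include <vector>

using Mask = std::bitset<81>;
using Grid = std::array<int, 81>;

// Each alternative solution of the subgrid yields an unavoidable set: the
// cells where it differs from the designated solution. Any valid puzzle with
// the designated solution must contain a clue inside every such set.
std::vector<Mask> uasFromSolutions(const Grid& designated,
                                   const std::vector<Grid>& alternatives)
{
    std::vector<Mask> uas;
    for (const Grid& alt : alternatives) {
        Mask u;
        for (int c = 0; c < 81; ++c)
            if (alt[c] != designated[c]) u.set(c);
        uas.push_back(u);
    }
    return uas;  // in practice one would prune supersets, keeping small UAs
}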

Re:

Postby blue » Fri Jun 28, 2013 4:24 pm

Hi Afmob,

It's great to see someone else taking an interest !

I had some old code for the supersets method. When I went back to it, it had a bug that I needed to fix. Apparently, I was part of the way through trying a new speedup idea when I dropped it. It was full of "#ifdefs" for other ideas that I had added in stages. I'm still trying to figure out why some parts of the code are present, including the part that needed the bug fix.

It uses a different method than yours -- a BFS that (in part) involves UA sets.
The comments in the code suggest that they provided a marvelous speedup when they were introduced.

Before you do your long run, do you have time to try a 25/36 approach (or 24/35 or 24/34) ?

For the 28/33 setup that you used, I get this (today):

Code: Select all
SUPERSET_MODE

cpu time: 4.5 hrs

280396197 samples
1152769 size 28 hits

+----+----------+--------------+------------+
| Cl |    Count |  E(nr/grid)  | E(rel err) |
+----+----------+--------------+------------+
| 28 |       46 |  7.2968e+014 |   14.74%   |
| 29 |      298 |  1.6300e+014 |    6.79%   |
| 30 |      538 |  1.9619e+013 |    5.60%   |
| 31 |      338 |  1.1928e+012 |    7.22%   |
| 32 |      110 |  4.8523e+010 |   12.79%   |
| 33 |        9 |  6.0153e+008 |   36.85%   |
+----+----------+--------------+------------+

For 25/36, I got this a few days ago.

Code: Select all
SUPERSET_MODE

cpu time: 4 hrs

8906949 samples
1111940 size 25 hits

+----+----------+--------------+------------+
| Cl |    Count |  E(nr/grid)  | E(rel err) |
+----+----------+--------------+------------+
| 25 |        6 |  3.5410e+014 |   40.82%   |
| 26 |      665 |  1.5094e+015 |    5.85%   |
| 27 |     9020 |  1.5166e+015 |    2.10%   |
| 28 |    39767 |  7.1639e+014 |    1.28%   |
| 29 |    66446 |  1.6510e+014 |    1.15%   |
| 30 |    46871 |  1.9411e+013 |    1.33%   |
| 31 |    15318 |  1.2278e+012 |    2.01%   |
| 32 |     2386 |  4.1835e+010 |    3.84%   |
| 33 |      208 |  8.8412e+008 |   11.85%   |
| 34 |       16 |  1.8003e+007 |   29.32%   |
| 35 |        1 |  3.2147e+005 |  100.00%   |
+----+----------+--------------+------------+

25/3? gave the best results for my code.

Best Regards,
Blue.

Re: ~TE(1)

Postby blue » Fri Jun 28, 2013 4:49 pm

Hello Denis,

blue wrote:I've got one more 3x12 hour run going. It has produced a pair of 26's from the same solution grid, and one 25.
I've figured out that I can probably get better results for sizes 25 & 26, by stopping the BFS early -- at 25 for a while, and then at 26 -- and sticking to a size 30 initial subgrid. Getting results for size 27 or higher, unfortunately, looks like a long-term proposition.

I've realized that the possibility of "better results" that I mentioned was a red herring -- better yes, but not by much.
These are the ~TE(1) puzzles from the last run. They include a 2x26 and a 2x23 hit.

Code: Select all
(23)
.....2.....3......19....2462..5..8...3..6..7..41..39......4...7..6..1.......9..1. SER=7.1
5......84.8.65..3......9...1.6.2......89...6............1..492..4.891...6........ SER=9.0
5......84.8.65..3......9...1.6.2......89...6............1..492..4.89....6.......1 SER=9.0

(24)
....4.28...1.5.....5..78..64.5.1.9...7..........7......4......25...2..7..3..9.5.1 SER=7.6

(25)
..2..8............539.6.....56....497....9.5....4...63.17.2.4...6.9........147... SER=5.0

(26)
.9...4.8.......7....86..9.1.2.1....96..7..21.75....3.6.1..9...7.4..1..6...7...... SER=9.0
.9...4.8.......7....86..9.1.2.1....96..7..2..751...3.6.1..9...7.4..1..6...7...... SER=9.0

I think I'll stop now ... interests shifting towards deciphering my old supersets code.
After several runs now, I think the statistical oddities in my initial runs were within the bounds of reason.

These are the final numerical results for ~TE(1) and minimal puzzles:

Code: Select all
cpu time: ~124 hours

3052721796 samples
31058344 size 30 hits

+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 22 |         1 |  2.0814e+003 |  100.00%   |
| 23 |        13 |  1.9956e+005 |   35.25%   |
| 24 |         9 |  1.1447e+006 |   36.85%   |
| 25 |         1 |  1.2083e+006 |  100.00%   |
| 26 |         2 |  2.7066e+007 |  100.00%   |
+----+-----------+--------------+------------+

+----+-----------+--------------+------------+
| Cl |     Count |  E(nr/grid)  | E(rel err) |
+----+-----------+--------------+------------+
| 20 |     62837 |  3.2162e+006 |    0.70%   |
| 21 |   4130155 |  1.2895e+009 |    0.16%   |
| 22 |  77574080 |  1.6147e+011 |    0.07%   |
| 23 | 449503061 |  6.9001e+012 |    0.04%   |
| 24 | 836876390 |  1.0644e+014 |    0.03%   |
| 25 | 517172904 |  6.2490e+014 |    0.03%   |
| 26 | 109672728 |  1.4842e+015 |    0.03%   |
| 27 |   8192178 |  1.5244e+015 |    0.07%   |
| 28 |    215767 |  7.2270e+014 |    0.29%   |
| 29 |      1843 |  1.6358e+014 |    2.55%   |
| 30 |         5 |  2.3078e+013 |   44.72%   |
+----+-----------+--------------+------------+

(Not very good for <= 22 or >= 25).

~500 million 25's tested, before a ~TE(1) puzzle was produced !
The 2x26 hit appeared after ~100 million.
The proper averages (for 25 & 26) must be in the "one in a few hundred million" area.

Regards,
Blue.

Postby Afmob » Fri Jun 28, 2013 4:51 pm

Hi blue,

you've got some pretty amazing results, and your code is way faster than mine (for 28/33, you get the same results in 4.5 hours where I needed 16 hours). I tried starting a 25/36 superset search but stopped, since it would take way too long. My code is optimised for larger subsets, since it tries to find all minimal UAs, which is only possible (in a short amount of time) if the subset has few solutions. I'll try to code a BFS search like you suggest; maybe it will help? At least for the subset method, your suggestion made my code three times faster.

Another bottleneck in my algorithms is the random grid generation. I coded the linear probing method, and in 10 minutes I only generated about 10 million solution grids, whereas Red Ed got about 21 million and you even got 47 million.

Re:

Postby blue » Fri Jun 28, 2013 6:18 pm

Hi Afmob,

Afmob wrote:I'll try to code a BFS search like you suggest; maybe it will help? At least for the subset method, your suggestion made my code three times faster.

This is a bit of a ramble, but ...

The things I'm tracking are a bitmask of solvable cells, and a bitmask of cells that, if added as clues, would cause one of the other clues to be redundant. Like with the subsets code, there is some code to feed the "solvable cells" and "would cause a redundancy" masks forward to the next depth. An early (?) test on puzzles at a given depth is that if ancestor puzzles caused both flags to be set for the same cell, then the new puzzle isn't minimal. I think the rest of the minimality testing actually comes last (now) in the tests at a given depth, since, minimal or not, "solvable cell" information from the puzzle can be fed forward to the next depth.

The solver is called at all stages in the BFS process, and the code has some bits to track recent solutions from "is this cell solvable" tests ... "2nd" solutions where that cell value differs from the value in the designated solution. The "recent history" lists let me skip solver calls at times, and the cells where the other solution differs from the designated solution represent (additional) cells that can be flagged as "not solvable" -- cells that can be skipped in the process of finalizing a mask of solvable cells. UA sets are used too, to do an early collection of non-solvable cells.

The main speedup from UA sets (I think) was that if a UA set gets to the point where none of its cells has been added as a clue, and every one of its cells is flagged as causing a redundancy if added, then the puzzle (minimal or not) can have no extension to a complete minimal puzzle. I don't use all of the UA sets, just the ones that are disjoint from the initial subgrid and can be found in a quick initial scan.
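
In data-structure terms, the state being described might look something like the sketch below. The names are invented and the details of blue's code are not public; the point is the monotonicity that makes the feed-forward legal.

Code: Select all
#include <bitset>

using Mask = std::bitset<81>;   // one bit per grid cell

// Per-puzzle state carried from one BFS layer to the next (illustrative only).
struct Node {
    Mask clues;           // the current subgrid
    Mask solvable;        // cells whose values the clues already force
    Mask makesRedundant;  // cells that, if added as clues, would make some
                          // existing clue redundant
};

// Both flag masks are monotone: adding clues can only force more cells and
// create more redundancy-causing cells. A child therefore inherits its
// parent's bits, and the solver is consulted only for still-unflagged cells --
// that is the "feed forward to the next depth" saving.
inline void inherit(Node& child, const Node& parent) {
    child.solvable       |= parent.solvable;
    child.makesRedundant |= parent.makesRedundant;
}

// Per the description above, a cell flagged in both masks permits an early
// non-minimality rejection of the extended puzzle.
inline bool rejectEarly(const Node& n) {
    return (n.solvable & n.makesRedundant).any();
}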

My code is still somewhat of a mess, so I can't say much more than that -- fingers crossed that I didn't mislead you above.
I need to find time this weekend to go over it in detail.

Good luck, and have fun !

Afmob wrote:Another bottleneck in my algorithms is the random grid generation. I coded the linear probing method, and in 10 minutes I only generated about 10 million solution grids, whereas Red Ed got about 21 million and you even got 47 million.

This seems to be less of a problem for the supersets method.
I have some (very old) code that can generate ~600 (correction) thousand grids per second.
It uses some large lookup tables, produced from a method for counting the ~6e+21 solution grids.
(If you can count them, you can reverse engineer the process, and use it to generate them with uniform probability.)
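
The general principle behind that remark: if N(p) is the exact number of complete grids extending a partial object p, then repeatedly choosing the next extension v with probability N(p + v)/N(p) produces a uniformly random complete grid. A generic sketch (all names invented; blue's lookup tables would amount to precomputed values of N):

Code: Select all
#include <cstdint>
#include <random>

// Generic "exact counter -> uniform sampler" trick. Assumed interface:
//   P          : partial-object type with an extended(v) method
//   count(p)   : exact number of completions of p (a complete object counts
//                itself, so count(complete) == 1)
//   choices(p) : legal next extensions of p (empty exactly when p is complete)
// Note: for full Sudoku grids the total (~6.67e+21) overflows 64 bits, so a
// real implementation needs a wider integer; uint64_t keeps the sketch short.
template <class P, class Count, class Choices, class Rng>
P sampleUniform(P p, Count count, Choices choices, Rng& rng)
{
    for (;;) {
        auto opts = choices(p);
        if (opts.empty()) return p;                       // complete: done
        std::uniform_int_distribution<std::uint64_t> pick(0, count(p) - 1);
        std::uint64_t r = pick(rng);
        // The completions of p are partitioned among the options, so exactly
        // one option v is selected, with probability count(p+v)/count(p).
        for (const auto& v : opts) {
            std::uint64_t n = count(p.extended(v));
            if (r < n) { p = p.extended(v); break; }
            r -= n;
        }
    }
}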

Best Regards,
Blue.

P.S. I forgot to add that I was very impressed that, on a first try, you were able to get the results that you did in 16 hours.
Very well done!
It took me a long time to get my code to anywhere near that point.

Postby Afmob » Fri Jun 28, 2013 10:21 pm

Hi blue,
That's a lot of food for thought, so here are my thoughts on two things:

- The idea of checking the difference of the 2nd solution against the original solution is useful when all (found) UAs have already been hit. This is especially useful when you start with a small subset (e.g. < 27 clues), since you will probably not find all of its minimal UAs in a reasonable time. But if I start with larger subsets, I pretty much find all minimal UAs with the method I've described before, so when all UAs have been hit, I don't need to add further clues and I only need to check minimality. That's also the reason why the speed difference of our algorithms is smaller when we start with a large subset.
- That's why I'm more interested in the redundancy test (that is, adding a new clue makes a former clue redundant). I'll try to implement it later. Did you use a special method to check this, or did you just use your minimality checker for multi-solution puzzles?

On another note, is your random grid generation code available somewhere?

Re:

Postby blue » Fri Jun 28, 2013 11:27 pm

Hi Afmob,

Afmob wrote:That's a lot of food for thought, so here are my thoughts on two things:

Yes, and I don't know whether I should have laid my method out as I did.
More fun to discover things on your own, eh ?

Afmob wrote:- The idea of checking the difference of the 2nd solution against the original solution is useful when all (found) UAs have already been hit. This is especially useful when you start with a small subset (e.g. < 27 clues), since you will probably not find all of its minimal UAs in a reasonable time. But if I start with larger subsets, I pretty much find all minimal UAs with the method I've described before, so when all UAs have been hit, I don't need to add further clues and I only need to check minimality.

I think I understand what you're saying ... but have no time, now, to digest it completely.
With a BFS, you can only add one clue per layer, and that was a constraint that I needed to deal with.
You have given me food for thought, though ... and I hope to find time to consider the implications.
I highly encourage you to develop your own ideas.

Afmob wrote:- That's why I'm more interested in the redundancy test (that is, adding a new clue makes a former clue redundant). I'll try to implement it later. Did you use a special method to check this, or did you just use your minimality checker for multi-solution puzzles?

From what I (vaguely) recall, I kept putting off the minimality test until the final moment. I had something in the code to do an early check for (non-)minimality ... using bitmasks from the preceding layer ... to avoid adding puzzles for the next pass that would be ignored anyway, in the end. I did use puzzles with no chance of being completed to a minimal puzzle, to feed forward information that could be used in the next pass. Part of that would have included the fact that for a non-minimal (and likely incomplete) puzzle ... all of its +1 extensions would be non-minimal.

Re-reading the part that I quoted above ... what I did was use code that would search for a solution that did not include the cell value from the designated solution. It was the same few lines of code that were in my source code for the subsets method. In addition to that, I used the fact that if puzzle P0 was (possibly incomplete, but definitely) non-minimal, then any puzzle of the form (P0 + c) would be non-minimal. That would be the kind of information that was fed forward into the next layer.
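
In other words, the redundancy test reduces to one targeted solver call. A sketch, where hasSolutionAvoiding is a hypothetical stand-in for those few lines of solver code:

Code: Select all
#include <array>
#include <bitset>

using Mask = std::bitset<81>;
using Grid = std::array<int, 81>;

// Hypothetical solver hook: does 'clues' admit any solution whose value at
// 'cell' is NOT 'value'?
bool hasSolutionAvoiding(const Mask& clues, int cell, int value);

// Clue c is redundant in P exactly when P - {c} still forces the designated
// value at c, i.e. when no solution of P - {c} puts a different value there.
bool clueIsRedundant(const Mask& P, int c, const Grid& designated)
{
    Mask rest = P;
    rest.reset(c);
    return !hasSolutionAvoiding(rest, c, designated[c]);
}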

Afmob wrote:On another note, is your random grid generation code available somewhere?

It isn't, and (unfortunately, like my solver code) it won't ever be available :(

I've followed your posts over the last few years, though, and I know that you are quite capable of independent thought, and very adept at it, too! I would guess that you've experimented with code to count the ~6.2e+21 grids on your own. Having a fast generator doesn't seem to be important for the supersets method, and I would encourage you to stick with what you've got, and only in the end consider the portion of time that is devoted to producing the random grids.

Best Regards,
Blue.

Re: ~TE(1)

Postby denis_berthier » Sat Jun 29, 2013 5:10 am

Hi Blue,

Thanks for the new puzzles.
Here is the final analysis for the whole lot.

Code: Select all

format: puzzle     SER      BpB classif      - SpB classif, if finite and puzzle not in B1B=gB=gW        -- other info if any

(22)
.62.........4.7....1..8.93.5...7......3..........5..79.....92658...4.3..........8 SER=9.3      in B1B

(23)
...4...89............1683......47..38..6....1.9.......3...856...52...1....9.....2 SER=8.3     in B1B
.....3....1.4.....9...6..........6.9.2.7...454...9.18..3...4...8.......4..1...896 SER=9.1     in B2B   
....8.6...96..2...2....35...7....3.....84...1.1.7..9.............5...8.7.8.1..49. SER=8.4     in B1B    -- solvable by HT + Whip[6]
.2...65........162..4.....97...............13...1.86.5...5.9..738..6.....5.2..... SER=3.6     in B1B    -- solvable by NT + HT
1.32...8..4.7.9.....7.8....7..3.....8.195...6..6...8..........2......9.......3.64 SER=4.0     in B1B    -- solvable by NT + HT
37.....2...2.....71......8..3.45.........8.4..4..6....6....53.....9..75.....86..2 SER=7.3     in B1B    -- solvable by HT + Whip[4]
37.....2...2.....71......8..3.45.2.......8.4..4..6....6....53.....9..75.....86... SER=4.0     in B2B    -- solvable by NT + HT
37.........2.....71......8..3.45.2.......8.4..4..6....6....53.....9..75.....86..2 SER=4.0     in B1B    -- solvable by HT
.1..9.8....6.3.......7..4.9..9....2....5....1.6....9.32.59.1.........2..3....6.5. SER=9.0     in B2B    -- solvable by HT + Whip[4]
.....2.....3......19....2462..5..8...3..6..7..41..39......4...7..6..1.......9..1. SER=7.1     in B1B    -- solvable by HT + BC[3]
5......84.8.65..3......9...1.6.2......89...6............1..492..4.891...6........ SER=9.0     in B2B   
5......84.8.65..3......9...1.6.2......89...6............1..492..4.89....6.......1 SER=9.0     in B2B   

(24)
8...5...9....2.4.......4..16...8.1..5....9.6.3.......8.3.79.....5....93..6.4..5.. SER=9.2     in B2B    - in S2B
8..76......5...8...7..4.1....9.37......1.43.........2.6..5....1..2.....5..1.76.3. SER=9.3     in B1B
....7.21.....1...6...6.5.......8..7...9..6..8.36.....4.95..2..7..37.....7...5.4.. SER=9.4     in B1B
8..2715....3.8.......5...8..5.......9.6.3..1..7.12............3...85.12...2....7. SER=9.3     in B2B   
.....2..88.3...1.......6.4.9..2....6..15......8..4..7.1..7....5.4968......2.....4 SER=9.6     in B1B
.....2..8..3...1.....8.6.4.9..2....6..15......8..4..7.1..7....5.4968......2.....4 SER=9.6     in B1B
.1..9.8.2..6.3......37..4.9..9....2....5....1.6....9.32.59.1.........2.......6.5. SER=9.0     in B2B   
....4.28...1.5.....5..78..64.5.1.9...7..........7......4......25...2..7..3..9.5.1 SER=7.6     in B1B    -- solvable by HT

(25)
..2..8............539.6.....56....497....9.5....4...63.17.2.4...6.9........147... SER=5.0     in B1B    -- solvable by NQ

(26)
.9...4.8.......7....86..9.1.2.1....96..7..21.75....3.6.1..9...7.4..1..6...7...... SER=9.0     in B2B   
.9...4.8.......7....86..9.1.2.1....96..7..2..751...3.6.1..9...7.4..1..6...7...... SER=9.0     in B2B



24 puzzles:
- 14 in B1B
- 10 in B2B, among which 1 in S2B
I remain surprised by the relative proportion B2B/B1B; I would have thought it was smaller (the ratios B7B/B6B, B6B/B5B or B5B/B4B are much smaller).


9 of the 24 puzzles can be solved by HT and 1 by NQ (this includes all those with SER < 9.0). So, at this level immediately above T&E, Subsets play a large role in the obstruction to being in T&E(1).

BTW, you mentioned a way of computing SER without uniqueness. What's the command line? Is there also a way of computing it without uniqueness and without any of the Subset rules?
I don't use SER much. When I need it for comparisons, I use "java -cp SudokuExplainer.jar diuf.sudoku.test.serate --format=%r ....". I haven't seen any doc for deactivating part of the rules.
