Red Ed wrote:
I wrote: I've not tested this yet, so it might turn out to be a rubbish idea, but I thought I'd better write the theory down or I'd never get round to it.
Preliminary investigation suggests the idea will work in the tails of the distribution.
The benefit appears, sadly, to be rather small. The following applies to finding 21-clue minimals within 28-clue subgrids.
Let the "gnarliness" of a 28-clue subgrid be the sum, over units (boxes, rows and columns), of occupancy^2. Say gnarliness<297 is "very smooth", else <305 is "smooth", else <313 is "ordinary"; else <323 is "rough", else is "very rough". That's five classes. We can find the class probabilities to within a reasonable degree of accuracy by random trials (ignore solution grids, just generate 28-cell patterns): they are 16.91%, 20.33%, 22.25%, 21.48% and 19.03% respectively ... or near enough.
Now spend a bit of time running the (28,21)-subsets method with the 28-clue subgrids restricted to a single class. I left each run going long enough for the counts to hit the hundreds, in order to get a reasonably realistic view of critical parameters like Rate and, particularly, Sigma. The results for each class are tabulated below.
- Code: Select all
+----------------------+----------+----------+-------+------------+--------+
| Class Name (Prob)    | Time (s) | Trials   | Count | E(nr/grid) | E(RE)  |
+----------------------+----------+----------+-------+------------+--------+
| VERY SMOOTH (0.1691) |      600 |  6356547 |  2528 | 4.5802e+09 |  5.22% |
| SMOOTH (0.2033)      |      600 |  8369783 |  1282 | 1.7640e+09 |  6.45% |
| ORDINARY (0.2225)    |      600 | 11622440 |   698 | 6.9165e+08 |  8.00% |
| ROUGH (0.2148)       |     1200 | 26806669 |   564 | 2.4231e+08 |  7.50% |
| VERY ROUGH (0.1903)  |     1800 | 42531861 |   259 | 7.0131e+07 | 12.20% |
+----------------------+----------+----------+-------+------------+--------+
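A note on reading that table: if I have the columns right, a trial is one random 28-cell subgrid from the class in question, Count is the number of 21-clue minimals found among them, and E(nr/grid) is Count/Trials scaled by C(81,28)/C(60,7) ≈ 1.152e13, since each 21-clue puzzle sits inside C(60,7) of the C(81,28) possible 28-cell patterns; for the first row, 2528/6356547 x 1.152e13 ≈ 4.58e9. E(RE) is then, presumably, the relative standard error of that estimate.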
From those results, one can work out the amount of time one would normally spend in each class (it will be more in the smoother classes because they yield more puzzles) and the optimum time one should spend in each class. Skipping the boring details of the calculation (one way to reconstruct them is sketched after the table), we end up at:
- Code: Select all
+----------------------+------------+--------------+
| Class Name (Prob)    | Usual Time | Optimum Time |
+----------------------+------------+--------------+
| VERY SMOOTH (0.1691) |     30.97% |       48.01% |
| SMOOTH (0.2033)      |     23.52% |       27.47% |
| ORDINARY (0.2225)    |     16.94% |       14.62% |
| ROUGH (0.2148)       |     14.69% |        6.56% |
| VERY ROUGH (0.1903)  |     13.89% |        3.35% |
+----------------------+------------+--------------+
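The skipped details look like the standard stratified-sampling (Neyman) allocation, and the figures are consistent with it. Writing sigma_h for the one-second standard deviation of the class-h estimate, i.e. sigma_h = E(RE) x E(nr/grid) x sqrt(Time) from the first table, the optimum share of time for class h comes out proportional to p_h x sigma_h. For VERY SMOOTH, for example, 0.1691 x 0.0522 x 4.5802e9 x sqrt(600) ≈ 9.90e8, which is 48.0% of the 2.06e9 total over the five classes, matching the table. (That is my reading of the calculation, not Red Ed's own working.)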
Finally, from the timings and experimental data, one can work out the expected variance of the overall E(nr/grid) estimate, per second of sampling, in the "usual" and "optimum" cases:
- Code: Select all
+---------------------------------------+
| Conclusion                            |
+---------------------------------------+
| Usual 1-second variance:   5.2275e+18 |
| Optimum 1-second variance: 4.2551e+18 |
| Speed-up factor:           1.2285     |
+---------------------------------------+
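For anyone who wants to check the arithmetic, here is a small sketch (again mine, not Red Ed's program) that reproduces the optimum allocation, both 1-second variances and the speed-up factor from the figures above, assuming the standard stratified-sampling formulas; the "usual" time fractions are taken from the second table rather than re-derived.
- Code: Select all
from math import sqrt

# Per-class data taken from the tables above:
# (name, probability, time in seconds, E(nr/grid), E(RE), usual time fraction)
classes = [
    ("VERY SMOOTH", 0.1691,  600, 4.5802e+09, 0.0522, 0.3097),
    ("SMOOTH",      0.2033,  600, 1.7640e+09, 0.0645, 0.2352),
    ("ORDINARY",    0.2225,  600, 6.9165e+08, 0.0800, 0.1694),
    ("ROUGH",       0.2148, 1200, 2.4231e+08, 0.0750, 0.1469),
    ("VERY ROUGH",  0.1903, 1800, 7.0131e+07, 0.1220, 0.1389),
]

# After T seconds the class estimate has relative error RE, so its variance is
# (RE*E)^2; rescaled to a one-second run the variance is (RE*E)^2 * T.
var1 = {name: (rel * e) ** 2 * t for name, p, t, e, rel, f in classes}

# Stratified estimate: overall E(nr/grid) = sum_h p_h * E_h, so if a fraction f_h
# of one second goes to class h, the overall variance is sum_h p_h^2 * var1_h / f_h.
def variance(fractions):
    return sum(p ** 2 * var1[name] / fractions[name]
               for name, p, t, e, rel, f in classes)

usual = {name: f for name, p, t, e, rel, f in classes}

# Optimum allocation: time share proportional to p_h * sigma_h (one-second sigma).
weights = {name: p * sqrt(var1[name]) for name, p, t, e, rel, f in classes}
optimum = {name: w / sum(weights.values()) for name, w in weights.items()}

print({name: round(f, 4) for name, f in optimum.items()})     # ~0.48, 0.27, 0.15, 0.07, 0.03
print("usual   1-second variance: %.4e" % variance(usual))    # ~5.23e+18
print("optimum 1-second variance: %.4e" % variance(optimum))  # ~4.26e+18
print("speed-up factor: %.4f" % (variance(usual) / variance(optimum)))  # ~1.23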
We see, in conclusion, that class sampling gives a speed-up of about 23% compared to the usual version of the (28,21)-subsets method.
Oh well, it was an interesting diversion.