Tdoku grid_tools

Programs which generate, solve, and analyze Sudoku puzzles

Re: Tdoku grid_tools

Postby tdillon » Sat Jun 06, 2020 7:16 pm

Mathimagics wrote:
tdillon wrote:The key numbers are:
  • sequential grid enumeration: 2M grids / sec / core
  • random grid sampling: 130 grids / sec / core

For the LCT project, I invested in SSD drives. One 500GB drive can hold the entire ED grid catalog, uncompressed.
On JILL (4-core i7-7700K) I have the catalog on a Samsung 860 EVO (1TB), and I can sample random grids at a rate of ~5400 grids / sec.
On JACK (16-core AMD Ryzen), it's a Samsung 970 EVO (1TB), but I only get 3300 random grids/sec, for some reason. Slower bus?

That's interesting. Thanks for the stats!

I think my approach can be sped up for sure, though at the expense of larger tables. Its main cost is that the solver must enumerate on average 1362 solutions per sample before finding the one it's looking for (a little more than the 1346 arising from the simple ratio of grid count to pattern count, thanks to variation in the number of grids per pattern). Adding a scheme to make 4 additional binary choices would naively increase the table size by a factor of 16 while cutting the enumeration cost by a factor of a little less than 16 (again because the splits won't be perfectly balanced). But the table size can also be shrunk. Right now there are only 1059 distinct values for pattern solution counts, so each entry really only requires 11 bits instead of 16, and that would likely drop to 10 or even 9 bits after the 16-way split just mentioned.
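
To make that concrete, here's a minimal C++ sketch of the lookup as described; the PatternTable layout and the solver callback are simplified stand-ins for illustration, not the actual grid_tools code, and uint64_t is assumed wide enough for the running totals.

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <random>
    #include <vector>

    using Grid = std::array<char, 81>;

    // Illustrative table: a prefix sum over per-pattern solution counts, so a
    // uniform random index over all grids maps to a (pattern, offset) pair.
    struct PatternTable {
        std::vector<uint64_t> cumulative;  // cumulative[i] = total for patterns 0..i
    };

    // Sample a random grid: pick a uniform target, binary search for the
    // pattern whose range contains it, then have the solver enumerate that
    // pattern's solutions up to the residual offset (~1362 on average).
    Grid SampleRandomGrid(
        const PatternTable& table, std::mt19937_64& rng,
        const std::function<Grid(std::size_t pattern, uint64_t offset)>& nth_solution) {
        const uint64_t total = table.cumulative.back();
        const uint64_t target =
            std::uniform_int_distribution<uint64_t>(0, total - 1)(rng);
        auto it = std::upper_bound(table.cumulative.begin(),
                                   table.cumulative.end(), target);
        const auto pattern = static_cast<std::size_t>(it - table.cumulative.begin());
        const uint64_t before = pattern ? table.cumulative[pattern - 1] : 0;
        return nth_solution(pattern, target - before);
    }

The 16-way split described above would presumably just extend the pattern key by four bits, so the same cumulative-sum lookup applies, with the enumeration stopping ~16x sooner.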

So, optimistically, a 2.5GB * 16 * 9/16 = 22.5GB table might yield a ~12-15x speedup. But 22.5GB is getting pretty big for most people to hold in memory.

Bottom line: for random sampling I don't think my approach can be made competitive with the SSD+catalog unless the SSD approach is IO-bound enough that it doesn't scale well with lots of cores.

Re: Tdoku grid_tools

Postby Mathimagics » Sun Jun 07, 2020 5:24 am

tdillon wrote:Bottom line: for random sampling I don't think my approach can be made competitive with the SSD+catalog unless the SSD approach is IO-bound enough that it doesn't scale well with lots of cores.


Hi Tom,

Raw random sampling times are not particularly significant (to me anyway); it all depends on the application. The random grid sampling function I used for the timing test above is an IndexToGrid function, i.e. given a grid index, e.g. gi = Rand64N(5472730538), retrieve the grid.

Since I have 255 separate files in this setup, this typically involves an fopen + fseek per grid. Clearly this is not optimal, and it would be better to do one of these:

  • (a) make the catalog a single file
  • (b) generate/sort the grid indices before fetching the grids

I can't easily do (a) right now, but I have tested (b), and the sampling rate increased to over 9000 grids/sec.
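
For reference, here's a minimal sketch of (b), assuming (for simplicity) a single catalog file of fixed-width 81-byte records; my actual setup splits the catalog over 255 files, so the real code also has to map each index to the right file.

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Option (b): sort the random indices first so the reads walk the catalog
    // in file order, turning scattered seeks into a mostly forward scan.
    // Assumes one file of fixed 81-byte records; error handling kept minimal.
    std::vector<std::array<char, 81>> FetchSorted(const char* catalog_path,
                                                  std::vector<uint64_t> indices) {
        std::sort(indices.begin(), indices.end());
        std::vector<std::array<char, 81>> grids(indices.size());
        FILE* f = std::fopen(catalog_path, "rb");
        if (!f) return {};
        for (std::size_t i = 0; i < indices.size(); ++i) {
            // Use fseeko/_fseeki64 on platforms where long is 32 bits.
            std::fseek(f, static_cast<long>(indices[i] * 81), SEEK_SET);
            std::fread(grids[i].data(), 1, 81, f);
        }
        std::fclose(f);
        return grids;
    }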

IndexToGrid is a function of little use to me, however. GridToIndex is far more important for the LCT project: given a grid, what is its index? Having a minlex-ordered catalog means binary search methods can be used.
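
A minimal sketch of that lookup, again assuming one file of fixed-width 81-byte records sorted in minlex order, with the query grid already minlex-canonicalized (the record format here is an assumption, not my actual layout):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // GridToIndex: binary search over the minlex-ordered catalog. Each probe
    // is one fseek + fread, so a lookup costs about log2(5.47e9) ~ 33 probes.
    // Returns the grid's index, or -1 if it is not in the catalog.
    int64_t GridToIndex(FILE* catalog, int64_t num_grids, const char grid[81]) {
        char record[81];
        int64_t lo = 0, hi = num_grids - 1;
        while (lo <= hi) {
            const int64_t mid = lo + (hi - lo) / 2;
            std::fseek(catalog, static_cast<long>(mid * 81), SEEK_SET);
            std::fread(record, 1, 81, catalog);
            const int cmp = std::memcmp(record, grid, 81);
            if (cmp == 0) return mid;
            if (cmp < 0) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }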

Your scheme is novel, and is probably going to be very useful for many people.

Cheers
MM
