Ok, let's move on to the specific problem area described above, and call this "Guess Theory 102", and reduce the class size!
So all you P&P solvers can go home, nothing to see here!
"GT 102" considers the problem in the extreme cases - low-clue puzzles on large grids. A typical scenario is that we have a 16x16 grid or bigger (so Samurais are included), and we are looking to obtain a minimal puzzle from some larger set of clues. We select a clue at random, test it for redundancy, keep it if we have to, or else remove it.
This process is repeated, and we notice that some clue tests are taking longer than others, and then we hit one that seemingly takes forever!
That, friends, is BGS (bad guess syndrome).
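For concreteness, the reduction loop described above might be sketched like this. It's only a sketch: `count_solutions` and the clue representation are hypothetical stand-ins for whatever your solver actually provides, and a real implementation would be where the BGS pain shows up (inside the solution-counting call).

```python
import random

def minimise(clues, count_solutions):
    """Reduce a clue set until every remaining clue is essential.

    `clues` is a dict mapping cell -> value; `count_solutions(clues, limit)`
    is assumed to count solutions, stopping early once `limit` is reached.
    Both names are illustrative, not any particular solver's API.
    """
    order = list(clues)
    random.shuffle(order)            # test clues in random order
    for cell in order:
        value = clues.pop(cell)      # tentatively remove the clue
        if count_solutions(clues, limit=2) != 1:
            clues[cell] = value      # clue was essential: put it back
    return clues                     # now a minimal puzzle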
I wrote: Unless one is willing to spend some time looking at the full search tree below (ie some form of look-ahead), there is little that can be done to avoid these cases.
The only effective "pathological case prevention" method is to apply "guess limits" - so the solver now has an extra return code to indicate that this limit was reached. One can then try an alternative NCS method, and most of the time this will work.
I should really have said "the only way (aside from developing new, smarter, NCS strategies) ..."
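The guess-limit fallback might look something like this in code. Hedged sketch only: the return codes and the `solve(puzzle, strategy, guess_limit)` signature are assumptions for illustration, not any particular solver's interface.

```python
# Illustrative return codes; a real solver would define its own.
SOLVED, NO_SOLUTION, LIMIT_REACHED = range(3)

def solve_with_fallback(puzzle, solve, strategies, guess_limit=10_000):
    """Try each NCS strategy in turn, abandoning one when it burns
    through `guess_limit` guesses.

    `solve(puzzle, strategy, guess_limit)` is a hypothetical solver
    entry point returning (code, solution).
    """
    for strategy in strategies:
        code, solution = solve(puzzle, strategy, guess_limit)
        if code != LIMIT_REACHED:
            return code, solution    # definite answer: solved or unsolvable
    return LIMIT_REACHED, None       # every strategy hit the limit
```

Most of the time some strategy in the list succeeds well inside the limit; the pathological cases are the ones where they all time out.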
A variation that would probably handle most problems on 16x16 grids, say, is to run solvers in parallel. Set an appropriate guess limit, and when the solver "fails", start parallel threads, each using a different NCS strategy. When the first one completes you simply kill the others, and thus get close to the best possible result. So an fsss2-like solver might start 4 threads: MR and TR, with and without TC recycling.
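A minimal sketch of that race, using Python threads and a shared stop flag for cooperative cancellation. The `solve(puzzle, strategy, stop)` callable and the strategy names are placeholders; an fsss2-like solver would be C/C++, where "killing" the losers in practice means signalling their search loops to abort.

```python
import threading
import queue

def race_strategies(puzzle, solve, strategies):
    """Run one solver thread per NCS strategy; return the first result.

    `solve(puzzle, strategy, stop)` is a hypothetical entry point that
    checks `stop.is_set()` periodically and abandons work once it is set.
    Blocks forever if no strategy ever succeeds, so in practice you would
    also want an overall timeout.
    """
    results = queue.Queue()
    stop = threading.Event()

    def worker(strategy):
        solution = solve(puzzle, strategy, stop)
        if solution is not None:
            results.put((strategy, solution))

    threads = [threading.Thread(target=worker, args=(s,)) for s in strategies]
    for t in threads:
        t.start()
    strategy, solution = results.get()   # block until the first finisher
    stop.set()                           # tell the losers to quit
    for t in threads:
        t.join()
    return strategy, solution
```

The first thread to finish wins; the rest notice the stop flag on their next check and bail out, so the total cost is close to the best of the four strategies rather than the one you happened to pick.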
This at least offers a pain-minimisation option while we (those of us who are still here) try to come up with better strategies …