The STRATEGIC LEVEL

Advanced methods and approaches for solving Sudoku puzzles

Postby denis_berthier » Sat Oct 04, 2008 2:10 am


The STRATEGIC LEVEL


In "the concept of a resolution rule" thread (http://forum.enjoysudoku.com/viewtopic.php?t=5600), I introduced a general framework, based on first order logic, in which I defined the general concept of a resolution rule, some properties a resolution rule can have (reversibility, composability, ...) and some properties a set of resolution rules can have (confluence, ...).

In several other threads (including the "xyt-chains", the "hidden-chains" and "the fully supersymmetric chains", http://forum.enjoysudoku.com/viewtopic.php?t=5591 threads), I defined several specific types of chains with the associated resolution rules (elimination theorems). I showed that some types of resolution rules subsume other types.

All this constitutes what I call the pure logic level.

But there is nothing in all this that tells the player: in such and such circumstances, look preferentially for this or that pattern.

This is the type of question that can be raised at the level I'll call the strategic level.
It is a level where different heuristic ways of using resolution rules can be defined. At this level, I speak of heuristics instead of logic because, contrary to the resolution rules at the logic level, the "strategic rules" at this level have no absolute validity. They should somehow take the player's experience into account to guide the resolution process (each step of which is still justified by a resolution rule). The difficulty is that different players are likely to use very different heuristics.
For a given set of resolution rules, there can be many different strategies. I'll soon give examples for nrczt- theories.


The purpose of this thread is therefore to discuss the different ways one could define solving strategies.
(Notice that the word "strategy" here is not used in the sense of a "solving technique" or of a "resolution rule", which it is unfortunately sometimes given in the sudoku literature.)

A strategy can be based on priorities put on the various rules; priorities themselves can be based on the types of the patterns, on their size (their length for chains) and on other objective criteria.

A strategy can also be based on ways of focusing the search on trying to eliminate some candidates (chosen according to criteria that remain to be defined, such as lying in a bivalue rc-, rn-, cn- or bn- cell, ...)

StrmCkr already mentioned the use of visual clues. It remains to define them.

As not much is known at this level AFAIK (I mean, much is known in an intuitive form by players, but not much in a well-conceptualised form), this thread may not lead us very far. But who knows in advance?

Postby denis_berthier » Mon Oct 06, 2008 3:21 am


A few strategies for the nrczt- theories


The purpose of this post is not to tell any human player how to use the various resolution rules I've been considering: they can be used freely, in any order. And they can be mixed with any other rule you like.
The purpose of this post is to define a few formal strategies that can be used by either a human or a computer solver. This is very far from being exhaustive.
Such strategies will seldom be used in a strict manner by a human solver, who tends to be more opportunistic; but they may somehow guide his search. Contrary to a human solver, a computer solver needs to make systematic choices and will have to choose some predefined strategy (it may have several, but it can use only one at a time).


1) Strategies based on rules priorities

As chains are the main solving tool for hard puzzles, they can be considered as the backbone of a hierarchy.

As chains have both a type and a length, chain rule priorities can be based on both.

Type component:
Consider the following partial ordering of the various types of chains I've been considering:
Code: Select all
xy hxy <             xyz hxyz                < xyzt hxyzt     < nrczt
       < nrc                  < nrcz                          < nrczt
       <             xyt hxyt                                 < nrczt
       < nrc                  < nrct                          < nrczt

As this partial ordering is based on subsumption relations, defining priorities that do not respect it would be equivalent to de-activating some of the rules. E.g. if nrczt is given the highest priority, then no other type of chain will be found.


Length component: shorter length gives higher (at least not lower) priority. Any other choice would be irrational as a systematic preference (though it might be acceptable as an opportunistic choice).


Given these two basic components, there remain many ways of combining them into chain rule priorities. Indeed, any full ordering compatible with the mathematical product of these two partial orderings is valid.

Other patterns (not chains): insert them in the chain hierarchy. There may be lots of ways to do so. In SudoRules' default strategy, the Subset rules (Naked, Hidden and Super-Hidden) are inserted just before the xy-chains on the same number of cells.

example 1 (type preference): all the xy and hxy before all the nrc, before all the nrct, before all the nrczt (the length is not taken into account).

example 2 (length preference, which is the default ordering in SudoRules):
- all chains of any type of length n before all chains of any type of length n+1
- for a given length: xy before hxy before xyt before hxyt before xyzt before hxyzt before nrc before nrct before nrczt
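
To make this concrete, here is a minimal sketch (in Python, not SudoRules code; patterns are assumed to arrive as (type, length) pairs from some pattern finder) of example 2 as a sort key: length first, then a type rank compatible with the subsumption ordering above.
Code: Select all
# Minimal sketch of example 2 (length preference). A candidate pattern is
# assumed to be a (chain_type, length) pair produced by some pattern finder.
TYPE_RANK = {"xy": 0, "hxy": 1, "xyt": 2, "hxyt": 3,
             "xyzt": 4, "hxyzt": 5, "nrc": 6, "nrct": 7, "nrczt": 8}

def priority(pattern):
    chain_type, length = pattern
    # shorter chains first; for equal lengths, 2D types before 3D types
    return (length, TYPE_RANK[chain_type])

patterns = [("nrczt", 3), ("xy", 4), ("hxyt", 3)]
print(sorted(patterns, key=priority))
# -> [('hxyt', 3), ('nrczt', 3), ('xy', 4)]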

example 3 (3D penalty):
Code: Select all
     xy before hxy before xyt before hxyt before xyzt before hxyzt, and any 2D chain of length n before any 2D chain of length n+1
     nrc before nrct before nrczt, and any 3D chain of length n before any 3D chain of length n+1
     for the 3D chains of length n, put them all after all the 2D chains of length n+P (P is the 3D-penalty) and before all those of length n+P+1


example 4 (t penalty and z penalty) :
Code: Select all
     xy before hxy before nrc of the same length
     xyt before hxyt before nrct of the same length
     xyz before hxyz before nrcz of the same length
     xyzt before hxyzt before nrczt of the same length
     any chain of length n with the t-extension between any chain of length n+tP without it and any chain of length n+tP+1 without it
     any chain of length n with the z-extension between any chain of length n+zP without it and any chain of length n+zP+1 without it
     any chain of length n with the t- and z-extensions between any chain of length n+ztP without it and any chain of length n+ztP+1 without it
             tP, zP and ztP are the t-, z- and zt-penalties, with ztP > tP and ztP > zP.



example 5 (t penalty, z penalty and 3D penalty) :
Combine examples 3 and 4.
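
For illustration, here is a minimal sketch (in Python; the penalty values are made up, and ztP need not equal tP + zP) of example 5: each chain gets an "effective length" equal to its real length plus the penalties for the extensions it uses, and the scheduler sorts on that single number.
Code: Select all
# Sketch of penalty-based priorities (example 5): effective length =
# real length + 3D penalty + t/z/zt penalty. Penalty values are arbitrary.
P3D, tP, zP, ztP = 1, 1, 1, 2          # hypothetical, with ztP > tP, zP

def effective_length(chain_type, length):
    eff = length
    if chain_type.startswith("nrc"):   # 3D chain: add the 3D penalty
        eff += P3D
    if chain_type.endswith("zt"):      # both the z- and t-extensions
        eff += ztP
    elif chain_type.endswith("t"):     # t-extension only
        eff += tP
    elif chain_type.endswith("z"):     # z-extension only
        eff += zP
    return eff

print(effective_length("xy", 4))       # 4: no penalty
print(effective_length("nrczt", 4))    # 7: 4 + P3D + ztP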

All these strategies can be applied in SudoRules.

Notice that, if instead of considering a backbone built on the xy-to-nrczt family, one tried to define a backbone based on ALS and their Hidden counterparts (HALS or AHS) and their Super-Hidden counterparts (SHALS, not currently used in AICs, AFAIK), one should also introduce two parameters (length of the chain and size of the ALSs) and define how they should be combined.


2) Strategies based on focusing on some candidate
Basic principle: choose a candidate, say z, and try to eliminate it by all available means before you try anything else.
Notice that this can be combined with any of the previous strategies, restricted to patterns with z as a target.

The main difficulty here is: how to choose the candidate to focus on?

I've already given an example: focus on candidates that are bivalue in any of the rc, rn, cn or bn spaces (i.e. bivalue or bilocation).
Notice that, although it seems natural, this may not be a very good strategy. Sometimes it is easier to eliminate candidates that are not bivalue.

Merely focusing on candidates would be easy in SudoRules. What's missing are heuristic rules saying which candidate(s) it is interesting to focus on.
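
The mechanics of such focusing are indeed simple. A minimal sketch (in Python; is_bivalue_or_bilocation and try_to_eliminate are placeholders for whatever machinery a solver already has, not SudoRules code): sort the targets and exhaust all rules on one target before moving to the next.
Code: Select all
# Sketch of target-focused search: bivalue/bilocation candidates first,
# and all available means are tried on one target before the next one.
def focused_pass(candidates, rules, is_bivalue_or_bilocation, try_to_eliminate):
    ordered = sorted(candidates, key=lambda z: not is_bivalue_or_bilocation(z))
    for z in ordered:                      # focused targets come first
        for rule in rules:                 # all rules, e.g. in priority order
            if try_to_eliminate(rule, z):  # stop at the first elimination
                return z, rule
    return None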


3) Strategies based on symmetries
One can also try to exploit symmetries in the entries.
As this is very specific and I have no experience with this, I just mention it as a possibility.

Postby denis_berthier » Fri Oct 10, 2008 12:47 am


STRATEGIES BASED ON ORACLES


In this post, I'll consider various kinds of high-level strategies. I call them high-level because they deal with the choice of the resolution rules one will use in the resolution process, not with the specific ways these rules may be used. Such high-level strategies can be combined with any of the strategies mentioned in my previous posts.
These strategies are based on various oracles, i.e. on external knowledge about a puzzle that is given together with the puzzle.
There may be different sorts of such knowledge.

The most common sorts are the existence of a solution or the uniqueness of a solution. I mention them here because they are good examples of oracles, but I won't consider them further.
Just remember that resolution rules are valid for any set of entries, independently of the existence or the uniqueness of a solution.
When a puzzle is solved using resolution rules, both the existence and the uniqueness of the solution are automatically proven.
When uniqueness is guaranteed, one can use U-resolution rules, i.e. rules relying on the axiom of uniqueness of a solution.
Strangely enough, no one has ever proposed an E-resolution rule, i.e. a rule that would explicitly rely on the axiom of existence of a solution.
When a puzzle with no solution is dealt with, the fact that the entries are contradictory may generally be proven with resolution rules; but proving contradiction may be as hard as solving a puzzle (see the example I gave here: http://forum.enjoysudoku.com/viewtopic.php?t=5995&start=120).


Another kind of oracle that I've already considered is the knowledge that a set of candidates is a backdoor. I've used this knowledge about EasterMonster to guide the search for a solution based on nrczt-chains plus T&E(NRCZT), using my notion of a strong T-backdoor. See here: http://forum.enjoysudoku.com/viewtopic.php?t=5600&postdays=0&postorder=asc&start=90


Here are now two other oracles one could consider: SER and being solvable by T&E.

SER:
If I know the SER of a puzzle is not greater than 9.3, then I know experimentally that it is very likely (but it is not certain) that I can find a solution with nrczt-whips. So I'll first look for a solution that uses only whips and doesn't rely on braids. This is consistent with the idea of preferring chains over nets.

Ordinary T&E:
If I know that a puzzle is solvable by T&E but its SER is greater than 9.3, then, using my equivalence theorem between T&E and nrczt-braids, I know that it is solvable by braids and it is unlikely to be solvable by whips. Unless I have some reason (such as SER less than 9.5 and special symmetries) to think that it may nevertheless be solvable by whips, I'll try directly a solution with braids.

Iterated T&E at depth 2:
If I know that a puzzle is not solvable by T&E, then I know it is exceptionally hard but it is nevertheless solvable by T&E at depth 2 (at least this is true for all known puzzles) and I know this is equivalent to being solvable by T&E(NRCZT-BRAIDS). I don't know whether it is solvable by T&E(NRCZT-WHIPS) but I'll first try with whips instead of braids, because I haven't yet found a puzzle that is not solvable by T&E(NRCZT-WHIPS) and, whichever result I find, it will teach me something about being solvable by T&E(NRCZT-WHIPS).
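
Put together, these three cases amount to a small decision procedure. Here is a sketch (in Python; ser and solvable_by_TE stand for the oracle values given with the puzzle, and the returned strings merely name rule sets):
Code: Select all
# Sketch of the oracle-guided choice of a rule set, following the three
# cases above; thresholds and names are taken from the text.
def choose_rule_set(ser, solvable_by_TE):
    if solvable_by_TE:
        if ser <= 9.3:
            return "nrczt-whips"      # experimentally very likely enough
        return "nrczt-braids"         # guaranteed by the T&E equivalence
    # not solvable by T&E: exceptionally hard; try whips inside T&E first
    return "T&E(NRCZT-WHIPS), then T&E(NRCZT-BRAIDS) if that fails"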

Postby PIsaacson » Fri Oct 24, 2008 1:18 am

Denis,

Regarding Oracles: I don't know how common this technique is among other sudoku solvers, but I always pre-check a puzzle using Dancing Links to validate that it (1) has a solution and (2) has a unique solution. It only takes a few milliseconds to do this for any given puzzle (including Easter Monster and the like), so it seems worthwhile, and it filters out "junk" such as puzzles posted with errors. All my other solving methods depend upon this pre-knowledge.

My solver has a quasi-heuristic approach when batch-processing puzzles, in that it re-arranges the order of execution of the various methods depending upon the smoothed averages of their elimination efficiency (number of reductions per invocation of a method) and cpu utilization efficiency (number of reductions per millisecond). This assumes that collections of puzzles have rather linear changes in degree of difficulty or in the methods required. For most of the common puzzle collections that I have downloaded, this seems to be true. For those cases where there is an unexpected condition, an interrupt timer (signal SIGALRM) suspends unproductive methods and adjusts priorities in an attempt to expedite a solution (see the sketch below). These two dispatching techniques, along with some directed search and exit criteria, have had a tremendously positive performance impact compared to my version that executes in strict order with exhaustive level-by-level and pass-by-pass analysis.
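
A minimal sketch of the interrupt-timer idea (in Python on Unix, with a placeholder run_method; the actual solver is of course not written this way):
Code: Select all
# Sketch of the SIGALRM guard: give a method a time budget and abandon it
# if it has not returned in time; the caller can then demote its priority.
import signal

class MethodTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise MethodTimeout()

def run_with_budget(run_method, seconds):
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)            # arm the interrupt timer
    try:
        return run_method()          # completed within its budget
    except MethodTimeout:
        return None                  # unproductive: caller adjusts priorities
    finally:
        signal.alarm(0)              # always disarm the timer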

For extremely large collections, I have lately been pre-processing them and sorting them by difficulty using Glenn Fowler's sudoku ratings.

Postby denis_berthier » Fri Oct 24, 2008 4:31 am

Paul,
There's one thing unclear: how do you get "the smoothed averages of the elimination efficiency" of the different methods?
Can this be applied to solving an isolated puzzle?

Postby PIsaacson » Fri Oct 24, 2008 10:35 am

Denis,

Each solving method/technique, except for the lowest-level naked/hidden singles & row/col/box interactions, maintains statistics for each "pass". A pass is defined as one invocation of a method, which examines the current grid state and attempts to locate as many reductions as possible. Reductions are "queued" and applied at the exit of each method, so the grid state remains static during the pass. This ensures that applying one potential elimination cannot preclude its candidate from being used to qualify some other potential elimination during the same pass (see the sketch below). The counts and times are always accumulated for each of these methods (3d-medusa, distributed-disjoint-subsets, forcing chains, nrczt chains, als chains, pom-fishing, nishio-multi-level T&E etc...).
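
A minimal sketch of that pass discipline (in Python; find_reductions and apply_reduction are placeholders for a method's own scan and update code):
Code: Select all
# Sketch of a "pass": scan a frozen grid state, queue every reduction
# found, and apply them only at pass exit, so that no elimination can
# hide another within the same pass.
def run_pass(grid, find_reductions, apply_reduction):
    queued = list(find_reductions(grid))   # grid is static during the scan
    for r in queued:
        apply_reduction(grid, r)           # all updates happen at the exit
    return len(queued)                     # per-pass reduction count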

At the end of each pass, a running average of the last N (default 10) passes is computed/maintained for the 2 efficiency factors. The method scheduler uses the efficiency factors to help select the next method to run if the current puzzle is not solved. Run-time arguments allow me to exclude or prefer various methods and to establish the running average table depth.
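
As a sketch of those running averages and of the selection step (in Python; the way the two efficiency factors are combined below is an invented guess, since I haven't spelled out the scheduler's actual weighting):
Code: Select all
# Sketch of the running averages over the last N passes and of method
# selection by efficiency. The combination of the two factors is a guess.
from collections import defaultdict, deque

N = 10
history = defaultdict(lambda: deque(maxlen=N))  # method -> last N passes

def record_pass(method, reductions, msec):
    history[method].append((reductions, msec))

def efficiency(method):
    passes = history[method]
    if not passes:
        return float("inf")            # untried methods get a first chance
    reds = sum(r for r, _ in passes)
    msec = sum(m for _, m in passes) or 1e-9
    return (reds / len(passes)) * (reds / msec)

def next_method(methods):
    return max(methods, key=efficiency)  # run the most efficient method next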

That's probably a lot more detail than you wanted regarding how I obtain the smoothed averages.

Can this be applied to solving an isolated puzzle? It's always in effect, but the number of passes of the higher-level methods on a single puzzle is usually less than my minimum running-average depth of 3. So this really only works well when I want to process a large batch of puzzles and reduce the solving time without resorting to T&E or Dancing Links.

Postby denis_berthier » Fri Oct 24, 2008 10:08 pm

Paul,

What I was thinking is, if you have a large list of puzzles that you can consider as a general reference, can't you get default values for all these statistics?

Postby PIsaacson » Sat Oct 25, 2008 2:26 am

Denis,

I often run a large collection multiple times. After the first run's statistics are reported, I use them to set the initial method-selection priority for subsequent runs. This helps, but not by as much since I started pre-sorting collections by degree of difficulty.

What's more interesting, at least to me, is working on the z-target candidate sub-scheduler which doles out the next cell(s)/target(s) for my various solving methods. For nrczt chains I use a sub-scheduler that issues nrc candidates in ascending order of bi-value/bi-location, tri-value/tri-location, quad-value/quad-location...

After the computer generates a solution that uses more than 50 nrczt chains, I have a back-tracker that finds the minimum number of chains needed to crack the puzzle. The minimal set is often less than 1/2 the size of the original set. This shows that I have much room for improvement in my current nrczt sub-scheduler.
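
Even a one-pass greedy reduction already gives most of that effect. A sketch (in Python; solves is a placeholder for re-running the solver restricted to the given chains, and a real back-tracker would search the subsets more thoroughly):
Code: Select all
# Sketch of chain-set minimisation: drop each chain in turn and keep the
# drop if the puzzle still solves without it. Greedy, hence not guaranteed
# minimal; a back-tracker explores the subsets more thoroughly.
def minimise(chains, solves):
    kept = list(chains)
    for c in list(kept):
        trial = [x for x in kept if x is not c]
        if solves(trial):              # chain c was redundant
            kept = trial
    return kept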

I've got "cheat" scheduling that uses knowledge of "magic" or "backdoor" cells to force directed attacks, but I don't feel good about using it. Even though it sort of qualifies as your pre-cognitive Oracle, it's still cheating.

I guess the answer is to emulate an expert human sudoku player with some AI-like method? Or is that over-kill?

Postby denis_berthier » Sat Oct 25, 2008 3:29 am

PIsaacson wrote:What's more interesting, at least to me, is working on the z-target candidate sub-scheduler which doles out the next cell(s)/target(s) for my various solving methods. For nrczt chains I use a sub-scheduler that issues nrc candidates in ascending order of bi-value/bi-location, tri-value/tri-location, quad-value/quad-location...

So, would you say that, statistically, giving priority to whips with bivalue/bilocation targets (among whips that have the same length) is an interesting strategy?
I never tried such a strategy, but it wouldn't be too hard to implement in SudoRules.

PIsaacson wrote:I guess the answer is to emulate an expert human sudoku player with some AI-like method? Or is that over-kill?

The problem is, who knows what an expert human player really does? If we could formalise such strategic knowledge, implementing it would be easy.

Postby PIsaacson » Sat Oct 25, 2008 4:38 am

Denis,

An interesting strategy? Absolutely! Focusing on bi-value/bi-location cells pays off if you exit after performing a reduction and run the hidden/naked singles+ logic. Solving any single cell, even the non-magic or non-backdoor ones, helps to rapidly converge on the solution. Cheating and focusing on known backdoors REALLY pays off, regardless of whether or not they are b/b cells.

There are expert systems that are "trained" with the guidance of a human. In my prior work with one such system, it "adopted" the trainer's techniques without us (the development team) ever writing a single rule. It was eerie how closely the system emulated the expert's decision making process during various simulations. I wish I had a copy now...

Postby denis_berthier » Sat Oct 25, 2008 5:30 am

PIsaacson wrote:An interesting strategy? Absolutely! Focusing on bi-value bi-location cells pays off if you exit after performing a reduction and run the hidden/naked singles+ logic.

NS and HS will be applied automatically after a candidate in a bivalue/bilocation cell has been eliminated.
I haven't implemented such focusing because it obliges me to duplicate the rules: one version with the bivalue/bilocation condition and one without it, at lower priority. But that's an easy copy/paste. I'll put it on my todo list. As such focusing is not based on a priori knowledge about the puzzle, I don't consider it as using an oracle, and it is not cheating.

PIsaacson wrote:There are expert systems that are "trained" with the guidance of a human. In my prior work with one such system, it "adopted" the trainer's techniques without us (the development team) ever writing a single rule. It was eerie how closely the system emulated the expert's decision making process during various simulations.

Well, learning from examples in expert systems works when the problems are not too complex. As Sudoku(n) is NP-complete and Sudoku(9) already has all the characteristics of a very hard problem, I don't believe learning new rules would be possible. Adjusting the weights (or priorities) of existing rules, as you are doing, is much easier.

Postby PIsaacson » Mon Oct 27, 2008 10:40 am

Code: Select all
Test 1 statistics

Sudoku final statistics:                      updates/call    total cpu time      avg cpu time/call      avg cpu time/update

  tot_pinnings     updates/calls 81338/17299        4.7019      99.6859 msec       0.0058 msec/call       0.0012 msec/update
  tot_restrictions updates/calls 18965/7335         2.5855      46.8099 msec       0.0064 msec/call       0.0025 msec/update
  tot_subsets      updates/calls 24646/4623         5.3312     342.9022 msec       0.0742 msec/call       0.0139 msec/update
  tot_fishing      updates/calls 3925/2686          1.4613    8360.0824 msec       3.1125 msec/call       2.1300 msec/update
  tot_coloring     updates/calls 43732/1738        25.1623    6020.1654 msec       3.4638 msec/call       0.1377 msec/update
  tot_alschains    updates/calls 178/214            0.8318    1678.3588 msec       7.8428 msec/call       9.4290 msec/update
  tot_dds          updates/calls 1/157              0.0064    1074.0901 msec       6.8413 msec/call    1074.0901 msec/update
  tot_nrczt        updates/calls 1245/156           7.9808  594123.5515 msec    3808.4843 msec/call     477.2077 msec/update
  tot_nishio       updates/calls 60/29              2.0690     263.2740 msec       9.0784 msec/call       4.3879 msec/update

Runtime command: sudoku -g -Oprsfcadnt ../sudoku/top1465.txt
sudoku used solving order prsfcadnt -N18 -A5 -D5

Test 2 statistics

Sudoku final statistics:                      updates/call    total cpu time      avg cpu time/call      avg cpu time/update

  tot_pinnings     updates/calls 81296/15707        5.1758      91.9614 msec       0.0059 msec/call       0.0011 msec/update
  tot_restrictions updates/calls 18596/6257         2.9720      39.9713 msec       0.0064 msec/call       0.0021 msec/update
  tot_subsets      updates/calls 23367/3673         6.3618     275.4001 msec       0.0750 msec/call       0.0118 msec/update
  tot_coloring     updates/calls 50167/2000        25.0835    6713.9265 msec       3.3570 msec/call       0.1338 msec/update
  tot_fishing      updates/calls 50/237             0.2110     777.6829 msec       3.2814 msec/call      15.5537 msec/update
  tot_alschains    updates/calls 178/214            0.8318    1684.4686 msec       7.8713 msec/call       9.4633 msec/update
  tot_dds          updates/calls 1/157              0.0064    1080.1269 msec       6.8798 msec/call    1080.1269 msec/update
  tot_nrczt        updates/calls 1245/156           7.9808  595585.5043 msec    3817.8558 msec/call     478.3819 msec/update
  tot_nishio       updates/calls 60/29              2.0690     264.0998 msec       9.1069 msec/call       4.4017 msec/update

Runtime Command: sudoku -bg ../sudoku/top1465.txt
sudoku used solving order prscfadnt -N18 -A5 -D5


Denis,

The above 2 statistics reports were generated at the end of 2 test runs on the Top1465.

Test 1 was invoked with a specified solving order "prsfcadnt" which overrides the heuristic scheduler. It's my current default initial ordering.
Test 2 was invoked without specifying a solving order, so the heuristic scheduler got a chance to run.

The scheduler "decided" to swap the order of execution for the coloring/fishing methods. It "discovered" that coloring was more productive both for the updates/call and for the avg-cpu-time/update. Code obviously can't "think", but this shows that applying the heuristics worked because it reduced the overall run time by a whopping 8 seconds from the total-cpu-time for fishing! The 1.3% improvement is only significant because it shows that the algorithm works, and I already had a fairly good default based on many prior test runs. However, if I now make an improvement in one of the methods or add a new method, I can trust that the scheduler will optimize the method selection on large collections.

