The concept of a RESOLUTION RULE and its applications

Advanced methods and approaches for solving Sudoku puzzles

Postby Ruud » Thu Oct 18, 2007 5:56 pm

Denis Berthier wrote:But forcing chains are not chains in my sense. They correspond to no predefined pattern.

How about a little experiment:
  • Take a scrambled version of Easter Monster.
  • Use SE to solve it and produce a solver log.
  • Write a detailed definition of each forcing chain used by SE, describing all the relative positions of the cells used, the candidates in these cells and the interactions (links if you prefer) between them.
  • Solve Easter Monster using these predefined patterns.
Ruud
 
Posts: 664
Joined: 28 October 2005

Re: The concept of a RESOLUTION RULE and its applications

Postby Red Ed » Thu Oct 18, 2007 6:19 pm

gsf wrote:tricky indeed, but it's about time for some fun
right off there will be a tug of war between the complexity of an empty grid (no clues, all candidates possible in all cells) vs. a grid with some clues
e.g., the average number of candidates/cell for |sudogen0.txt|=10,000, not counting clue cells, is 2.7
but say we ignore that
OK, ignored :D. No, seriously: I wanted my complexity measure to be as simple as possible, so I chose to make it grid-independent. If that restricts its usefulness, so be it: someone else can come up with a more useful complexity measure. Please note that I am not attempting to predict run-times or anything here.
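For anyone wanting to reproduce gsf's sort of per-grid statistic, here is a sketch of my own (not gsf's code) that counts only direct eliminations by solved peers:

def peers(i):
    """Indices sharing a row, column or box with cell i (0..80)."""
    r, c = divmod(i, 9)
    b = 3 * (r // 3) + c // 3
    return {j for j in range(81)
            if j != i and (j // 9 == r or j % 9 == c
                           or 3 * (j // 27) + (j % 9) // 3 == b)}

def avg_candidates(puzzle):
    """puzzle: 81 chars, '1'-'9' for clues, '.' or '0' for empty cells."""
    counts = []
    for i, ch in enumerate(puzzle):
        if ch in '123456789':
            continue                      # clue cells are not counted
        taken = {puzzle[j] for j in peers(i)} & set('123456789')
        counts.append(9 - len(taken))
    return sum(counts) / len(counts) if counts else 0.0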

gsf wrote:then there's the issue of bitmask representations that can lead to parallel operations (which may lead to operation weights)
e.g., with candidates represented as 9 bits, one wouldn't scan the grid 9 times, once for each candidate
one would scan once and check each cell for one candidate, and then determine the candidate value
Ah, for this I need to refer you to Denis' opening post:
    One important notion is that of a resolution technique being associated with, or being the implementation of, a resolution rule.
    A given resolution rule can, in practice, be applied in various ways. Put otherwise, different resolution techniques can sometimes be attached to a single resolution rule.
    This distinction is important. Forgetting it may lead to the absurd conflation of an algorithm with a logical formula, two notions that do not belong to the same universe of discourse (CS vs logic).
I'm tackling resolution rules, not resolution techniques, so implementation issues don't interest me for this particular complexity measure. My measure's in the logic "universe", yours is in the CS one. I've always thought you came from a parallel universe, so this makes perfect sense to me :D

gsf wrote:so naked singles could take 81 candidate mask(r,c) ops, and then a check for 1 bit set
(the scan would have to differentiate between solved and unsolved cells too)

for r,c if unsolved(r,c) and !(mask(r,c)&(mask(r,c)-1)) then value(mask(r,c)) is a naked single

this looks like O(9^2) or strawman 2
That seems like a perfectly reasonable measure for a particular implementation of a rule.
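For reference, here is gsf's scan as runnable code (a sketch under my own assumptions, not gsf's actual solver: mask[r][c] holds the 9-bit candidate set of a cell, solved[r][c] a boolean):

def naked_singles(mask, solved):
    """Yield (r, c, digit) for every naked single in one 81-cell scan."""
    for r in range(9):
        for c in range(9):
            m = mask[r][c]
            # m & (m - 1) clears the lowest set bit; the result is 0
            # exactly when m has a single bit set.
            if not solved[r][c] and m and m & (m - 1) == 0:
                yield r, c, m.bit_length()   # bit k-1 set -> digit k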

gsf wrote:so I guess I'm saying that a complexity measure should take into account that not all operations are equal
or maybe that the predicates chosen may have a non-trivial effect on how complexity is measured
e.g., measures limited to candidate(n,r,c) might end up doing loops inside out in a less efficient manner
Yes, when measuring the complexity of an implementation, but No for just the resolution rule in its abstract form. The exam question concerned rules, not implementations.
Red Ed
 
Posts: 633
Joined: 06 June 2005

Postby Red Ed » Thu Oct 18, 2007 6:33 pm

Picking bits and pieces out of Denis' response to me ...

denis_berthier wrote:Let me take a more general example. Is a pair of conjugate candidates more or less complex than a bivalue cell? Again, from a purely logical POV, they should have the same complexity (conjugate is bivalue in either of the rn-, cn- or bn- spaces). From a player's POV, that may be the topic for unending debates.
When talking about a resolution rule (abstract logic), I think it's fair to ignore human concerns. Since you were so careful to separate rule from technique (logic from CS), I've taken the liberty of generating an abstract complexity measure for the "logic" part (rules). I accept that the "CS" part (techniques) probably deserves a measure that reflects the real human or computational difficulty of applying a technique.
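To see why the two are logically on a par, here is Denis' point in code (a sketch of my own; cand[r][c] as the rc-space candidate set is an assumed layout):

def rn_cell(cand, r, n):
    """The rn-space "cell" (r, n): the set of columns where digit n is
    still possible in row r."""
    return {c for c in range(9) if n in cand[r][c]}

def is_conjugate_pair(cand, r, n):
    """A conjugate pair of n in row r is exactly a bivalue rn-cell."""
    return len(rn_cell(cand, r, n)) == 2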

denis_berthier wrote:Another problem arises when we try to do such computations for more complex patterns, e.g. for chains (and that led me to give up the idea of putting such things in my book): it becomes very difficult to evaluate the number of possibilities for each cell. Worst-case analysis would give an unrealistic complexity measure and mean-case analysis is nearly infeasible. I think, even if it doesn't faithfully reflect complexity for a human, any progress on this would be valuable.
Yes, I agree, that looks very difficult. I was trying to define a complexity measure based on the FOL description of the rule, but perhaps this is doomed to failure when even such natural things as a parity calculation have exponential complexity in that setting.

denis_berthier wrote:Finally, I'm not sure a notion of complexity can be attached to a resolution rule itself. It may have to be attached separately to each of the resolution techniques implementing it, and it will depend on the representations used for this implementation.
I don't want to even begin to think about that! - too many ways of implementing a rule, e.g. see gsf's remarks above.
Red Ed
 
Posts: 633
Joined: 06 June 2005

Re: The concept of a RESOLUTION RULE and its applications

Postby gsf » Thu Oct 18, 2007 6:38 pm

Red Ed wrote:My measure's in the logic "universe", yours is in the CS one. I've always thought you came from a parallel universe, so this makes perfect sense to me :D

back in the day CS was a slur -- I'm really from the "engineering putting theory into practice" universe
Red Ed wrote:The exam question concerned rules, not implementations.

fair enough
so the first steps will be setting up the predicate(s) in scope
is candidate(n,r,c) sufficient?
gsf
2014 Supporter
 
Posts: 7306
Joined: 21 September 2005
Location: NJ USA

Re: The concept of a RESOLUTION RULE and its applications

Postby Red Ed » Thu Oct 18, 2007 7:02 pm

gsf wrote:so the first steps will be setting up the predicate(s) in scope
is candidate(n,r,c) sufficient?
It's sufficient according to Denis' definition of resolution rule in this thread. However, since noticing the exponential blow-up of n-input XOR in FOL (see link above), I don't think that complexity measures based on counting FOL predicates are going to go anywhere. I hereby abandon part "b" (the predicate-counting part) of my complexity measure.

I still think that part "a" (number of possible realisations of the rule, essentially) might have some legs. I proposed the same thing months ago on the Eureka forum (here). At the time, I thought that nets & chains made the idea impractical, but Denis may have given the solution: just concentrate on scoring the resolution rule, not the more general family of techniques. Not sure yet; need to keep thinking.
Red Ed
 
Posts: 633
Joined: 06 June 2005

Postby denis_berthier » Fri Oct 19, 2007 6:41 am

Ruud wrote:How about a little experiment:
  • Take a scrambled version of Easter Monster.
  • Use SE to solve it and produce a solver log.
  • Write a detailed definition of each forcing chain used by SE, describing all the relative positions of the cells used, the candidates in these cells and the interactions (links if you prefer) between them.
  • Solve Easter Monster using these predefined patterns.

Why should I do that, even if I had time?
I'm not particularly interested in solving EM.
The idea that has guided me while I wrote my book (and after) is finding sufficiently simple and general resolution rules that can be applied by human players. The result is a set of rules of various complexities such that the nrczt chains of length <= 7 can solve almost any puzzle, without needing any subset extensions (hinges, ALSs) such as those in AICs.
If someone comes out with a completely new general type of chain with nice properties, that can solve EM, I'll be the first to applaud.
But if EM or a few exceptional puzzles need chains or nets so complex that their resolution paths would be unreadable, I don't mind using T&E in these cases - or even not getting a solution. My life doesn't hang on the solution of any puzzle.
Instead of what you are suggesting, if I had time, I'd continue analysing nrczt chains for possible extensions. So far, although others have also tried (e.g. Alain), we've not found really interesting extensions.
Call it aesthetics or subjective preference. Research is often guided by such personal choices.

Finally, there's a question you didn't answer - still motivated by my T&E theorem. When you check a puzzle for uniqueness, can you avoid using recursive T&E?
BTW, if you (or others here) don't like the name T&E for the recursive search procedure defined at the start of this thread, can you suggest any better name?
denis_berthier
2010 Supporter
 
Posts: 4235
Joined: 19 June 2007
Location: Paris

Postby denis_berthier » Fri Oct 19, 2007 7:14 am

Red Ed, gsf
This may be only marginally relevant to the discussion about complexity.
Roughly speaking, in an inference engine, the "program" is made of rules, which are logical formulæ in the condition-action form (with no "or" in the action part).
In an inference engine, there are generally several predefined strategies for applying the rules. Usually there are two, corresponding roughly to depth-first and breadth-first search.
The default strategy is generally DFS (because BFS is much too greedy for memory).
In CLIPS, there is also a third strategy, based on the logical complexities of the rules, but I've never heard of any "real life" case in which this strategy has been used. I should ask the users group about that.
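For those not familiar with inference engines, the core loop is easy to sketch (a toy example of the general idea, not CLIPS):

def run(rules, facts):
    """Fire condition-action rules on a fact base until no rule can add
    a new fact.  Each rule is a (condition, action) pair; the action
    returns facts to assert -- no "or" in the action part."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                for fact in list(action(facts)):
                    if fact not in facts:
                        facts.add(fact)
                        changed = True
    return facts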

The problem with defining a purely logical complexity measure is that I can't see how it could be made invariant under logical equivalence. It is often enough to define auxiliary predicates, as I do in my book whenever useful, to make even the most complex formulæ look simple.
Looks like we're brought back to a kind of implementation problem: a logical formula using only the primary predicates vs its different re-writings with auxiliary predicates.

About the primary predicates I use in Sudoku: not only candidate(n, r, c) but also value(n, r, c) for a decided cell and correspondence(r, c, b, s) for the correspondence between the two natural coordinate systems on the grid (and, of course, equality).
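To make correspondence concrete, here is one possible encoding (an illustration with 0-based coordinates, not the formulation used in my book):

def correspondence(r, c):
    """The (block, square) coordinates of cell (r, c): the predicate
    correspondence(r, c, b, s) holds iff this function returns (b, s)."""
    return 3 * (r // 3) + c // 3, 3 * (r % 3) + c % 3

# e.g. correspondence(4, 7) == (5, 4): row 4, column 7 lies in block 5,
# square 4 (0-based throughout).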
denis_berthier
2010 Supporter
 
Posts: 4235
Joined: 19 June 2007
Location: Paris

Postby Red Ed » Fri Oct 19, 2007 6:20 pm

Well, Denis, you've obviously read up a lot more on this subject than I have :D - so I am not about to disagree. Sure, it seems to be a very hard thing to define a measure that meets the intuitive notion of "complexity". So I'm not even going to try.

I should avoid using the word "complexity" for my part "a" (number of possible realisations of the rule, essentially). I'll call it cover dimension (cover in the sense that all possible realisations are catered for; dimension in the sense that I take log(base 9) of the total).

Could you show me a collection of say 20 or so resolution rules? Then I'll compute the cover dimensions and we can see whether or not the relative rankings feel right.
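To show what I have in mind, here is the computation for one rule (a naked pair, with my own rough realisation count on an empty grid; the count itself is an assumption for illustration, not a figure anyone has checked):

from math import comb, log

units = 27                 # 9 rows + 9 columns + 9 boxes
cell_pairs = comb(9, 2)    # ways to place the pair within a unit
digit_pairs = comb(9, 2)   # ways to choose the two shared digits
realisations = units * cell_pairs * digit_pairs   # 34992

cover_dimension = log(realisations, 9)            # ~ 4.76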
Red Ed
 
Posts: 633
Joined: 06 June 2005

Postby Ruud » Fri Oct 19, 2007 7:44 pm

Denis Berthier wrote:When you check a puzzle for uniqueness, can you avoid using recursive T&E?
BTW, if you (or others here) don't like the name T&E for the recursive search procedure defined at the start of this thread, can you suggest any better name?

When my program checks a puzzle for uniqueness, it uses Knuth's Dancing Links technique. This is a fast, depth-first, brute-force backtracking method based on his Algorithm X.

Since I use it as a verification and not as a part of the solving process, I do not see the relevance of your question. Because it is so fast, I don't want to avoid it.
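DLX itself is far too long to sketch here, but the uniqueness check it performs amounts to "count solutions, stop at two". A minimal backtracking counter (emphatically not DLX, just my sketch of the same job) looks like this:

def count_solutions(grid, limit=2):
    """grid: list of 81 ints, 0 = empty; returns min(#solutions, limit)."""
    if 0 not in grid:
        return 1                                  # grid full: one solution
    i = grid.index(0)
    r, c = divmod(i, 9)
    total = 0
    for n in range(1, 10):
        # n is legal in cell i when no peer (row, column, box) holds it
        if all(grid[j] != n for j in range(81)
               if j // 9 == r or j % 9 == c
               or (j // 27 == i // 27 and j % 9 // 3 == c // 3)):
            grid[i] = n
            total += count_solutions(grid, limit - total)
            grid[i] = 0
            if total >= limit:
                break
    return total

# a proper puzzle: count_solutions(cells, 2) == 1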

The recursive T&E you're describing sounds more like backtracking.

In my perception, the unforgivable techniques are:

brute force/backtracking: Try every possibility.
guessing: Make random placements; eliminate when a placement leads to an error, claim victory when it leads to the solution, undo when there is no definitive result.
T&E: Make random placements; eliminate when a placement leads to an error.
bifurcation: Same as guessing or T&E, but limited to bivalue/bilocal candidates.
recursive bifurcation: Same as backtracking, but also limited to bivalue/bilocal candidates.

I realize the entries in this list are not mutually exclusive, but this is how these terms are commonly used in the Sudoku world.

I've experimented with taking measurements in the DLX process, to see if they could shed some light on the difficulty level. Two measurements were taken:

Number of non-trivial constraints in the direct path towards the solution. The maximum I've encountered for regular Sudoku is 6. There's no correlation between this measurement and the difficulty, other than that a non-trivial count of 0 means the puzzle can be solved with singles only.

Nodes investigated. Again, no correlation with difficulty, but I found that a standard Sudoku with a unique solution requires no more than 2000 nodes to be checked. The average is about 200 for non-trivial sudokus. To speed up my puzzle generator, I'm using a limit of 1500. For Sudoku-X, these values are 10 times higher.
Ruud
 
Posts: 664
Joined: 28 October 2005

Postby ravel » Fri Oct 19, 2007 7:54 pm

denis_berthier wrote:Also, my chains are defined as patterns of cells and links ...
Denis, the chains you are talking of are a bit strange, hard to spot, rather effective, and invented by you. So I suggest referring to them as Berthier chains.
ravel
 
Posts: 998
Joined: 21 February 2006

Postby denis_berthier » Sat Oct 20, 2007 6:31 am

Red Ed wrote:I should avoid using the word "complexity" for my part "a" (number of possible realisations of the rule, essentially). I'll call it cover dimension (cover in the sense that all possible realisations are catered for; dimension in the sense that I take log(base 9) of the total).
Could you show me a collection of say 20 or so resolution rules? Then I'll compute the cover dimensions and we can see whether or not the relative rankings feel right.

What exactly do you need for this?
For better readability of my book, I've written very few rules using only the primary predicates; e.g. an xy4-chain is written:
rc/- {n1 n2} - {n2 n3} - {n3 n4} - {n4 n1} => rc≠ n1
where:
the "rc/- " prefix indicates that the rule must be interpreted in rc-space,
(r, c) always names a target cell,
{n1 n2} stands for bivalue,
"-" stands for rc-linked.

hxy-rn and hxy-cn chains are obtained by permuting c and n (resp. r and n).
Writing the full rules explicitly, without such auxiliary predicates, would be rather cumbersome.
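Still, for concreteness, here is one possible direct transcription of the xy4 rule into code (a sketch only; the cand layout and the rc_linked helper are assumptions of mine, not anything from my book):

def rc_linked(p, q):
    """Two cells are rc-linked when they share a row, column or block."""
    (r1, c1), (r2, c2) = p, q
    return p != q and (r1 == r2 or c1 == c2 or
                       (r1 // 3 == r2 // 3 and c1 // 3 == c2 // 3))

def xy4_eliminates(cand, chain, target, n1):
    """chain: four cells expected to carry {n1 n2}-{n2 n3}-{n3 n4}-{n4 n1};
    target: the cell rc of the rule.  True when n1 may be eliminated."""
    s = [cand[r][c] for r, c in chain]
    if any(len(x) != 2 for x in s):
        return False                              # all cells must be bivalue
    if not all(rc_linked(chain[i], chain[i + 1]) for i in range(3)):
        return False
    d01, d12, d23 = s[0] & s[1], s[1] & s[2], s[2] & s[3]
    if not (len(d01) == len(d12) == len(d23) == 1):
        return False                              # one linking digit per link
    n2, n3, n4 = d01.pop(), d12.pop(), d23.pop()
    pattern = (s[0] == {n1, n2} and s[1] == {n2, n3} and
               s[2] == {n3, n4} and s[3] == {n4, n1})
    return pattern and rc_linked(target, chain[0]) and rc_linked(target, chain[3])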
denis_berthier
2010 Supporter
 
Posts: 4235
Joined: 19 June 2007
Location: Paris

Postby denis_berthier » Sat Oct 20, 2007 6:54 am

ravel wrote:
denis_berthier wrote:Also, my chains are defined as patterns of cells and links ...
Denis, the chains you are talking of are a bit strange, hard to spot, rather effective, and invented by you. So I suggest referring to them as Berthier chains.

You say "my" chains are hard to spot. How long have you been trying (and how long have you been using other patterns)? Anything new seems harder than what we already know. But is an nrct4 chain objectively harder to spot than a mutant finned, sashimi or roasted (I don't know your cooking preferences) Jellyfish?
And do you think that AICs with ALSs are easier to spot?

Naming a rule or a pattern after its inventor is generally not a good idea because it doesn't convey much information. Moreover, we need names for different types of chains.
If you need a general name for all "my" chains, I'd rather suggest "factual chains", because they are built from basic facts; they are not chains of inferences.
It should not be forgotten that my framework leads to considering chains like any other patterns.
As for the specific names I've given to "my" specific types of chains, it seems they have already been adopted, so I don't see any reason for changing them now, unless we find really better descriptive names.

For 2D chains:
hxy-rn and hxy-cn (the counterparts of xy-chains in the rn and cn spaces)
xyt (and their hxyt-rn and hxyt-cn counterparts)
xyz (and their hxyz-rn and hxyz-cn counterparts)
xyzt (and their hxyzt-rn and hxyzt-cn counterparts)

For 3D-chains:
nrc-chains (which are more or less only a different view of "basic" NLs or AICs)
nrct-chains
nrcz-chains
nrczt-chains

All these chains form a hierarchy of complexity (not in a formal sense of this word - see the other posts).
denis_berthier
2010 Supporter
 
Posts: 4235
Joined: 19 June 2007
Location: Paris

Postby Red Ed » Sat Oct 20, 2007 7:03 am

denis_berthier wrote:
Red Ed wrote:Could you show me a collection of say 20 or so resolution rules? Then I'll compute the cover dimensions and we can see whether or not the relative rankings feel right.

What exactly do you need for this?

Just whatever formulation of those 20-or-so rules is easy for you to produce, Denis. I'll ask if/when I have problems understanding them. Thanks.
Red Ed
 
Posts: 633
Joined: 06 June 2005

Postby denis_berthier » Sat Oct 20, 2007 7:22 am

Ruud wrote:When my program checks a puzzle for uniqueness, it uses Knuth's Dancing Links technique. This is a fast, depth-first, brute-force backtracking method based on his Algorithm X.
Since I use it as a verification and not as a part of the solving process, I do not see the relevance of your question. Because it is so fast, I don't want to avoid it.

Not everybody can (or wants to) check uniqueness this way before solving a puzzle. So saying it is not part of the solving process is debatable.
My question was relevant in the context of the discussion of T&E: what does it mean for a human player to use a non-recursive form of T&E?

Ruud wrote:The recursive T&E you're describing sounds more like backtracking.
In my perception, the unforgivable techniques are:
brute force/backtracking: Try every possibility.
guessing: Make random placements; eliminate when a placement leads to an error, claim victory when it leads to the solution, undo when there is no definitive result.
T&E: Make random placements; eliminate when a placement leads to an error.
bifurcation: Same as guessing or T&E, but limited to bivalue/bilocal candidates.
recursive bifurcation: Same as backtracking, but also limited to bivalue/bilocal candidates.

The recursive T&E I'm describing is very different from your definition of backtracking:
- firstly, it is a whole family of techniques (depending on a set S of resolution rules);
- secondly, it doesn't try every possibility; it uses the knowledge in S to prune the search;
- thirdly, it is exactly the recursive form of your T&E (repeated eliminations lead to singles and to assertions); see the sketch after this list.
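Here is the sketch (my rendering in code; the grid and rule representations are assumptions, and each rule in S is taken to return True when it changed something):

import copy

class Contradiction(Exception):
    pass

def propagate(S, grid):
    """Apply the rules in S until quiescence; an emptied candidate set
    signals an inconsistent grid."""
    changed = True
    while changed:
        changed = any(rule(grid) for rule in S)
        if any(not cands for cands in grid.values()):
            raise Contradiction

def trial_and_error(S, grid, depth=1):
    """T&E(S): eliminate every candidate whose assertion leads, via the
    rules in S (nested `depth` times for the recursive form), to a
    contradiction.  grid maps (r, c) to a set of candidate digits."""
    for cell in grid:
        for digit in set(grid[cell]):
            trial = copy.deepcopy(grid)
            trial[cell] = {digit}                 # the hypothesis
            try:
                propagate(S, trial)
                if depth > 1:
                    trial_and_error(S, trial, depth - 1)
            except Contradiction:
                grid[cell].discard(digit)         # the T&E elimination
                propagate(S, grid)
    return grid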

Interestingly, you consider all the techniques you're mentioning as unforgivable. I guess you'd therefore consider my T&E also as unforgivable, a point on which I'd agree.
But this brings us back to my initial question: if one uses one of the unforgivable techniques, why stop there and not use its recursive form?
denis_berthier
2010 Supporter
 
Posts: 4235
Joined: 19 June 2007
Location: Paris

Postby denis_berthier » Sat Oct 20, 2007 7:53 am

Red Ed wrote:
denis_berthier wrote:
Red Ed wrote:Could you show me a collection of say 20 or so resolution rules? Then I'll compute the cover dimensions and we can see whether or not the relative rankings feel right.

What exactly do you need for this?

Just whatever formulation of those 20-or-so rules is easy for you to produce, Denis. I'll ask if/when I have problems understanding them. Thanks.

OK. Rules are grouped into families.
Can we start with the simplest family: xy-chains? I think this can already shed some light on the difficulty of the task.
We have the general pattern:
rc/- {1 2} - {2 3} - {3 4} - ... - {n-1 n} - {n 1} => rc≠1
(with the conventions of my previous post).
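In code, this general pattern is the length-n extension of the xy4 sketch earlier in the thread (again my rendering, reusing the assumed cand layout and the rc_linked helper from that sketch):

def xy_chain_eliminates(cand, chain, target, n1):
    """True when chain = [(r, c), ...] carries the bivalue pattern
    {n1 n2} - {n2 n3} - ... - {nk n1} and target sees both ends."""
    s = [cand[r][c] for r, c in chain]
    if any(len(x) != 2 for x in s) or n1 not in s[0]:
        return False
    if not all(rc_linked(chain[i], chain[i + 1]) for i in range(len(s) - 1)):
        return False
    carry = n1                        # whichever digit the first cell loses
    for x in s[:-1]:
        if carry not in x:
            return False
        (carry,) = x - {carry}        # the other digit propagates forward
    # either the first cell is n1, or propagation forces the last cell to n1
    return carry != n1 and s[-1] == {carry, n1} \
        and rc_linked(target, chain[0]) and rc_linked(target, chain[-1])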
denis_berthier
2010 Supporter
 
Posts: 4235
Joined: 19 June 2007
Location: Paris
