The concept of a RESOLUTION RULE and its applications

Advanced methods and approaches for solving Sudoku puzzles


Postby denis_berthier » Sat Aug 18, 2007 6:57 am


THE CONCEPT OF A RESOLUTION RULE AND ITS APPLICATIONS


I think that the first post of a thread should somehow define what its creator wants and does not want to discuss.

The purpose of this thread is to discuss applications of the concept of a resolution rule to longstanding (real or factitious) problems (e.g. T&E).
More generally, it is also to apply the new global conceptual framework introduced in my book (The Hidden Logic of Sudoku - HLS in the sequel), a framework based on first order logic.
A large excerpt of HLS has been available on my Web pages since the end of May. You may choose to ignore it, but you are supposed to have read it if you participate in this thread.


The basic definitions I'm repeating briefly here first appeared in HLS and on several occasions on this forum.


RESOLUTION RULES AND RESOLUTION PATHS

Definition: A resolution rule is a formula of first order logic (or predicate calculus), in the language of Sudoku, that satisfies the two conditions:
- it is a logical consequence of the Sudoku axioms,
- it is written in the condition => action form, where "action" can only be the assertion of a value or the negation of a candidate in some cells associated to the pattern (the target cells).
The condition is also called the pattern of the rule.


The language of Sudoku is built on a very limited number of primary predicates and associated atomic formulæ:
n1 = n2
r1 = r2
c1 = c2
b1 = b2
value(n, r, c)
candidate(n, r, c)
where n, r, c,… are variables (and = is the usual equality sign)
Auxiliary predicates can of course be defined. Formally, they are just formulæ with free variables.
For instance, share-a-unit(r1, c1, b1, r2, c2, b2) is defined as: r1 = r2 or c1 = c2 or b1 = b2
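To make the formalism concrete, here is a minimal Python sketch of the share-a-unit auxiliary predicate (the function names and the 0-based coordinate convention are my own illustration, not part of the text):

```python
def block(r, c):
    # Block index of cell (r, c) in a standard 9x9 grid, 0-based coordinates
    return 3 * (r // 3) + c // 3

def share_a_unit(r1, c1, b1, r2, c2, b2):
    # Direct transcription of the definition above:
    # two cells share a unit iff they share a row, a column or a block
    return r1 == r2 or c1 == c2 or b1 == b2
```

In the logical language, b is a primary variable; in code it is convenient to derive it from r and c as above.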


A resolution rule is proven once and for all, and the way it has been proven, by valid mathematical methods, has no impact of any kind on its logical validity.
Being logically valid doesn't prevent a rule from being more or less easy to apply, but this has nothing to do with the way it has been proven. The problems
- of proving a rule once and for all (generally these proofs are very straightforward)
- and of spotting the occurrences (instantiations) of its defining pattern on a real grid
are totally independent.


Definition: A resolution path is a sequence of occurrences of resolution rules with their conclusions - with no additional ad hoc pieces of reasoning. A resolution path is thus the mathematical proof of the solution.
Unfortunately, examples given in forums very rarely satisfy this definition, although they often could.
This is mainly due to:
- ambiguous examples with incomplete resolution paths and ill-adapted notations,
- a generalised confusion, on this forum and others, between the description of a purely factual situation (e.g. the existence of a general link in my sense, i.e. a unit being shared between two cells, or of a c-link or conjugacy link) and the way this can be used in an inference step. In this regard, the notions of strong and weak links are certainly the most confusing ones that can have been introduced - leading e.g. to hyper-realistic debates on whether a strong link is also a weak link.



EXAMPLE 1: CHAINS

Most advanced resolution rules are based on chains of various kinds.

Definition: A (2D) chain is a sequence (i.e. a strictly, linearly ordered list) of cells such that each cell in the chain (but the first) shares a unit with (synonymous: is linked to) the previous one and such that the first and last are different.
As a consequence, a chain has no global loop. It may have internal loops, although I have proven that this is useless for some types of chains (xy- or c- chains).
We shall be interested in specific types of chains, to which resolution rules can be associated. Specific chains will satisfy additional conditions, but they will always first be chains in the above, quasi "physical" sense.
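As an illustration, the quasi "physical" chain condition can be sketched in Python, with the link predicate passed as a parameter (the names are mine, not the book's):

```python
def is_chain(cells, linked):
    # A chain is a strictly ordered list of cells in which each cell (but the
    # first) is linked to the previous one, and the two endpoints differ.
    # `linked` is any link predicate, e.g. share-a-unit for rc-space chains.
    if len(cells) < 2 or cells[0] == cells[-1]:
        return False
    return all(linked(cells[i - 1], cells[i]) for i in range(1, len(cells)))
```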

Generally, the condition (or the pattern) defining a specific kind of chain and the associated rule are written in a graphico-logical form, i.e. not explicitly as a logical formula but in a (much more compact) form that can be translated straightforwardly into a logical formula (the details of this translation are given in HLS). The graphico-logical form is considered as a shorthand for the logical formula.
Example:
xy4-pattern: {1 2} - {2 3} - {3 4} - {4 1}
xy4-rule: {1 2}* - {2 3} - {3 4} - {4 1}* => TC≠1
where:
- curly brackets describe the content of cells,
- numbers are shorthands for variable names (1 for x1, 2 for x2, …),
- "-" denotes a link between cells,
- the starred cells are those that MUST be linked to a target cell; the two endpoints are always starred (for xy-chains, only the two endpoints are starred; but additional cells can be starred in xyzt-chains),
- TC is any target cell, i.e. any cell having a link (possibly a different one) with both endpoints of the chain.
Notice that the conclusion of these rules is almost always "TC≠1", a symbolic notation for negating candidate x1 in cell TC (only very few rules assert values).

A target cell of a chain NEVER belongs to the chain. This is very important for two reasons: 1) all the chain rules that I have introduced in HLS can thus be based on HOMOGENEOUS patterns; 2) it allows a chain to have several targets.
Here homogeneous means that the pattern is a sequence of similar bricks. In particular, a target cell of an xy-chain doesn't have to be linked to its endpoints by xy-links. Homogeneity couldn't be granted if the target cell had to be included in the pattern. This applies to chains of all types, not only xy.
This also allows us to discard irrelevant distinctions depending on the type of links a target cell has with the chain (such as Nice Loops being "continuous" or "discontinuous").
I have proven in HLS that all xy-loops or Nice Loops (not considering subset links) can be reduced to such open chains (this doesn't forbid using loops, but you should be aware that they do not lead to more eliminations).
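A hedged sketch of how the xy4 rule might be applied over a candidate map (my own illustrative code: `cands` maps cells to candidate sets, `linked` is the link predicate; a real solver would avoid brute-force permutations):

```python
from itertools import permutations

def xy4_eliminations(cands, linked):
    # For every chain of four bivalue cells matching the pattern
    # {1 2} - {2 3} - {3 4} - {4 1}, any target cell linked to both
    # endpoints loses candidate x1 (the value shared by the endpoints).
    bivalue = [c for c in cands if len(cands[c]) == 2]
    out = set()
    for a, b, c, d in permutations(bivalue, 4):
        if not (linked(a, b) and linked(b, c) and linked(c, d)):
            continue
        for x1 in cands[a]:          # try each way to name the variables
            (x2,) = cands[a] - {x1}
            if x2 not in cands[b]:
                continue
            (x3,) = cands[b] - {x2}
            if x3 not in cands[c]:
                continue
            (x4,) = cands[c] - {x3}
            if cands[d] != {x4, x1}:
                continue
            # target cells never belong to the chain
            for t in cands:
                if t not in (a, b, c, d) and linked(t, a) and linked(t, d) \
                        and x1 in cands[t]:
                    out.add((t, x1))
    return out
```

The returned pairs are the (target cell, candidate) eliminations licensed by the rule; nothing in the chain itself is touched, consistent with the remark above that a target never belongs to the chain.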



EXAMPLE 2: GENERALISED SYMMETRIES AND HIDDEN CHAINS

A general theorem proven in HLS is the following:
Theorem (informal version): If a resolution-rule is block-free, i.e. doesn't mention blocks, then the two formulæ obtained by permuting the words "column" and "number" (resp. "row" and "number") are resolution rules.
This abstract generalised symmetry property led me to introduce the rn- and cn- spaces (or representations).

Theorem: If a formula is block-free, then it is valid for Sudoku if and only if it is valid for Latin Squares.
Informally speaking, all and only the rules of Latin Squares can be applied in the rn- and cn- spaces.
No one should complain about having no blocks in the rn- and cn- spaces. This is not my choice but a consequence of the Sudoku axioms.

Many resolution rules can thus be transposed to the rn- and cn- spaces and I have designed an extended Sudoku Board in order to facilitate their use - so that this abstract symmetry becomes very concrete for the player.

This applies to the above example: xy-chains give rise to hxy-rn- and hxy-cn- chains.
hxy-chains are special cases of AICs, but cases that can be looked for (in their 2D "base space" - see below) as if they were simple xy-chains.
Moreover the new types of chains introduced in HLS (xyt- and xyzt-) also have transposed versions, hxyt- or hxyzt- chains, which are also completely new types of chains.


A 2D chain has a "base space": rc, rn or cn, i.e. the cells and the links in the defining pattern must be interpreted as rc-, rn- or cn- cells and links.
In rc- space, there are three types of links: row, column and block.
In rn- space, there are two types of links: row and number.
In cn- space, there are two types of links: column and number.
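The rn representation can be sketched as a simple change of coordinates (illustrative code, my own names): the rn-cell (r, n) holds the set of columns where n is still possible in row r, so a bivalue cell in rn-space corresponds to a bilocation (conjugate) pair in the standard view, and block-free rules can be run on it unchanged:

```python
def to_rn_space(cands):
    # `cands` maps the rc-cell (r, c) to a set of candidate numbers.
    # The result maps the rn-cell (r, n) to the set of columns in which
    # n is still possible in row r.
    rn = {}
    for (r, c), numbers in cands.items():
        for n in numbers:
            rn.setdefault((r, n), set()).add(c)
    return rn
```

The cn representation is obtained the same way with rows and columns exchanged.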

Since the conclusion of a chain rule is always the same (=> TC≠1), it can be omitted from the graphico-logical representation and included in the "logically valid" sign (that I'll write as "/-" because of problems on this forum with special characters). We thus get the graphico-logical representations for the xy4-, the hxy4-rn and the hxy4-cn resolution rules:

rc /- {1 2}* - {2 3} - {3 4} - {4 1}*
rn/- {1 2}* - {2 3} - {3 4} - {4 1}*
cn/- {1 2}* - {2 3} - {3 4} - {4 1}*



EXAMPLE 3: FULLY SUPERSYMMETRIC CHAINS (3D chains)

The above examples of resolution rules can be generalised to "3D chains".
Recently I've introduced the fully supersymmetric ("3D") versions of the above chains.
This also shows that the two apparently conflicting views of chains (chains of cells as in 2D chains and chains of candidates as in 3D chains) are fully compatible.
For details, see the thread on "fully supersymmetric chains".



RESOLUTION RULE vs RESOLUTION TECHNIQUE vs REPRESENTATION TECHNIQUE

Whereas I have given the phrase "resolution rule" a precise, purely logical definition, this is not the case for the word "technique". First, I think two different kinds of techniques should be defined.

By "resolution technique", we generally mean a procedure that can help get closer to a solution.

Definition: T&E is the following procedure: recursively make ad hoc hypotheses on something that you do not know to be true at the time you make them, explore their consequences, with the possibility of retracting each of these hypotheses and their consequences if they lead to a dead end.
T&E is a resolution technique in the above sense.
Some colouring techniques, depending on what one means by this, can be resolution techniques in this sense.

By "representation technique", we generally mean a representation that can be the support for several resolution techniques.
Marking conjugate candidates with upper and lower case letters is a technique in this sense. It can be used for finding Nice Loops or AICs.
The rn- and cn- spaces I've introduced in HLS are representation techniques in this sense.


First applications of these notions:

Theorem: Trial and Error is a resolution technique but not a resolution rule
Proof: since T&E can be applied any time a cell has at least two candidates, if it was expressible as a resolution rule, the condition of this rule could only be:
"if a cell has at least two candidates".
But it is obvious that, from such a condition, no general conclusion can be obtained (no value can be asserted and no candidate can be eliminated).
This should put an end to many factitious debates about T&E and to the absurd assimilation of some types of resolution rules with T&E, e.g. under the pretext that their proof uses reasoning by cases.

Similarly, tabling is a resolution technique, but not a resolution rule. Because it is based on the ad hoc propagation of complementary hypotheses, tabling is a (limited) form of T&E.


A given resolution rule can, in practice, be applied in various ways. Said otherwise, different resolution techniques can sometimes be attached to a single resolution rule.
This distinction is important. Forgetting it may lead to the absurd assimilation of an algorithm and a logical formula - two notions that do not belong to the same universe of discourse (CS vs logic).

An example of this has been given in the xyt-thread, where my initial technique of drawing arrows from the right linking candidate of a cell to the left linking candidate of the next cell and John's modified xy-colouring technique can both be used to spot xyt-chains.
As another alternative, within each associated technique, the search for an xyt-chain can start from a target cell (e.g. because one would like to eliminate some candidate in this cell) or from the first cell in the chain or from a previously obtained shorter chain.


Validity of a resolution rule is based on logic. Validity of a technique is based either on its conformity with the underlying resolution rule if there is one OR on the algorithm describing it.
For a technique that is not based on a resolution rule, it may be very difficult to guarantee that it doesn't amount to T&E.

Most, but not all, of the usual techniques could be re-written as resolution rules.



EXAMPLES OF TOPICS THAT COULD BE USEFULLY DEBATED IN THE PRESENT FRAMEWORK

How can we compare the complexities of two resolution rules?

How can we compare the complexities of two resolution techniques associated to the same resolution rule?

Is there a simple relation between the logical syntactic complexity of a rule and the difficulty of spotting its instantiations?

Given a resolution rule, which technique is best fitted to spot its occurrences in a real grid?

What are the relevant properties of a resolution rule? (being based on a homogeneous pattern, being 2D,…)

What are the relevant properties of a resolution technique? (being based on a resolution rule; for a chain: being more or less extensible in both directions, being based on a target cell, …)

This list is obviously non exhaustive.
denis_berthier
2010 Supporter
 
Posts: 3972
Joined: 19 June 2007
Location: Paris

Postby denis_berthier » Wed Sep 26, 2007 7:37 am

This is an improved and extended version of my first post in this thread. As everybody is looking for solutions based on "pure logic", I think my Trial and Error theorem should raise some interest.

FIRST ORDER LOGIC (FOL)

First order logic (FOL), also called "predicate calculus", is the background of the framework described here. If you don't know anything about it, an elementary reference is:

http://en.wikipedia.org/wiki/First-order_logic

First order logic should not be confused with common sense logic. One thing you should keep in mind is that if something is not illogical in the common sense of this word, it doesn't entail that it can be expressed in FOL.
The reason is that FOL has a strict formalism.
A first order theory is restricted to a well defined topic, which you may discuss only in a well defined language.
In particular, it doesn't include other mathematical theories, such as number theory or set theory, and FOL doesn't allow arbitrary mathematical operations.

Like Algebra, Differential Geometry and lots of other topics, Mathematical Logic is a specialised field of mathematics. Most mathematicians are very specialised in their own field and don't know much about other fields. Moreover, they generally don't know anything about FOL.
The reason is that they work within the framework of "naive set theory", where they can build sets of subsets, sets of functions and so on, they can use quantifiers on functions, subsets, subsets of subsets and so on, and they don't want to be restricted in these possibilities.
But FOL doesn't allow such constructions.

FOL appears naturally as the proper formalism for Sudoku because everybody is looking for solutions using only "pure logic". FOL provides the kind of restrictions able to give a precise meaning to the idea that a solution has to be obtained by "pure logic".



RESOLUTION RULES AND RESOLUTION PATHS

Definition: A resolution rule is a formula of first order logic (or predicate calculus), in the language of Sudoku, that satisfies the two conditions:
- it is a logical consequence of the Sudoku axioms,
- it is written in the condition => action form, where "action" can only be the assertion of value(s) and the negation of candidate(s) in some cells associated to the pattern (the target cells).
The condition is also called the pattern of the rule.


The language of Sudoku is built on a very limited number of primary predicates and associated atomic formulæ:
n1 = n2
r1 = r2
c1 = c2
b1 = b2
value(n, r, c)
candidate(n, r, c)
where n, r, c,… are variables (and = is the usual equality sign).

This is a natural modelling choice, justified by the fact that they constitute the minimum common to all players.
According to Occam's razor principle, any addition to these basic predicates should be justified by some necessity.

Auxiliary predicates can of course be defined. Formally, they are just formulæ with free variables.
For instance, share-a-unit(r1, c1, b1, r2, c2, b2) is defined as: r1 = r2 or c1 = c2 or b1 = b2
Other predicates that can be easily defined are:
- bivalue
- bilocation (or conjugate)
- nrc-linked
- nrc-conjugate,…

In the above definition, the condition part (which is also a FOL formula in itself) describes a possible situation on a grid, e.g. a Naked Triplet, an xy4-chain, an nrczt5-chain, …


A resolution rule is proven once and for all, and the way it has been proven, by any valid mathematical method, has no impact of any kind on its logical validity.
In particular, whether or not reasoning by cases has been used in the proof is totally irrelevant to its validity.
Being logically valid doesn't prevent a rule from being more or less easy to apply, but this has nothing to do with the way it has been proven. The problems
- of proving a rule once and for all (generally these proofs are very straightforward)
- and of spotting the occurrences (instantiations) of its defining pattern on a real grid
are totally independent.


Definition: A resolution path is a sequence of occurrences of resolution rules with their conclusions - with no additional ad hoc pieces of reasoning. A resolution path is thus the mathematical proof of the solution.
Unfortunately, examples given in forums very rarely satisfy this definition, although they often could.
This is mainly due to:
- ambiguous examples with incomplete resolution paths and inadapted notations,
- a generalised confusion, on this forum and others, between the description of a purely factual situation (e.g. the existence of a general link in my sense, i.e. a unit being shared between two cells, or of a conjugacy link) and the way this can be used in an inference step. In this regard, the notions of strong and weak links are certainly the most confusing ones that can have been introduced - leading e.g. to hyper-realistic debates on whether a strong link is also a weak link.



The above definitions don't have to be understood in a fundamentalist sense - which doesn't mean they must be relaxed:

1) A resolution rule can be written formally in FOL, but it can also be written in plain English. In this case, to be sure there is a real resolution rule behind it, the only three things you have to be careful about are:
- respect the condition => action form;
- in the condition part, or when you define new auxiliary predicates from the elementary ones (e.g. a new type of chain), use only boolean combinations and quantifications; always use only precise factual conditions (and not undefined concepts such as weak or strong link);
- in the action part, put only assertion of values and deletion of candidates; there can be no "or".

2) Logical formulæ can use auxiliary predicates, which can be given any convenient intuitive form.
The graphical representations I use for chains ARE logical formulæ - and they are very short.

As an elementary example, here are the formulæ for an xy4, and hxy-rn4 and an hxy-cn4 chain:
rc /- {1 2}* - {2 3} - {3 4} - {4 1}*
rn/- {1 2}* - {2 3} - {3 4} - {4 1}*
cn/- {1 2}* - {2 3} - {3 4} - {4 1}*

3) Basic AICs (what I call nrc-chains) are resolution rules.
Of course, xy, xyt, xyzt, hxy, hxyt, hxyzt, nrc, nrct, nrcz and nrczt chains are resolution rules.

4) As soon as you consider that you can shift the rows or columns, you have variables and you are already doing FOL (and not propositional calculus). There is no difference between Math and Logic in the use of variables. A variable in a formula is the same thing as a variable in a polynomial.

5) There is no reason to broaden the definition of a resolution rule (e.g. by accepting "or statements" in the action part; that would radically change the framework).
The problem is not to extend the definition but to find additional, not necessarily formal, conditions a valid resolution rule should satisfy in order to be useful in practice.




RESOLUTION RULE vs RESOLUTION TECHNIQUE vs REPRESENTATION TECHNIQUE

Whereas I have given the phrase "resolution rule" a precise, purely logical definition, it is not yet the case for the word "technique". First, I think two different kinds of techniques should be defined.

A "resolution technique" is a procedure that can help get closer to a solution by eliminating candidates or asserting values.

Some colouring techniques, depending on what one means by this, can be resolution techniques in this sense.

A "representation technique" is a representation that can be the support for several resolution techniques.
Marking conjugate candidates with upper and lower case letters is a technique in this sense. It can be used for finding Nice Loops or AICs.
The rn- and cn- spaces I've introduced in HLS are representation techniques in this sense.


One important notion is that of a resolution technique being associated to or being the implementation of a resolution rule.
A given resolution rule can, in practice, be applied in various ways. Said otherwise, different resolution techniques can sometimes be attached to a single resolution rule.
This distinction is important. Forgetting it may lead to the absurd assimilation of an algorithm and a logical formula - two notions that do not belong to the same universe of discourse (CS vs logic).

A first example of this has been given in the xyt-thread, where my initial technique of drawing arrows from the right linking candidate of a cell to the left linking candidate of the next cell and John's modified xy-colouring technique can both be used to spot xyt-chains.
As another alternative, within each associated technique, the search for an xyt-chain can start from a target cell (e.g. because one would like to eliminate some candidate in this cell) or from the first cell in the chain or from a previously obtained shorter chain.

Another example of this has been given in my post 319 in the "fully supersymmetric chains" thread, where several implementations of the nrc(z)(t) chains have been described.

A formal definition is the following:

A resolution technique is the implementation of a resolution rule if for any partially filled grid with candidates, applying the technique to this grid has the same final effect (values asserted and candidates deleted) as what can be logically concluded by the resolution rule.



Validity of a resolution rule is based on logic. Validity of a technique is based either on its conformity with the underlying resolution rule if there is one OR on the algorithm describing it.
For a technique that is not based on a resolution rule, it may be very difficult to guarantee that it doesn't amount to T&E.

Most, but not all, of the usual techniques could be re-written as resolution rules.



TRIAL AND ERROR

Definition: T&E is the following depth-first search procedure: recursively make ad hoc hypotheses on values that you do not know to be true at the time you make them, write all their consequences (candidates eliminated and values added), with the possibility of retracting each of these hypotheses and their consequences if they lead to a dead end.

T&E is a resolution technique in the above sense.
In fact, T&E is a family of resolution techniques, depending on how you write "all the consequences" of an hypothesis, i.e. on which family of resolution rules you adopt for this. This is not really important, because such consequences are only used to prune the tree of possibilities, i.e. to restrict the search.
There may also be variants of T&E depending on how many solutions you want in case the puzzle has several: this will be defined by the exit conditions of the algorithm.
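A minimal sketch of the T&E procedure defined above, with "all the consequences" reduced to naked-singles propagation (an assumption; any family of resolution rules could be plugged in) and the exit condition set to "first solution found"; all names are my own:

```python
import copy

def linked(a, b):
    # share-a-unit on a standard 9x9 grid, 0-based coordinates
    (r1, c1), (r2, c2) = a, b
    return r1 == r2 or c1 == c2 or (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3)

def propagate(cands):
    # "All the consequences", reduced here to naked-singles propagation.
    # Returns False on a dead end (a cell left with no candidate).
    changed = True
    while changed:
        changed = False
        for cell, ns in cands.items():
            if len(ns) == 1:
                (n,) = ns
                for other in cands:
                    if other != cell and linked(other, cell) and n in cands[other]:
                        cands[other].discard(n)
                        changed = True
    return all(cands.values())

def trial_and_error(cands):
    # Recursively hypothesise a value in an undecided cell; retract the
    # hypothesis (by restoring a copy) whenever it leads to a dead end.
    if not propagate(cands):
        return None
    if all(len(ns) == 1 for ns in cands.values()):
        return cands
    cell = min((c for c in cands if len(cands[c]) > 1),
               key=lambda c: len(cands[c]))
    for n in sorted(cands[cell]):
        trial = copy.deepcopy(cands)
        trial[cell] = {n}
        solution = trial_and_error(trial)
        if solution is not None:
            return solution
    return None
```

Changing the exit condition (continue instead of returning on the first solution) gives the variants mentioned above that collect several or all solutions.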

Being an algorithm, Trial and Error can obviously not be a resolution rule: the two pertain to different universes of discourse (algorithmics vs logic). But we have the stronger result:

Theorem: Trial and Error is a resolution technique that cannot be the implementation of any resolution rule

Proof: since T&E can be applied any time a cell has at least two candidates, if it was expressible as a resolution rule, the condition of this rule could only be:
"if a cell has at least two candidates".
But it is obvious that, from such a condition, no general conclusion can be obtained (no value can be asserted and no candidate can be eliminated).

Recently, I came up with a stronger version and an alternative proof.


T&E theorem (stronger version): Trial and Error is a resolution technique that cannot be the implementation of any set of resolution rules.

Second proof: if a puzzle has more than one solution, T&E is guaranteed to find one (or several or all, depending on the exit condition we put on the T&E algorithm - again, T&E is a family of algorithms, and the theorem applies to any variant).
On the contrary, as a set S of resolution rules can only lead to conclusions that are logical consequences of the axioms and the entries, if a puzzle has several solutions, S cannot find any.
e.g. if there are two solutions such that r1c1 is 1 in the first and 2 in the second, S cannot prove that r1c1=1 (nor that r1c1=2). It can therefore find none of these solutions.
q.e.d.
Last edited by denis_berthier on Mon Jul 28, 2008 11:02 am, edited 1 time in total.

Postby denis_berthier » Fri Oct 12, 2007 6:17 am

Ruud
At some point in time, you said that before solving a puzzle, you verified in an automated solver that it has a unique solution.
But you never answered my question: how can you do this?

More precisely, if you don't use Full T&E, i.e. recursive search (pruned by the available resolution rules), but only a limited form of it (i.e. depth one T&E), how can you check uniqueness?

I hope you'll answer this question, because my recursive definition of T&E has been debated (in the heinous atmosphere of Eureka - a forum I've definitively quit). Some have used this as a pretext to try to empty my T&E theorem of its content and my notion of a resolution rule of its full scope (i.e. the proper formalisation of what we mean by a "pure logic solution").
But nobody has ever been able to provide an example where using a non recursive form of T&E would be meaningful.

Postby keith » Fri Oct 12, 2007 11:28 pm

At some point in time, you said that before solving a puzzle, you verified in an automated solver that it has a unique solution.

Denis,
Robert Woodhead (MadOverlord) does just this in his Sudoku Susser code. I don't know how. As I understand it, you can test uniqueness using algorithms that no human solver would use.

I have also seen posts where some are able to count the number of solutions that a non-unique puzzle has. I don't know how they do this, but I am sure it is not by exhaustive trial and error.

I think that a search on <sudoku algorithm> or <sudoku programmer forum> might give you productive leads.

Keith
keith
2017 Supporter
 
Posts: 221
Joined: 03 April 2006

Postby re'born » Fri Oct 12, 2007 11:41 pm

denis_berthier wrote:Ruud
At some point in time, you said that before solving a puzzle, you verified in an automated solver that it has a unique solution.
But you never answered my question: how can you do this?

I've always been under the impression that Ruud uses Dancing Links to check a puzzle's validity. Ruud?
re'born
 
Posts: 551
Joined: 31 May 2007

Postby m_b_metcalf » Sat Oct 13, 2007 2:08 am

keith wrote:
I have also seen posts where some are able to count the number of solutions that a non-unique puzzle has. I don't know how they do this, but I am sure it is not by exhaustive trial and error.


Well, others can speak for themselves, but knowing how many solutions a non-unique puzzle has is not something that is often required, so brute force works perfectly well in most cases. It can be speeded up markedly by first applying some logic. That increases the number of givens and reduces the number of candidates. For very sparse non-unique puzzles, that might even be critical in order to get the count in a sensible time.

What is much more frequently required is to check whether a candidate puzzle has just one or more than one (i.e. at least two) solutions. That has to work very fast and it can pay to apply lots of logic before going into brute force.
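The check Mike describes (exactly one solution, or at least two) can be sketched as a brute-force counter with an early exit; this is my own illustrative code over a candidate map, not anyone's actual solver:

```python
import copy

def linked(a, b):
    # share-a-unit on a standard 9x9 grid, 0-based coordinates
    (r1, c1), (r2, c2) = a, b
    return r1 == r2 or c1 == c2 or (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3)

def count_solutions(cands, limit=2):
    # Backtracking count, stopping as soon as `limit` solutions are found.
    # For a uniqueness test, limit=2 is all that is needed:
    # a return value of 1 means unique, >= 2 means at least two solutions.
    undecided = [c for c in cands if len(cands[c]) > 1]
    if not undecided:
        consistent = all(not (a != b and linked(a, b) and cands[a] == cands[b])
                         for a in cands for b in cands)
        return 1 if consistent else 0
    cell = min(undecided, key=lambda c: len(cands[c]))
    total = 0
    for n in sorted(cands[cell]):
        trial = copy.deepcopy(cands)
        trial[cell] = {n}
        ok = True
        for other in trial:
            if other != cell and linked(other, cell):
                trial[other].discard(n)
                if not trial[other]:     # abandon the branch on an emptied cell
                    ok = False
                    break
        if ok:
            total += count_solutions(trial, limit - total)
            if total >= limit:
                break
    return total
```

As the post says, applying some logic before this brute-force step shrinks the candidate sets and can speed the count up markedly.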

Regards,

Mike Metcalf
m_b_metcalf
2017 Supporter
 
Posts: 13584
Joined: 15 May 2006
Location: Berlin

Postby denis_berthier » Sat Oct 13, 2007 4:57 am

Keith, re'born, Mike

Thanks for your answers.

In my question, there were indeed two different ones.

1) I took the proof of uniqueness as an example where full T&E (full recursive search, of course pruned by logic rules) seemed to be unavoidable.
I don't know how Sudoku Susser tests uniqueness. Aren't the "algorithms that no human solver would use" more or less equivalent to T&E (pruned by unlimited propagation of constraints)? Considering how little we know about uniqueness, I can't imagine a method that could test it on any puzzle without using some form of T&E.

DLX is a very general and powerful technique that can be used in many ways. Ruud himself used it successfully to implement my nrczt-tagging algorithm (or something close to it). But it can also be used to implement full T&E. Ruud's answer would be very interesting on this point.

One of the reasons for my introducing the notion of a resolution rule is that with them we are sure we are not using disguised forms of T&E. I consider that they are the proper formalisation for a "pure logic solution".
When we speak at the level of the algorithms, making the difference may be very hard - I have met a recent example by Darryl on the Programmers Forum.
Moreover, a small variation in an algorithm can radically change the underlying logic.
This is not to say we shouldn't use algorithms in our solvers - but we should be clear about what these algorithms are the implementation of.


2) For the second, more general question, let me re-state it.
Do you know any situation in which using depth one T&E would be meaningful?
By depth one T&E, I mean making a priori only one hypothesis (still in addition to logic, of course) and merely abandoning the current search path if it leads to no contradiction after its logic consequences have been computed (i.e., in my approach, after resolution rules have been applied to it). Of course, this process can be applied again with another hypothesis, but still at level one.
It seems to me that there can be no benefit in such a strategy:
- "pure logic", as it is generally understood, is given up (although there is nothing "illogical" here, in the common sense of this word); as a result, no purely logical explanation of the solution, based on well defined patterns, can be given; if "pure logic" has been given up once for a puzzle, why not give it up two or more times in the same search path?;
- the guarantee of finding a solution is lost.
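For concreteness, one possible reading of this depth-one procedure (a sketch under my own assumptions: consequences computed by naked-singles propagation only; a candidate is eliminated when its hypothesis leads to a contradiction, and the search path is merely abandoned otherwise):

```python
import copy

def linked(a, b):
    # share-a-unit on a standard 9x9 grid, 0-based coordinates
    (r1, c1), (r2, c2) = a, b
    return r1 == r2 or c1 == c2 or (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3)

def propagate(cands):
    # Naked-singles propagation (an assumption: it stands in for "the
    # available resolution rules"); returns False on a contradiction.
    changed = True
    while changed:
        changed = False
        for cell, ns in cands.items():
            if len(ns) == 1:
                (n,) = ns
                for other in cands:
                    if other != cell and linked(other, cell) and n in cands[other]:
                        cands[other].discard(n)
                        changed = True
    return all(cands.values())

def depth_one_te(cands):
    # For each candidate: hypothesise it (one hypothesis only, never nested),
    # propagate, and eliminate the candidate iff a contradiction appears.
    # Paths without contradiction are simply abandoned.
    eliminated = set()
    for cell in list(cands):
        for n in sorted(cands[cell]):
            if len(cands[cell]) < 2:
                continue
            trial = copy.deepcopy(cands)
            trial[cell] = {n}
            if not propagate(trial):
                cands[cell].discard(n)
                eliminated.add((cell, n))
    return eliminated
```

On a naked-pair-like configuration this reproduces the pair's eliminations, which illustrates why depth-one T&E overlaps with tabling-style techniques; but, as noted above, nothing guarantees it reaches a solution.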

The reason why I'm highly interested in this second question is that the proof of my T&E theorem doesn't apply to this case (the certainty of getting a solution or proving there is none is essential in my proof). And I'd like to know if this should be considered as an important limitation.

Postby re'born » Sat Oct 13, 2007 8:50 am

denis_berthier wrote:2) For the second, more general question, let me re-state it.
Do you know any situation in which using depth one T&E would be meaningful?
By depth one T&E, I mean making a priori only one hypothesis (still in addition to logic, of course) and merely abandoning the current search path if it leads to no contradiction after its logic consequences have been computed (i.e., in my approach, after resolution rules have been applied to it). Of course, this process can be applied again with another hypothesis, but still at level one.
It seems to me that there can be no benefit in such a strategy:

I think a depth one T&E is included in what is normally called Tabling. As far as being able to solve puzzles, I know that Tabling is useful. But maybe you mean something different when you ask if it is meaningful.:?:
re'born
 
Posts: 551
Joined: 31 May 2007

Postby gsf » Sat Oct 13, 2007 4:33 pm

denis_berthier wrote:Do you know any situation in which using depth one T&E would be meaningful?

depth n T&E can be used to classify 9x9 sudoku

depth 1 and 2 T&E are basically the -q1 solving method in my solver, typically used to classify hard puzzles,
and not to provide human compatible solutions
T&E depth is equivalent to the number of concurrent -q1 propositions

for |sudogen0.txt|=19352
8401 are solved with { naked-singles hidden-singles }
the remaining 10951 are solved with depth 1 T&E { naked-singles hidden-singles } propositions

the hardest known 9x9 require depth 2 T&E { naked-singles hidden-singles locked-candidates } propositions
(6 known puzzles, easter monster and 5 relatives)
the remaining hardest can be solved with depth 2 T&E { naked-singles hidden-singles } propositions
(3957 known puzzles)

1367 of the hardest can be solved with depth 1 T&E { naked-singles hidden-singles locked-candidates } propositions
using { naked-singles hidden-singles locked-candidates naked/hidden-tuples x-wing/etc x-cycles y-cycles } propositions
yields 163 known puzzles that require depth 2 T&E

add in y-knots (a brute force chain method) and they all solve with depth 1 T&E

T&E depth is also related to backdoors
only 39 known puzzles have { naked-singles hidden-singles [ locked-candidates ] } backdoor size 3
all the rest have { naked-singles hidden-singles } backdoor size { 0 1 2 }
so, although cumbersome, a set of non-recursive rules could be devised to solve all known sudoku
the rules could be partitioned by { naked-singles hidden-singles } backdoor size
and applied in { 0 1 2 3 } order
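The partition-by-backdoor-size idea rests on computing, for a fixed rule set, the smallest number of solution values that must be added to the givens before the rules finish on their own. A generic, hedged sketch of that search follows; `solves_with_rules` is a stand-in predicate for "the rule set alone completes this grid", not the actual -q1 implementation.

```python
from itertools import combinations

def backdoor_size(givens, solution, solves_with_rules, max_size=3):
    """Smallest k such that adding k solution values to the givens lets the
    fixed rule set finish the puzzle on its own (k = 0: rules already suffice).
    Returns None if the backdoor is larger than max_size."""
    free = [cell for cell in solution if cell not in givens]
    for k in range(max_size + 1):
        for extra in combinations(free, k):
            trial = dict(givens)
            trial.update({cell: solution[cell] for cell in extra})
            if solves_with_rules(trial):
                return k
    return None
```

The search is breadth-first over subset size, so the first success is guaranteed minimal; the cost grows combinatorially with `max_size`, which is why classification in practice stops at small depths.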
gsf
2014 Supporter
 
Posts: 7306
Joined: 21 September 2005
Location: NJ USA

Postby denis_berthier » Sat Oct 13, 2007 4:47 pm

re'born wrote:I think a depth one T&E is included in what is normally called Tabling. As far as being able to solve puzzles, I know that Tabling is useful. But maybe you mean something different when you ask if it is meaningful.:?:

I remember Ruud saying something like "tabling is not for human solvers".

By "meaningful", I mean:
if one uses T&E once, thus giving up the idea of a "pure logic solution", how could one justify refusing to use it recursively?
denis_berthier
2010 Supporter
 
Posts: 3972
Joined: 19 June 2007
Location: Paris

Postby denis_berthier » Sat Oct 13, 2007 5:02 pm

gsf wrote:
denis_berthier wrote:Do you know any situation in which using depth one T&E would be meaningful?

depth n T&E can be used to classify 9x9 sudoku

depth 1 and 2 T&E are basically the -q1 solving method in my solver, typically used to classify hard puzzles,
and not to provide human compatible solutions
T&E depth is equivalent to the number of concurrent -q1 propositions


I think this is beside the point. This classification is valid for a particular set of rules (Singles), but (as your examples show) it is changed if we add rules (and not necessarily very complex ones). The same remarks apply to the backdoors.

My question is about whether it is meaningful, for a human solver using pure logic rules and not knowing in advance the difficulty of the puzzles, to define and use a technique such as T&E (pruned by logic) with depth limited a priori to one for all the puzzles. See my previous post.
denis_berthier
2010 Supporter
 
Posts: 3972
Joined: 19 June 2007
Location: Paris

Postby gsf » Sat Oct 13, 2007 7:56 pm

denis_berthier wrote:My question is about whether it is meaningful, for a human solver using pure logic rules and not knowing in advance the difficulty of the puzzles, to define and use a technique such as T&E (pruned by logic) with depth limited a priori to one for all the puzzles. See my previous post.

ah, so that's what meaningful means

so { naked-singles hidden-singles } aren't resolution rules?
gsf
2014 Supporter
 
Posts: 7306
Joined: 21 September 2005
Location: NJ USA

Postby Red Ed » Sat Oct 13, 2007 9:13 pm

denis> It seems to me that there can be no benefit in such a strategy

Perhaps this comes down to a preference (for some people) for breadth-first search over depth-first search. Since the backdoor size of most puzzles is <2, it might be intuitively preferable (for some people) to try all possible single-value backdoors before diving deeper. So, if we're comparing recursive T&E (= DFS) to { try all at depth 1, then all at depth 2, then ... } (= BFS) then I don't see any problem. But I agree that just { try all at depth 1, then give up } (= truncated BFS) is a bit silly.
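The DFS/BFS distinction can be made concrete with a depth parameter: calling a depth-limited T&E with depth = 1, then 2, then 3 gives the breadth-by-depth order described above, while a single call with a large depth behaves like the recursive, depth-first variant. A self-contained sketch on a toy candidate-set representation (dict of cell → candidate set, units = lists of mutually constrained cells; all names illustrative):

```python
def naked_singles(cands, units):
    """Toy propagation: a decided cell removes its value from its peers."""
    changed = True
    while changed:
        changed = False
        for unit in units:
            for cell in unit:
                if len(cands[cell]) == 1:
                    v = next(iter(cands[cell]))
                    for peer in unit:
                        if peer != cell and v in cands[peer]:
                            cands[peer].discard(v)
                            changed = True

def contradictory(cands):
    return any(not s for s in cands.values())

def te_eliminate(cands, units, depth):
    """Depth-limited T&E: eliminate a candidate only when asserting it is
    refuted within `depth` nested hypotheses."""
    progress = True
    while progress:
        progress = False
        for cell in cands:
            for v in list(cands[cell]):
                trial = {c: set(s) for c, s in cands.items()}
                trial[cell] = {v}                   # hypothesis
                naked_singles(trial, units)         # its consequences
                if depth > 1 and not contradictory(trial):
                    te_eliminate(trial, units, depth - 1)  # nested hypotheses
                if contradictory(trial):
                    cands[cell].discard(v)          # refuted at this depth
                    progress = True
    return cands
```

The BFS ordering is then `for d in (1, 2, 3): te_eliminate(cands, units, d)`, stopping as soon as the grid is solved; the memory question raised below concerns a human tracking all depth-1 trials before any depth-2 one, which this mechanical sketch of course sidesteps.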
Red Ed
 
Posts: 633
Joined: 06 June 2005

Postby denis_berthier » Sun Oct 14, 2007 1:26 am

gsf wrote:
denis_berthier wrote:My question is about whether it is meaningful, for a human solver using pure logic rules and not knowing in advance the difficulty of the puzzles, to define and use a technique such as T&E (pruned by logic) with depth limited a priori to one for all the puzzles. See my previous post.

ah, so that's what meaningful means

so { naked-singles hidden-singles } aren't resolution rules?


I can't see any relation between your question and what precedes it.
You obviously know that NS and HS are resolution rules.
denis_berthier
2010 Supporter
 
Posts: 3972
Joined: 19 June 2007
Location: Paris

Postby denis_berthier » Sun Oct 14, 2007 1:28 am

Red Ed wrote:Perhaps this comes down to a preference (for some people) for breadth-first search over depth-first search. Since the backdoor size of most puzzles is <2, it might be intuitively preferable (for some people) to try all possible single-value backdoors before diving deeper. So, if we're comparing recursive T&E (= DFS) to { try all at depth 1, then all at depth 2, then ... } (= BFS) then I don't see any problem. But I agree that just { try all at depth 1, then give up } (= truncated BFS) is a bit silly.


I see we agree on the essential.
I just wonder whether BFS, with all the memory it requires, is humanly feasible (unless we are very close to the solution).
denis_berthier
2010 Supporter
 
Posts: 3972
Joined: 19 June 2007
Location: Paris
