## Alternating Inference Chains

Advanced methods and approaches for solving Sudoku puzzles

There is no doubt that one can use any of the following Boolean operators exclusively to convey the same idea: OR, AND, IMPLIES.

AW wrote: Allowing that the general pattern for chaining is (where Q is to be taken generally as "some conclusion"):

((A implies Q) and (~A implies Q))

That is a fine starting point, and using it as such will naturally lead one to want to write how to get there using similar language. The starting point for an AIC is:

Assuming we want to prove Q cannot exist
Q -- {puzzle conditions}.

To my way of thinking, a good language choice should naturally mirror {puzzle conditions}. What language choice is a better mirror?

Clearly, since all language choices can be replaced with equivalent language choices, the choice of mirror is in the eye of the beholder. I merely see the language choice of strong and weak as mirroring the rules of the game:

Each large container can have no more than one of each candidate. - This is a weak link.

Each large container must have at least one of each candidate. - This is a strong link.

The unwritten rules:
Each cell must have at least one candidate. - A strong link.
Each cell can have no more than one candidate. - A weak link.

Certainly, one can write equivalent mirrors, using AND or using IMPLICATION - either exclusively using those operators, or using some combination of them.

Language choices for examining a problem are important, not because we cannot express the idea with other representations, but rather because the choice of language, if done carefully, serves to illuminate rather than obfuscate. All language choices have a price - they will always make some ideas harder to express, other ideas easier to express. For me, language choice comes down to:
What is more concise?
What is more clear?
What allows me to get done what I want to get done?

Perhaps the choice I prefer is best illustrated with an example:

Mirror image, using AIC, for a naked double, a hidden double, and an X-wing:
1) Recognize on the puzzle grid the following:
a) A==B
b) C==D
c) B--C
d) D--A

2) Write:
A==B -- C==D -- A.

3) Conclusions:
A==D
B==C

4) The conclusions are transparent, and they are all one needs to justify all three patterns. IMHO, this is fairly simple, and awfully elegant. Of course, those are arbitrary evaluations. As noted previously, 1c and 1d above are puzzle-independent, and if we are analyzing a standard nxn sudoku, they need not be listed.
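The four premises and two conclusions above can be checked mechanically. A minimal Python sketch (the encoding is mine, not from the thread): a strong link "==" is read as "at least one is true", a weak link "--" as "not both are true", and the conclusions are verified against every assignment consistent with the premises:

```python
from itertools import product

def strong(p, q):  # strong link (==): at least one is true
    return p or q

def weak(p, q):    # weak link (--): not both are true
    return not (p and q)

# Every assignment of A, B, C, D consistent with premises 1a-1d:
# A==B, C==D, B--C, D--A
models = [
    (a, b, c, d)
    for a, b, c, d in product([False, True], repeat=4)
    if strong(a, b) and strong(c, d) and weak(b, c) and weak(d, a)
]

# Conclusions A==D and B==C hold in every consistent model
assert all(strong(a, d) for a, b, c, d in models)
assert all(strong(b, c) for a, b, c, d in models)
```

Only two assignments survive the premises, and both satisfy the conclusions, which is exactly what makes the pattern cover the naked pair, hidden pair, and X-wing cases at once.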

Using implications, can one write down all the information considered, and reach all the possible conclusions, as efficiently? More efficiently?
If so, then it is as good a language choice, or perhaps better.
Steve K

Posts: 98
Joined: 18 January 2007

As far as language choice is concerned, I, being the hopelessly conservative type, have great faith in tradition.

a) (~A implies B)
b) (~C implies D)
c) (B implies ~C)
d) (D implies ~A)

Applicable transformations in this case:
1) From (P implies Q) we can infer (~Q implies ~P).
2) From ((P implies Q) and (Q implies R)) we can infer (P implies R).

Whereby we may establish:
From ((~A implies B) and (B implies ~C)) we have (~A implies ~C)
From ((~A implies ~C) and (~C implies D)) we have (~A implies D)
And from (D implies ~A) we have (A implies ~D)
We can go on to establish both (~P implies Q) and (P implies ~Q) for the pairs (A,B), (A,D), (C,B) and (C,D), and also both (P implies Q) and (Q implies P) for the pairs (A,C) and (B,D).
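The two transformations above lend themselves to a small closure computation. In the sketch below (a hypothetical encoding, not from the thread), each literal is a (name, polarity) pair, the four premises are closed under contraposition and transitivity, and the biconditional conclusions fall out:

```python
def closure(implications):
    """Close a set of implications under contraposition and transitivity.
    A literal is a (name, polarity) tuple; an implication is a pair of literals."""
    def neg(lit):
        return (lit[0], not lit[1])
    done = set(implications)
    while True:
        new = set()
        for p, q in done:
            new.add((neg(q), neg(p)))       # rule 1: contraposition
            for r, s in done:
                if q == r:
                    new.add((p, s))         # rule 2: transitivity
        if new <= done:
            return done
        done |= new

A, B, C, D = "ABCD"
premises = {((A, False), (B, True)),   # a) ~A implies B
            ((C, False), (D, True)),   # b) ~C implies D
            ((B, True), (C, False)),   # c) B implies ~C
            ((D, True), (A, False))}   # d) D implies ~A
facts = closure(premises)

assert ((A, True), (C, True)) in facts and ((C, True), (A, True)) in facts  # A is C
assert ((B, True), (D, True)) in facts and ((D, True), (B, True)) in facts  # B is D
assert ((A, True), (B, False)) in facts and ((A, False), (B, True)) in facts  # A is ~B
```

The fixed-point loop is just "following the implication chains": it terminates because there are only eight literals and at most sixty-four implications between them.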

This is not more concise, and it is perhaps only slightly less obscure, and then only to those who have experience with "logical constructs". It may even seem quite pointless to the untrained eye.

- Oblivious to the nature of the chain. X-wing, pairs, Y-wing, colouring, etc., are all covered.
- Not limited to proving that Q cannot exist. Depending on the premises, the conclusion might well be that Q in fact does exist. Mind you, proving that Q does exist is usually come by as a side-effect of proving that something else doesn't.
- Invariant building blocks. We're not tempted to "upgrade" (D -- A) to (D == A). We just add (~D implies A) to our known implications.
- Can be safely and efficiently broken up into two steps, which may be carried out in relative independence: (1) finding applicable implications and (2) working out the conclusions. Step (2) is just a matter of "following the implication chains" established in step (1).
- Simplifies the use of more complicated relationships. For instance, when C is a candidate group, it is often just an exercise in frustration to keep track of C. When its only real use is to establish that (B implies D), just add in (B implies D) and forget about C altogether during step (1).

Divide (simplify!) and conquer always works well for me. And in fact, establishing that both (P implies Q) and (Q implies P) leads to a major simplification: if you've translated your sudoku into a purely logical problem, you can cash in on the fact that (P is Q) and safely substitute P for Q (or Q for P if you prefer) in all occurrences.

Much like constraint reductions (e.g., line-box reduction), candidates may also be "reduced" under an appropriate framework. Thus, the terminally elegant conclusions of our four premises, whereby we may forget about B, C and D altogether:

e) A is C
f) B is D
g) A is ~B
AW

Posts: 27
Joined: 31 January 2007

Off-topic nonsense deleted
Last edited by Steve K on Sun Feb 04, 2007 4:35 pm, edited 1 time in total.
Steve K


I think it has already been said that the choice of the better language is mostly in the eye of the beholder. I'm partial to the logical approach since it is easier for me. And traditionally, logicians have preferred implication over other logical operators. I happen to think that's not a bad idea; it suits me.

I am, as you have noticed, trying to move the discussion away from the means of representation, and back along the lines of "what one can accomplish or prove", and how one goes about it.

I was hoping to establish that (A implies C) is more general than ((~A or ~B) and (B or C)). You may want to clarify what you mean by "means" in "(A--B==C) means (A=>C)". Indeed, although (A implies C) can be inferred from ((~A or ~B) and (B or C)), they are not equivalent; the truth tables are different: (A implies C) is valid when (A=true, B=true, C=true) and (A=false, B=false, C=false), whereas the other is not. (A implies C) is therefore more general, in the same sense that (A or B) is more general than (A xor B). Seeing as the (A implies C) relationship is less restrictive, it is therefore easier to establish.

Which is my first reason for preferring it *when chaining*. It is a minimal relationship, given the statement of the problem, ((A implies Q) and (~A implies Q)). Of course, being less restrictive, it may be less useful when dealing with other problems.

The second reason was that it involves only two terms. "That one can safely not tell someone what information is used to justify the implication" does indeed make little sense, as you say. But on the other hand, "having reached a conclusion, one can just write that conclusion" is not far from the point. What can you conclude from just (A--B==C), without any additional information, other than the aforementioned (A implies C)? As it is written, it does not hold without the middle term B, whereas (A implies C) does hold on its own, without a middle term. Once you've established (A implies C), you no longer need to keep track of the "explanation" (the way you found it). With (A--B==C) you need to keep track of all three terms until you manage to draw some conclusion that resolves or extends one or more of them.

Take the following situation (the notation r1c1v1 designates one of our 729 candidates, namely the candidate in row 1, column 1, having the value 1):

In row 1, all candidates with value 1 have been eliminated, save for r1c1v1, r1c2v1, r1c3v1 and r1c7v1.
In box 1, all candidates with value 1 have been eliminated, save for r1c1v1, r1c2v1, r1c3v1, r2c1v1, r2c3v1 and r3c1v1.

Keeping track of B seems somewhat painful:
{r2c1v1}--{r1c1v1, r1c2v1, r1c3v1}=={r1c7v1}
{r2c3v1}--{r1c1v1, r1c2v1, r1c3v1}=={r1c7v1}
{r3c1v1}--{r1c1v1, r1c2v1, r1c3v1}=={r1c7v1}

It seems easier to my mind to just note the implications and move on:
{r2c1v1} implies {r1c7v1}
{r2c3v1} implies {r1c7v1}
{r3c1v1} implies {r1c7v1}
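A quick way to see that these three implications are sound is to enumerate every assignment of the seven candidates that respects the two stated rules. A Python sketch (the constraint encoding is mine, not AW's):

```python
from itertools import product

cands = ["r1c1v1", "r1c2v1", "r1c3v1", "r1c7v1",
         "r2c1v1", "r2c3v1", "r3c1v1"]
row1 = ["r1c1v1", "r1c2v1", "r1c3v1", "r1c7v1"]
box1 = ["r1c1v1", "r1c2v1", "r1c3v1", "r2c1v1", "r2c3v1", "r3c1v1"]

def consistent(v):
    # Row 1 must contain a 1 (strong); box 1 can hold at most one 1 (weak)
    return sum(v[c] for c in row1) >= 1 and sum(v[c] for c in box1) <= 1

models = [dict(zip(cands, bits))
          for bits in product([0, 1], repeat=len(cands))]
models = [v for v in models if consistent(v)]

# {r2c1v1} implies {r1c7v1}, and likewise for r2c3v1 and r3c1v1
for a in ["r2c1v1", "r2c3v1", "r3c1v1"]:
    assert all(v["r1c7v1"] for v in models if v[a])
```

The middle term (the group {r1c1v1, r1c2v1, r1c3v1}) never needs to be named: it is absorbed into the constraints, which is precisely the point being made.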

Now, in what other ways can we establish that (A implies C) ?

P.S.:

As an aside, to be honest, I think the better language choice for kicking off a sudoku puzzle is :

"Each [of the 324 constraints] must solve with exactly one true candidate."
AW


Off-topic hijack deleted
Last edited by Steve K on Sun Feb 04, 2007 4:34 pm, edited 1 time in total.
Steve K


I hope this will be clear. I apologize if it is not.

Here are the truth tables, lined up for comparison (the second has been conveniently extended since it has fewer terms). "X" marks the two lines where they differ.

Code: Select all
```
((~A or  ~B) and (B or  C))          (A implies C)
  FT  F  FT   F   T  T  T      X      T    T    T
  FT  F  FT   F   T  T  F             T    F    F
  FT  T  TF   T   F  T  T             T    T    T
  FT  T  TF   F   F  F  F             T    F    F
  TF  T  FT   T   T  T  T             F    T    T
  TF  T  FT   T   T  T  F             F    T    F
  TF  T  TF   T   F  T  T             F    T    T
  TF  T  TF   F   F  F  F      X      F    T    F
```

As you say, "seeing A--B==C, one knows A=>C is valid": the second can be inferred from the first, since the second is true in all cases where the first is true. But not the other way around, as the first is not always true when the second is. The second is therefore more general: there is a logical space that allows for situations in which the second will be true, and the first will not be applicable. "Not applicable" does not mean "turns out false": if the first could be found to apply in a given case, and did turn out false, it would supersede the second.

As you say, clearly, under the first relationship, the two different lines are nonsense. That is because the first relies on the intermediate term B. What this boils down to is, there are cases where we will be able to establish that (A implies C), but there will be no way to translate the deduction into ((~A or ~B) and (B or C)). This does not mean that, in some puzzle, we would be able to find something that qualifies as ((~A or ~B) and (B or C)), and realize that, even though it is false, (A implies C) still works. It means that we will not be able to express the relationship in terms of ((~A or ~B) and (B or C)), presumably, because no simple B term can be found. Otherwise, the puzzle would be inconsistent. There will be, as you say, "some other intermediate step to get there", and this step will not be reducible to some term "B". Hence, (A implies C) is more general.
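The two "X" rows of the table can be confirmed by brute force. A small Python check (my own encoding of the two formulas):

```python
from itertools import product

def chain(a, b, c):   # ((~A or ~B) and (B or C)) : the A--B==C pattern
    return ((not a) or (not b)) and (b or c)

def implies(a, c):    # (A implies C)
    return (not a) or c

# The rows where the two formulas disagree
diff = [(a, b, c) for a, b, c in product([False, True], repeat=3)
        if chain(a, b, c) != implies(a, c)]

# Exactly the two X'd rows: (A,B,C) all false and (A,B,C) all true
assert diff == [(False, False, False), (True, True, True)]

# chain(...) entails implies(...), but not the other way around
assert all(implies(a, c) for a, b, c in product([False, True], repeat=3)
           if chain(a, b, c))
```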

This is parallel to using (B xor C) rather than (B or C) in the relationship ((~A or ~B) and (B or C)). It's not that (B xor C) doesn't work: it does. It's that (B or C) is more general and covers cases where (B xor C) is not applicable. As can be seen from:

Code: Select all
```
((~A or  ~B) and (B or  C))        ((~A or  ~B) and (B xor C))
  FT  F  FT   F   T  T  T            FT  F  FT   F   T  F  T
  FT  F  FT   F   T  T  F            FT  F  FT   F   T  T  F
  FT  T  TF   T   F  T  T            FT  T  TF   T   F  T  T
  FT  T  TF   F   F  F  F            FT  T  TF   F   F  F  F
  TF  T  FT   T   T  T  T      X     TF  T  FT   F   T  F  T
  TF  T  FT   T   T  T  F            TF  T  FT   T   T  T  F
  TF  T  TF   T   F  T  T            TF  T  TF   T   F  T  T
  TF  T  TF   F   F  F  F            TF  T  TF   F   F  F  F
```

Otherwise, there would be no reason to prefer (B or C) over (B xor C), since (B xor C) is more restrictive and supersedes the other when it is applicable. Just as there is no reason to prefer (A implies C) over ((~A or ~B) and (B or C)) unless there are cases where the former is applicable and the latter is not.
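The single "X" row here can likewise be confirmed by brute force (same truth-table enumeration style in Python):

```python
from itertools import product

def f_or(a, b, c):    # ((~A or ~B) and (B or  C))
    return ((not a) or (not b)) and (b or c)

def f_xor(a, b, c):   # ((~A or ~B) and (B xor C))
    return ((not a) or (not b)) and (b != c)

diff = [(a, b, c) for a, b, c in product([False, True], repeat=3)
        if f_or(a, b, c) != f_xor(a, b, c)]

# They differ only when A is false and B, C are both true: the X'd row
assert diff == [(False, True, True)]
```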

Incidentally, I realize that "easier to establish" was a poor choice of words on my part. Relationships that do not qualify for ((~A or ~B) and (B or C)) are considerably more complicated to establish. "Can be established in cases where the other cannot" would have been a better phrase. Just as ((~A or ~B) and (B or C)) can be established in cases where ((~A or ~B) and (B xor C)) cannot, those cases being somewhat more complicated as they go beyond the "conjugate link".

Thus, for the same kind of reason that B==C goes beyond (B xor C), (A implies C) goes beyond A--B==C. Hence, my preference for it. The fact that it also seems easier to compute is just a bonus.

If one takes "alternating inference chains" to mean precisely chaining patterns that are expressible in terms of A--B==C, then this is not the best place for me to inquire about (A implies C), as it goes beyond AIC. I am interested in a general formulation of "chains", not necessarily limited to "alternating inferences".

And hence, the question I find more interesting : in what other ways can we establish that (A implies C) ?
AW


I forgot to make one point clear enough, perhaps. What has been shown is that either:

((~A or ~B) and (B xor C)) is better than ((~A or ~B) and (B or C)) because it is more restrictive.

or

(A implies C) is better than ((~A or ~B) and (B or C)) because it is less restrictive.

To demonstrate that ((~A or ~B) and (B or C)) is better than both, I see only two possibilities :

1) In sudoku, there are no cases where (A implies C) cannot be translated to ((~A or ~B) and (B or C)), and there are cases where ((~A or ~B) and (B or C)) cannot be translated to ((~A or ~B) and (B xor C)).

2) The subject matter (AIC) is precisely such that ((~A or ~B) and (B or C)) is the best possible formulation (in which case I will politely take my subject matter elsewhere).
AW


Steve K and AW,

I'm only a casual reader here, but it appears the two of you are at best marginally on-topic ... and have hijacked this thread for your own discussion. Am I correct?
ronk
2012 Supporter

Posts: 4764
Joined: 02 November 2005
Location: Southeastern USA

Yes Ronk, you are correct. I shall desist. My apologies to all. Thank you for your forbearance.

The essence of what AW went to great pains to push into my scrambled collection of gray matter is:
(A -- B == C) => (A=>C) but
(A=>C) does not imply (A -- B == C).
Therefore: (A -- B == C) <=> (A=>C) is erroneous.

Therefore, A=>C is more general.

Hopefully, that is a faithful representation of AW's point.
Steve K


I reworked my definitions of strong and weak inferences in the opening post so that AIC rules can be more readily extended to nets.

I also wanted AIC rules which were more observational, as opposed to assumptive, in nature. For example, one can argue that ascribing a strong inference to two premises that encompass all of the possible solutions for the puzzle can be accomplished without ever making the assumption, "if A is false", and then working out that B must then be true.

The preceding would also be the basis of my objection to renaming strong and weak inferences to OR and NAND links. While they are useful in proving the validity of a chain, AICs should first and foremost represent an observable pattern of indeterminate length requiring no candidate assumptions to uncover.
Myth Jellies

Posts: 593
Joined: 19 September 2005

Myth Jellies wrote: The preceding would also be the basis of my objection to renaming strong and weak inferences to OR and NAND links. While they are useful in proving the validity of a chain, AICs should first and foremost represent an observable pattern of indeterminate length requiring no candidate assumptions to uncover.

I agree that the goal is an observable pattern requiring no candidate assumptions to uncover. To my way of thinking, though, OR and NAND are precisely that vehicle.

No matter how the relationship is expressed, the idea is precisely the same. Strong sets that exist in the puzzle lead to eliminations. The strong sets are an affirmation of truth - something that must exist.

OR expresses that relationship well, IMO. NAND is nothing but a universal reflection of the rule (no more than one of in...).
Steve K


The following reflects only my narrow opinion.

Since AIC is a perfectly good logical technique without regard to Sudoku, there is no reason not to remember that while thinking in AIC.

Thus, the premises in AIC are nothing other than Boolean variables.

One should, for the purposes of readability, eventually link the Boolean variables to actual candidate premises. However, there is no limit to how this linkage can be expressed.

Immediately, within AIC, one has the power of many tried and true mathematicians' tricks, including but not limited to:

Substitution
Induction
Set theory

Induction is valuable, as it allows, for example, theorem writing for ALL(?) continuous-loop AICs based on the simplest of chains, generally possible from a solved cell. Being able to do this certainly seems valuable, as many techniques now are not only firmly based in AIC, but also their root inspiration is laid bare.

Substitution is also valuable, (and in fact valuable with induction above).
One great value in substitution is that complex candidate premises are easily reduced to a single value. Thus, patterns that seem obscure are suddenly visually obvious, and perfectly easy to write in AIC. Find an example of the power of substitution at:
http://www.sudoku.org.uk/discus/messages/29/3939.html?1176626772

The value of set theory is mostly, I think, incumbent upon allowing substitution. It allows some substitutions to be expressed succinctly and with precision.

I have always been of the opinion that "visually apparent" is completely in the eye of the beholder, and a self-fulfilling prophecy. This is not even a sudoku-related observation, but in fact applies across every discipline.

Allowing the generalizations noted above produces a new set of items that are visually apparent in sudoku. The potential for new techniques based upon such generalizations is only faintly limited. Also, the potential to catalogue each known technique under one unifying umbrella, regardless of complexity, has some intrinsic (albeit mostly academic) value.

BTW: I am also of the opinion that there exists a tremendous overlap of the way I view AIC with SUM. Probably, one can theoretically express their equivalence.
Steve K


### Re: Alternating Inference Chains

Myth Jellies wrote:Deductions
Quite simply, at least one or the other (possibly both) of the two endpoint candidates (or candidate premises) of an AIC is true.

I must have missed something here.
I don't see how both endpoint candidates can be true at the same time.
As I see it, a valid AIC starting & ending with strong links does by itself establish a strong inference between the endpoint candidates.
Indeed, based on that, this is how I build AICs in my software, by concatenating smaller AICs or strong links through a weak link.
Can you show a (simple) example of an AIC where both endpoint candidates are true ?
TIA
Jean-Christophe

Posts: 149
Joined: 22 January 2006

### Re: Alternating Inference Chains

Jean-Christophe wrote:Can you show a (simple) example of an AIC where both endpoint candidates are true ?
TIA

(1)r1c1 = (1)r9c1 - (1)r9c4 = (1)r2c4 => r2c23|r1c56 <> 1

There is no reason why r1c1 and r2c4 could not both be 1's and some other 1 in row nine be true.

This is okay as the endpoints represent a strong inference and AIC extension only needs the strong (as opposed to conjugate) inference.
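Myth Jellies' point can be checked by enumeration. With A, B, C, D standing for (1)r1c1, (1)r9c1, (1)r9c4 and (1)r2c4 respectively (my encoding, not from the post), the sketch below confirms that at least one endpoint is always true, and that some consistent assignment makes both endpoints true at once:

```python
from itertools import product

def strong(p, q):  # strong inference (=): at least one is true
    return p or q

def weak(p, q):    # weak inference (-): not both are true
    return not (p and q)

# The chain (1)r1c1 = (1)r9c1 - (1)r9c4 = (1)r2c4 as links A=B, B-C, C=D
models = [(a, b, c, d) for a, b, c, d in product([False, True], repeat=4)
          if strong(a, b) and weak(b, c) and strong(c, d)]

# In every consistent model at least one endpoint is true ...
assert all(a or d for a, b, c, d in models)
# ... and there is a consistent model where BOTH endpoints are true
assert any(a and d for a, b, c, d in models)
```

So the endpoints carry a strong, not a conjugate, inference: "at least one" rather than "exactly one", which is all the eliminations require.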
Myth Jellies


Got it, thanks.
Note to myself: think twice before asking a stupid question.
Jean-Christophe

