1.
nazaz wrote: denis_berthier wrote: After some tweaking, I also found a puzzle directly contradicting the UR1.1 rule as I stated it
Given that you are working with your personal non-standard definition of UR1.1 here, ignoring the standard requirement for givens, the example has no relevance to the actual UR1.1 technique. You should make that clearer, e.g. calling your variant fake-UR1.1.
As my example conforms to my definition, I don't see any point here.
2.
nazaz wrote: denis_berthier wrote: Corollary (intuitive version 1): FOL cannot make any difference between a given and a value derived from the givens
Since you already rowed back on that overgeneralised claim here, you might want to edit your corollaries to clarify that they apply only to Sudoku FOL models constructed without an IsGiven(n,r,c) predicate or similar.
Once more, you failed to understand what I wrote. This corollary applies to ANY Sudoku model in which "given" really means "given". You can always introduce an isGiven predicate (a name that sounds more Java-like than FOL-like). But as you don't allow it to be used in any inference, you might as well call it isCupOfTea: it doesn't in any way represent the role of a given in Sudoku.
Think of it this way: FOL is not smart enough to extract meaning from a natural-language name; a predicate means nothing beyond what the axioms make it mean. Encoding meaning in explicit axioms is precisely what FOL was built for.
In short, you're confusing the natural meaning of "given", which I use in my informal corollary (as I clearly stated), with a formal predicate to which you fail to give any meaning.
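To see why an axiom-free predicate is inert, here is a deliberately trivial brute-force sketch (Python; every name in it, such as is_given, is invented for this illustration): a sentence that never mentions is_given gets the same truth value under every possible interpretation of is_given.
Code:
from itertools import product

# Toy 4-cell model; all names here are made up for the illustration.
CELLS = range(4)
VALUE = {0: 1, 1: 2, 2: 2, 3: 1}   # a fixed toy assignment of values

def sentence(value):
    # A sample sentence that does NOT mention is_given:
    # "some cell holds the value 2".
    return any(value[c] == 2 for c in CELLS)

truth_values = set()
for bits in product([False, True], repeat=len(CELLS)):
    is_given = dict(zip(CELLS, bits))  # one interpretation of is_given,
    truth_values.add(sentence(VALUE))  # deliberately never consulted

assert truth_values == {True}
print("same truth value under all", 2 ** len(CELLS),
      "interpretations of is_given")
The triviality is the point: with no axiom constraining it, renaming is_given to isCupOfTea yields a logically equivalent theory.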
3.
nazaz wrote: denis_berthier wrote: As a second result, any tentative formulation of an UR1.1 rule can only be what I first stated, without any mention of unclued cells.
There's no such consequence for the UR1.1 rule, only for the FOL model that describes it. Any formulation of UR1.1 in a FOL model of Sudoku requires something, like an IsGiven(n,r,c) predicate, to equip the model with the expressive power required to describe a unique rectangle.
See point 2.
Anyway, the only interesting result of this whole thread is what I stated before. I repeat it here, so that it doesn't get buried under irrelevant objections.
Theorem:
Suppose that (according to definition 1 in the first post of this thread), at some point in a resolution path, one has the following UR1.1 pattern of four cells spanning two rows (say r1, r2), two columns (say c1, c4) and two blocks (say b1, b2):
Code:
1 . . | 2 . .
2 . . | 13 . .
In addition, suppose that the three decided cells (those containing 1 or 2) were not givens.
Then, at the start, there was a non-degenerate UR1 (with an unspecified number of guardians in addition to the 3):
Code:
12 . . | 12 . .
12 . . | 123 . .
Proof: I'll show only that r1c1 must have 2 as a candidate at the start; the other candidates of the UR1 follow by similar arguments.
If this wasn't true, 2 would have to appear as a given somewhere in r1, c1 or b1. By hypothesis that given could not be r1c4 or r2c1 themselves, and any other cell of r1 sees r1c4 = 2, while any other cell of c1 or b1 sees r2c1 = 2. Obviously impossible.
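For the sceptical, the case analysis behind "obviously impossible" can be checked mechanically. Below is a minimal sketch (Python, 1-based coordinates; the helper names are invented for this illustration): every position that could host a blocking given 2 either is one of the decided cells (excluded by hypothesis) or sees one of them, contradicting the final values r1c4 = 2 and r2c1 = 2.
Code:
def block(r, c):
    # 1-based index of the standard 3x3 box containing (r, c).
    return 3 * ((r - 1) // 3) + (c - 1) // 3 + 1

def sees(a, b):
    # True if two distinct cells share a row, column or block.
    (ra, ca), (rb, cb) = a, b
    return a != b and (ra == rb or ca == cb or block(ra, ca) == block(rb, cb))

decided_2s = [(1, 4), (2, 1)]   # final 2s of the pattern, NOT givens
target = (1, 1)                 # claim: 2 was a start candidate here

# 2 could be missing from r1c1's initial candidates only if some peer
# of r1c1 held 2 as a given; check that every such peer is ruled out.
peers = [(r, c) for r in range(1, 10) for c in range(1, 10)
         if sees((r, c), target)]
for p in peers:
    assert p in decided_2s or any(sees(p, q) for q in decided_2s)
print("all", len(peers), "possible positions for a given 2 are ruled out")
This covers only the case proved above; the other candidates need the analogous checks.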
In short: unless its decided cells are givens, UR1.1 can only appear as a degenerate form of a previous UR1.
[Edit]: One can also consider this result as a reality check. You can try as hard as you like to go against the natural workings of FOL; at some point, it will catch up with you.