Allan Barker wrote:I have always assumed this to be the case, without proof, i.e., that an elimination can open new opportunities to eliminate other candidates that could not be cleared before, or in your words, open shorter paths. Further, doesn't this imply that given a group of candidates that cause an elimination, if one or more of the candidates is removed then the remaining candidates (alone) will still cause the same elimination?
Allan, the first assertion needs no proof, as we have lots of examples. The second is much more delicate: it isn't a consequence of the first, and everything depends on what you mean by "cause the same elimination". If you have a fixed set of resolution rules (a resolution theory) and it doesn't have the confluence property, it may be the case that, after an elimination, some other eliminations are no longer justified by the rules. One example is BUG: an elimination allowed by BUG may become impossible if another rule that destroys the BUG pattern is applied before it.
In the case of your solver, this shouldn't happen, but it might happen at the stage of the recognizer if its set of rules doesn't have the confluence property.
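A toy illustration of a rule set lacking confluence (a minimal sketch, not a real Sudoku engine; the rules and the three-candidate "pattern" are invented for the example): rule A can fire only while the full pattern is intact, so applying rule B first destroys the opportunity, just as applying another rule can destroy a BUG pattern before the BUG elimination is made.

```python
# Two elimination rules on a set of candidates.
# Rule A fires only while the pattern {'x', 'y', 'z'} is intact.
def rule_a(cands):
    if {'x', 'y', 'z'} <= cands:
        return cands - {'x'}
    return cands

# Rule B fires whenever 'y' is present; it destroys A's pattern.
def rule_b(cands):
    if 'y' in cands:
        return cands - {'y'}
    return cands

start = {'x', 'y', 'z'}
print(sorted(rule_b(rule_a(start))))  # A then B -> ['z']
print(sorted(rule_a(rule_b(start))))  # B then A -> ['x', 'z']
```

The two application orders end in different states, which is exactly the failure of confluence: an elimination available at one point may no longer be justified later.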
Allan Barker wrote:What would be interesting is, what portion of the logic used to solve a puzzle can be classified as nrczt chains, etc.
Yes.
One could also consider the maximum rank necessary to solve a puzzle as a new (rough) rating and ask:
- what % of the minimal puzzles can be solved at ranks 1, 2, 3,...?
- are you sure you ever need rank 3 (for this example or for any puzzle)?
- how does the rank correlate with other ratings?
- is there any relation between rank and nrczt-solvability?
Could your solver produce such a rating (discarding the recognizer if necessary), e.g. for gsf's list?
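As a sketch of what such a rating computation might look like (hypothetical: `max_rank_to_solve` stands in for whatever maximum rank your solver would report for a puzzle; no such function exists in any actual solver discussed here):

```python
from collections import Counter

def rank_distribution(puzzles, max_rank_to_solve):
    """Tally what % of the puzzles are solvable at each maximum rank."""
    counts = Counter(max_rank_to_solve(p) for p in puzzles)
    total = len(puzzles)
    return {rank: 100.0 * n / total for rank, n in sorted(counts.items())}

# Demo with a stand-in rating function over 10 dummy "puzzles":
demo = rank_distribution(range(10), lambda p: 1 if p < 8 else 2)
print(demo)  # {1: 80.0, 2: 20.0}
```

Run over something like gsf's list, the result would directly answer the first question above (what % of minimal puzzles are solvable at ranks 1, 2, 3, ...).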
You may think I'm giving too much importance to rank, but in my approach, introducing rules for (something analogous to your) rank 2 would add much complexity.
Allan Barker wrote:I was surprised I could "cut and paste" your first order logic into a second order form.
As second-order logic is an extension of first-order logic, I'm not really surprised.
Allan Barker wrote:I'm beginning to think that first order and second order logic are less different than I had imagined.
I don't know how different you imagined them to be. The basic rules are the same. The "only" difference is that you have quantifiers over subsets.
But this changes a lot of things:
- from a theoretical POV, you get almost the full power of set theory - which is generally considered too much by logicians,
- from a practical POV, you reach another scale of complexity.
For problems on a finite domain, such as Sudoku, the theoretical difference can be discarded, but not the complexity problems.
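To make the contrast concrete (a sketch in my own notation, not Allan's formalism, with an invented predicate cand giving a cell's candidates): a first-order rule quantifies only over individual cells or candidates, so its pattern has a fixed size, while an existential second-order rule quantifies over subsets, whose size is unbounded:

```latex
% First order: a fixed-size pattern, e.g. a naked pair in a unit U
\exists c_1, c_2 \in U \;\big( c_1 \neq c_2 \;\land\;
  \mathrm{cand}(c_1) = \mathrm{cand}(c_2) = \{a, b\} \big)
  \;\Rightarrow\; \dots

% Existential second order: a subset of unbounded size, e.g. a naked subset
\exists S \subseteq U \;\Big( \big|\textstyle\bigcup_{c \in S} \mathrm{cand}(c)\big| = |S| \Big)
  \;\Rightarrow\; \dots
```

In the first-order case, each subset size needs its own rule (pair, triple, quad, ...); the second-order rule covers them all at once, which is where the extra expressive power, and the extra search complexity, comes from.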
Considering that Sudoku(n) is NP-complete, you may be interested in the following:
Wikipedia http://en.wikipedia.org/wiki/Fagin_theorem wrote:Fagin's theorem is a result in descriptive complexity theory which states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It is remarkable since it is a characterization of the class NP which does not invoke a model of computation such as a Turing machine.
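A standard illustration (not from the discussion above): graph 3-colorability, an NP-complete property, is expressed by a single existential second-order sentence quantifying over three color sets R, G, B:

```latex
\exists R \,\exists G \,\exists B \;\Big[
  \forall v \,\big( R(v) \lor G(v) \lor B(v) \big)
  \;\land\;
  \forall u \,\forall v \,\big( E(u,v) \rightarrow
    \neg(R(u) \land R(v)) \land \neg(G(u) \land G(v)) \land \neg(B(u) \land B(v)) \big)
\Big]
```

Only the set variables R, G, B are second-order; the inner formula is first-order, which is exactly the shape Fagin's theorem requires for membership in NP.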
More generally:
Wikipedia http://en.wikipedia.org/wiki/Descriptive_complexity wrote:Descriptive complexity is a branch of finite model theory, a subfield of computational complexity theory and mathematical logic, which seeks to characterize complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and logic allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them.
Specifically, each logical system produces a set of queries expressible in it. The queries correspond to the computational problems of traditional complexity theory.
The first main result of descriptive complexity was Fagin's theorem, shown by Ronald Fagin in 1974,[1] which established that NP is precisely the set of languages expressible by sentences of existential second-order logic; that is, second order logic excluding universal quantification over relations, functions, and subsets.
Many other classes were later characterized in such a manner, most of them by Neil Immerman:
First-order logic defines the class FO, corresponding to AC0, the languages recognized by polynomial-size circuits of bounded depth, which equals the languages recognized by a concurrent random access machine in constant time.
First-order logic with a symmetric transitive closure operator added yields SL, which equals L, problems solvable in logarithmic space.
First-order logic with a transitive closure operator yields NL, the problems solvable in nondeterministic logarithmic space.
In the presence of linear order, first-order logic with a least fixed point operator gives P, the problems solvable in deterministic polynomial time.
Existential second-order logic yields NP, as mentioned above.
Universal second-order logic (excluding existential second-order quantification) yields co-NP.
Second-order logic corresponds to PH.
Second-order logic with a transitive closure (commutative or not) yields PSPACE, the problems solvable in polynomial space.
Second-order logic with a least fixed point operator gives EXPTIME, the problems solvable in exponential time.
As we usually consider only the case n=9, it isn't very clear how this relates to the type of rules (first order as in SudoRules or existential second order as in your solver) needed to solve all the puzzles. But as the rules we use are valid for any n...
My approach was different: anticipating the complexity problems and motivated by human solvability, instead of trying to solve all the puzzles with rules of unbounded complexity, I tried to solve as many puzzles as possible with first-order rules. For n=9, the answer is that 99.99% of the minimal puzzles can be solved with nrczt-chains. How this percentage would evolve with n is an open question. I don't think I'll ever provide an answer, because:
- I have no random generator for 16x16 puzzles (or, worse, for 25x25, 36x36, ...),
- I'm not sure SudoRules would have reasonable computation times and memory requirements for such puzzles,
- there is no obvious theoretical approach to such questions.