Complexity, Trial & Error, Backtracking and Guessing

Advanced methods and approaches for solving Sudoku puzzles

Re: Trial and Error

Postby jfigueras » Sat Jan 28, 2006 8:40 pm

castalia wrote:
jfigueras wrote:This completely misconstrues the meaning of complexity. Complexity measure is related to the dependence of the number of operations on program SIZE, not the number of operations required to handle a specific case.


No, an algorithm's complexity refers to the number of operations it requires as a function of the input size N. If an algorithm is of polynomial complexity, then the number of operations (or, similarly, the run time) is proportional to some power of N. If it's exponential, then the number of operations is proportional to some number to the Nth power. For more precise details, consult Knuth's 3 volumes or Herbert Wilf's very readable "Algorithms and Complexity".

mark


I don't see any difference between what you state now and what I stated. But what you state now (and I agree with that statement) was not exactly what you stated originally. So, as far as I'm concerned--no problem.

John
jfigueras
 
Posts: 2
Joined: 27 January 2006

Re: Trial and Error

Postby castalia » Sun Jan 29, 2006 1:24 am

Jeff wrote:Hi Mark, But, how can the complexity, (T&E or non-T&E index) for other techniques other than Eppstein's nonrepetitive path and cycle be determined for practical application?


Fortunately, most techniques (e.g. x-wings, pairs, triples) are much, much simpler to analyze than the algorithms given in Eppstein's paper. For example, pairs have complexity N^2 (polynomial), triples have N^3 (again, polynomial); both of these results follow from calculating the number of ways M items can be chosen from a set of N items.

In analyzing any algorithm, look for the worst-case scenario and then count the number of operations required by the algorithm for that scenario. The parameter N will refer to the number of digits (i.e. 9 for traditional sudoku). Use this same parameter N to refer to the number of candidates for any unit. Bear in mind that you're not attempting to obtain an exact number of operations, but merely an order of magnitude as it relates to N.
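
As a quick illustration of how those counts behave (a throwaway Python sketch, nothing more): the per-unit work for pairs and triples is just a binomial coefficient, and it grows polynomially with N.

Code: Select all
from math import comb

# Candidate pairs/triples to examine in one unit of N cells:
# C(N,2) ~ N^2 and C(N,3) ~ N^3, i.e. polynomial growth in N.
for n in (9, 16, 25):          # 9x9, 16x16, 25x25 sudoku
    print(f"N={n:2}  pairs={comb(n, 2):4}  triples={comb(n, 3):5}")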

Once you get some practice, try to reason through this conundrum:

There are a total of 2^N subsets of any given set of N items. Does this mean that the full subset technique in sudoku (i.e. looking for pairs, triples, quads, etc) is exponential?

Mark
castalia
 
Posts: 8
Joined: 22 January 2006

Re: Trial and Error

Postby Wolfgang » Sun Jan 29, 2006 5:30 pm

castalia wrote:For example, pairs have complexity N^2 (polynomial), triples have N^3 (again, polynomial)

IMO finding pairs and triples is of order N^3 and N^4 resp. For pairs you have to check 3*N units with N*(N-1)/2 pairs of cells.
castalia wrote:There are a total of 2^N subsets of any given set of N items. Does this mean that the full subset technique in sudoku (i.e. looking for pairs, triples, quads, etc) is exponential?

Yes, I think finding all n-tuples (n < N) is of order N*2^N, because you have to check all subsets.

But what about forcing chains over cells with 2 candidates? This should be polynomial, if you do it the following way:
For each cell and each candidate in the cell do (max. N*N^2):
Hold 2 lists; the one at the beginning contains the number you fixed, the other all cells with 2 candidates. Go through the second list. If a cell there shares both a unit and a number with the first list, fix it to its second number and add it to the first list. If there is none, stop and save the candidate list. You have to do this max. N^2 times with max. N^4 comparisons. After that, go through the saved lists. If there is a cell with a unique candidate in all lists, you have found a forcing chain.
So this method to find all these forcing chains should be of order N^9, i.e. polynomial.
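
For what it's worth, here is a rough Python rendering of that procedure (a sketch only; the candidate grid is assumed to be a map from (row, col) to a set of digits, and all the helper names are mine):

Code: Select all
def all_units():
    rows = [[(r, c) for c in range(9)] for r in range(9)]
    cols = [[(r, c) for r in range(9)] for c in range(9)]
    boxes = [[(br + i, bc + j) for i in range(3) for j in range(3)]
             for br in (0, 3, 6) for bc in (0, 3, 6)]
    return rows + cols + boxes

# Precompute, for each cell, the set of cells sharing a unit with it.
PEERS = {}
for unit in all_units():
    for cell in unit:
        PEERS.setdefault(cell, set()).update(set(unit) - {cell})

def propagate(cands, start, digit):
    """List 1 = fixed cells, list 2 = unfixed bivalue cells. Whenever an
    unfixed bivalue cell shares a unit and a digit with a fixed cell, it
    is forced to its other candidate and moved to list 1."""
    fixed = {start: digit}
    changed = True
    while changed:
        changed = False
        for cell, cs in cands.items():
            if cell in fixed or len(cs) != 2:
                continue
            for done, d in list(fixed.items()):
                if done in PEERS[cell] and d in cs:
                    fixed[cell] = (cs - {d}).pop()  # forced to the other digit
                    changed = True
                    break
    return fixed

def forcing_placements(cands, start):
    """Placements that come out the same no matter which candidate of
    'start' is tried: the 'unique candidate in all lists' test."""
    lists = [propagate(cands, start, d) for d in cands[start]]
    common = {}
    for cell in lists[0]:
        if cell == start:
            continue
        values = {lst.get(cell) for lst in lists}
        if len(values) == 1 and None not in values:
            common[cell] = values.pop()
    return common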

If all this is correct, I have a problem with this definition of T&E for normal sudokus, because finding all tuples is obviously easier than finding all forcing chains over cells with 2 candidates.
Wolfgang
 
Posts: 208
Joined: 22 June 2005

Re: Trial and Error

Postby castalia » Mon Jan 30, 2006 12:51 am

Wolfgang wrote:IMO finding pairs and triples is of order N^3 and N^4 resp. For pairs you have to check 3*N units with N*(N-1)/2 pairs of cells.


Indeed. For simplicity's sake, my comments pertained to finding all pairs/triples for a given unit.

Wolfgang wrote:Yes, I think finding all n-tuples (n < N) is of order N*2^N, because you have to check all subsets.


This is perhaps why Eppstein's solver limits subset searches to pairs and triples. Since there are now so many solving techniques, it's difficult to say whether checking for quads would ever be necessary.

Wolfgang wrote:But what about forcing chains over cells with 2 candidates? This should be polynomial, if you do it the following way:
... So this method to find all these forcing chains should be of order N^9, i.e. polynomial.


I haven't checked yet whether forcing chains are implicit in the various bivalue/bilocation algorithms described in Eppstein's paper, but I suspect they are. The orders of complexity are certainly similar.

Wolfgang wrote:If all this is correct, I have a problem with this definition of T&E for normal sudokus, because finding all tuples is obviously easier than finding all forcing chains over cells with 2 candidates.


Finding all tuples is certainly easier than finding all forcing chains for 9x9 sudoku. But this is no longer the case as N becomes larger.

One way to handle the situation is to eliminate subset searches beyond triples; this may turn out to be sufficient for 9x9 sudoku, but that's only a conjecture.

Other than by dividing techniques into polynomial/exponential classes, I've found no meaningful way to identify a "trial and error" method. Is it sufficient to say that methods simply shouldn't guess? If so, that would invalidate exhaustive subset testing which is implicitly "guessing" at all the possible n-tuples for each unit (which turns out, as we've seen, to be exponential in N).

If all solution techniques are valid, then sudoku is boring: any orc-ish DLX solver can blast through the toughest 9x9 puzzle in fractions of a millisecond. The goal is to develop a set of good techniques which collectively solve all 9x9 sudokus. I maintain that polynomial complexity is the truest measure of goodness.

Mark
castalia
 
Posts: 8
Joined: 22 January 2006

backtracking v T&E

Postby Moschopulus » Mon Jan 30, 2006 2:20 am

On the difference between backtracking and trial-and-error:

I thought that backtracking is a *systematic* way of running through all possible candidate values in all cells. There are different ways to do this systematically, but the key fact is that there is a systematic procedure for trying each possibility, and going back a step when an impossible situation is reached. A counter is then increased by 1, etc.

On the other hand, trial-and-error involves choosing a cell *at random*, choosing a value for that cell *at random*, and then investigating the consequences.

The difference is that T&E involves a random choice, and backtracking does not. Backtracking runs through all possibilities in some pre-defined order. T&E could also be called guessing.
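
To make the distinction concrete, here is what a bare-bones systematic backtracker looks like in Python (a sketch of the idea only, not anyone's actual solver): cells are visited in a fixed order, digits are tried in a fixed order, and a dead end steps back deterministically, with no random choices anywhere.

Code: Select all
def row_cells(r):    return [9 * r + c for c in range(9)]
def col_cells(c):    return [9 * r + c for r in range(9)]
def box_cells(r, c): return [9 * (3 * (r // 3) + i) + 3 * (c // 3) + j
                             for i in range(3) for j in range(3)]

def solve(grid, pos=0):
    """grid: flat list of 81 ints, 0 = empty. Returns True when solved."""
    if pos == 81:
        return True                        # every cell filled: done
    if grid[pos] != 0:
        return solve(grid, pos + 1)        # given clue: move on
    r, c = divmod(pos, 9)
    for d in range(1, 10):                 # fixed order 1..9, never random
        if all(grid[i] != d
               for i in row_cells(r) + col_cells(c) + box_cells(r, c)):
            grid[pos] = d                  # try it...
            if solve(grid, pos + 1):
                return True
            grid[pos] = 0                  # ...impossible later on: undo
    return False                           # no digit fits: back up one step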

Probably these words mean different things to different people, but that's what they mean to me, today.
Moschopulus
 
Posts: 256
Joined: 16 July 2005

Re: backtracking v T&E

Postby Jeff » Mon Jan 30, 2006 4:58 am

Moschopulus wrote:I thought that backtracking is a *systematic* way of running through all possible candidate values in all cells. There are different ways to do this systematically, but the key fact is that there is a systematic procedure for trying each possibility, and going back a step when an impossible situation is reached. A counter is then increased by 1, etc.

Interesting concept, Moschopulus. :D And it makes good sense to define backtracking as a *systematic* way of running through all possible candidate values in all cells. Under this definition, backtracking, in the sense of "going to the next cell or next candidate", becomes a procedure that cannot be avoided in sudoku solving, including in all pattern-recognition techniques; and that is quite true. Consider the following examples:

Naked pair - This is a technique to eliminate candidates from the same unit in which there are 2 bivalue cells containing the same candidates.

Deduction Track
  1. Look for a bivalue cell [a,b] (backtrack, i.e. go to the next cell, when a polyvalued cell is encountered).
  2. Look for another bivalue cell [a,b] in the same unit (backtrack when such a bivalue cell is not found).
  3. Eliminate "a" and "b" from the other cells in the unit.
Continuous xy-chain - This is a technique to eliminate candidates due to a chain of bivalue nodes joined by links with common candidates.

Deduction Track
  1. Look for a bivalue cell [a,b] (backtrack, i.e. go to the next cell, when a polyvalued cell is encountered).
  2. Propagate by finding another bivalue cell [b,c] in the same unit (backtrack when such a bivalue cell is not found).
  3. Repeat step 2 to propagate the chain until a bivalue cell [x,a] is found that completes a loop with the first bivalue cell [a,b] (backtrack when a bivalue cell cannot be found to propagate the chain).
  4. Eliminate the linked candidate from the other cells of the unit housing each link.
Under this definition of backtracking, each pattern-recognition technique involves a pre-defined deduction track, and backtracking is exercised whenever that track cannot progress further.
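
As an illustration of what such a deduction track looks like when written out (a Python sketch with assumed data structures: 'cands' maps each cell to its candidate set and 'unit' is a list of 9 cells; the naming is mine, not standard), the two backtrack points of the naked pair track are simply the points where the scan moves on:

Code: Select all
def naked_pair_eliminations(cands, unit):
    """The naked-pair deduction track with its backtrack points marked."""
    results = []
    for i, a in enumerate(unit):
        if len(cands[a]) != 2:
            continue                  # step 1 backtrack: go to the next cell
        for b in unit[i + 1:]:
            if cands[b] != cands[a]:
                continue              # step 2 backtrack: no matching pair here
            for cell in unit:         # step 3: eliminate a,b from the rest
                hit = cands[cell] & cands[a]
                if cell not in (a, b) and hit:
                    results.append((cell, hit))
    return results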

Moschopulus wrote:There are different ways to do this systematically, but the key fact is that there is a systematic procedure for trying each possibility.....

To make a pre-defined deduction track easier for a human to execute, some pattern-recognition techniques divide the deduction track into 2 or more parts, but the requirement of backtracking is equally valid, e.g.

Colouring and x-cycle
  • Deduction track part 1 - Collectively filter out one candidate digit on the 9x9 grid (highlight the cell if found and backtrack if not).
  • Deduction track part 2 - Identify conjugate pairs and possible deductions (backtrack if the deduction track cannot progress further).
B/B plot
  • Deduction track part 1 - Collectively draw links of strong and weak inference for all candidates on the 9x9 grid (draw the link if found and backtrack if not).
  • Deduction track part 2 - Identify nice loops following a set of nice-loop propagation rules and make deductions (backtrack if the deduction track cannot progress further).
POM
  • Deduction track part 1 - Collectively identify all solution patterns on the 9x9 grid (identify one solution pattern and backtrack for others).
  • Deduction track part 2 - Compare solution sets between cells following a set of POM rules and make deductions (backtrack if the deduction track cannot progress further).
Last edited by Jeff on Mon Jan 30, 2006 2:17 am, edited 1 time in total.
Jeff
 
Posts: 708
Joined: 01 August 2005

Re: Trial and Error

Postby Jeff » Mon Jan 30, 2006 6:02 am

Wolfgang wrote:........... if you do it the following way:
For each cell and each candidate in the cell do (max. N*N^2):
Hold 2 lists; the one at the beginning contains the number you fixed, the other all cells with 2 candidates. Go through the second list. If a cell there shares both a unit and a number with the first list, fix it to its second number and add it to the first list. If there is none, stop and save the candidate list. You have to do this max. N^2 times with max. N^4 comparisons. After that, go through the saved lists. If there is a cell with a unique candidate in all lists, you have found a forcing chain.

Hi Wolfgang, would you like to share this method in the thread for Identification of forcing chains?
Jeff
 
Posts: 708
Joined: 01 August 2005

Postby Wolfgang » Mon Jan 30, 2006 1:11 pm

castalia wrote:Since there are now so many solving techniques, it's difficult to say whether checking for quads would ever be necessary.

My friend had a puzzle some days ago, which she solved by spotting a quintuple (or hidden quad) :)

castalia wrote:Finding all tuples is certainly easier than finding all forcing chains for 9x9 sudoku. But this is no longer the case as N becomes larger.


That's the point. The polynomial/exponential classification is valuable for programmers who work with larger sudokus. It wouldn't make sense then to try to find all n-tuples, because for large N it would take too much time; looking for forcing chains would be faster.
But for normal sudokus it can be misleading as a measure of how hard a technique is.
It is another matter that we don't have anything better.

I try to avoid the term T&E, because what is meant by it has turned out to be a matter of taste, and all those discussions have not made it clearer.
Jeff wrote:Hi Wolfgang, would you like to share this method in the thread for Identification of forcing chains?

I would not recommend it for manual solving :) It was just a quick way to make it plausible that it is polynomial. I think Tso, Carcul & Co. can better describe how to spot forcing chains.
Wolfgang
 
Posts: 208
Joined: 22 June 2005

Postby castalia » Tue Jan 31, 2006 12:54 am

Wolfgang wrote:My friend had a puzzle some days ago, which she solved by spotting a quintuple (or hidden quad) :)


One of the nice aspects of 9x9 sudoku is that it's never necessary to search for subsets any larger than quads (including both naked and hidden, of course). Since these operations have a complexity lower than the graph-theoretic techniques, they're obviously more desirable to attempt first.

This is the reason I used the term "conundrum" earlier when referring to the complexity of finding all subsets. Yes, there are a total of 2^N subsets for any given set of N candidates, but if the search can be restricted to naked/hidden quads, the complexity somehow mysteriously reduces to N^5. A calculation of order 5 is certainly much more efficient than the 7th, 8th, and 9th order calculations required by Eppstein's algorithms. So, how is it possible for the exponential technique of subsets to all of a sudden become polynomial?
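
The numbers make the resolution plain (a throwaway Python check, nothing deeper): capping the subset size at quads replaces the exponential 2^N with a polynomial count, and scanning a unit for each subset contributes roughly one more factor of N, which is where the order-5 figure comes from.

Code: Select all
from math import comb

# All subsets of N candidates: 2^N (exponential). Restricted to
# sizes 2..4 (pairs through quads): a sum of binomials, O(N^4);
# one extra factor of N for the per-unit scan gives roughly N^5.
for n in (9, 16, 25):
    capped = sum(comb(n, k) for k in range(2, 5))
    print(f"N={n:2}  2^N={2 ** n:>9}  pairs-to-quads={capped:6}")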


Wolfgang wrote:The polynomial/exponential classification is valuable for programmers who work with larger sudokus. But for normal sudokus it can be misleading as a measure of how hard a technique is.


That's a difficult argument to accept. Even for 9x9 sudokus, the measure of acceptable vs. unacceptable techniques for human solvers is the number of operations these techniques require in the general case. And so far it has unfailingly turned out that polynomial algorithms are within acceptable bounds, and exponential ones are not.

Mark
castalia
 
Posts: 8
Joined: 22 January 2006

Postby Myth Jellies » Tue Mar 28, 2006 10:09 am

I'm dredging up this old chestnut because I just caught an article in Scientific American (March 2006 pp. 74-81) which is related to the idea of complexity versus useful logical theory. The article, entitled "The Limits of Reason", was written by Gregory Chaitin, the pioneer of algorithmic information theory.

Summarizing the article--Gottfried W. Leibniz made the observation in his 1686 essay, Discourse on Metaphysics, that a theory has to have a quality of simplicity, otherwise it does not explain anything. If you allow theories and laws unlimited complexity, then you can devise a theory to explain any random set of data; but that theory is essentially meaningless.

The algorithmic information theorists have quantified this concept of complexity versus theory. In essence, they state that for a theory to have any teeth, it must be simpler than the data it explains.

So how does this all relate to a sudoku puzzle and our proposed "logical methods"? Well, say you have a theory which allows you to eliminate a particular candidate in a particular cell. For most cases, your "theory" consists of all of the candidates that you needed to make your deduction. On the other hand, your "data" is the number of candidates you need to consider if you happened to randomly assume that the candidate you eliminated was true and then followed things along until the puzzle crashes. If your number of theory candidates is less than your number of data candidates, then you have the makings of a viable theory or logical method.

Take as an example the naked pair in the following row...
Code: Select all
12  12  13  34  45  56  7  8  9

The naked 12-pair uses precisely four "theory" candidates to eliminate any other 1's and 2's in the row, in this case, the 1 in the third cell. Now if we assume the third cell is a 1, that is one candidate of data. That eliminates the 1's from the first two cells (3 candidates of data so far). That leaves only a 2 in both those cells for a total of 5 data candidates involved in the crash. The theory is smaller than the data, thus making it useful.

What this is saying, in essence, is that a useful theory must provide some "action at a distance." One apparently necessary requirement for a sudoku method to be less complex than the data it explains is for the deduction to apply to a candidate which is not a part of the method/theory/pattern.

An article similar to the one appearing in SA is http://plus.maths.org/issue37/features/omega/

Gregory Chaitin's home page with lots of related links is http://www.umcs.maine.edu/~chaitin/
Myth Jellies
 
Posts: 593
Joined: 19 September 2005

scientific american sudoku article

Postby michel » Thu Jun 08, 2006 12:45 pm

:( I am feeling somewhat dense at this point and I thought I'd ask the experts. Is the Sudoku in the current Scientific American solvable without guessing? I am stuck and see no way of proceeding without guessing, which I refuse to do.
michel
 
Posts: 3
Joined: 08 June 2006

Re: scientific american sudoku article

Postby Havard » Thu Jun 08, 2006 12:54 pm

michel wrote::( I am feeling somewhat dense at this point and I thought I'd ask the experts. Is the Sudoku in the current Scientific American solvable without guessing? I am stuck and see no way of proceeding without guessing, which I refuse to do.


If you could post the sudoku in question in the "help" section, there are a lot of nice people standing by to help you!:)

Havard
Havard
 
Posts: 378
Joined: 25 December 2005

Re: scientific american sudoku article

Postby ravel » Thu Jun 08, 2006 1:53 pm

michel wrote:Is the Sudoku in the current Scientific American solvable without guessing?

The 17-clue puzzle there (not being a member, I can only see the solutions, not the article) can be solved with singles and box/line eliminations. The "Solution Methods Example" needs only singles.
As Havard said, you can post the puzzle in the "Help with particular puzzles" forum to get detailed explanations. If it is one of the 2 above, you can copy it from here:
Code: Select all
 +-------+-------+-------+
 | . 1 . | . . . | . . 9 |
 | . . . | 3 . . | 8 . . |
 | . . . | . . . | 6 . . |
 +-------+-------+-------+
 | . . . | . 1 2 | 4 . . |
 | 7 . 3 | . . . | . . . |
 | 5 . . | . . . | . . . |
 +-------+-------+-------+
 | 8 . . | 6 . . | . . . |
 | . . . | . 4 . | . 2 . |
 | . . . | 7 . . | . 5 . |
 +-------+-------+-------+
 +-------+-------+-------+
 | 5 . 1 | . . . | . 9 6 |
 | . . . | . 9 . | . 5 . |
 | . . . | . . 5 | 2 . 7 |
 +-------+-------+-------+
 | 4 9 . | 1 . . | . 7 . |
 | . . . | . . 7 | . . . |
 | 1 3 . | . . . | . 2 . |
 +-------+-------+-------+
 | 3 . 4 | . 5 9 | . . . |
 | . 2 8 | . 7 1 | . 4 . |
 | 7 6 5 | 8 2 . | . . . |
 +-------+-------+-------+
ravel
 
Posts: 998
Joined: 21 February 2006

Postby Pyrrhon » Wed Jun 14, 2006 3:17 pm

The discussion seems to overlook that in many of the techniques N is a constant, N=9. We don't want to solve a 100x100 sudoku or the like; only in that case could N become a computational problem.

So we have not 2^N (with any possible N), but 2^9=512 possibilities per line or box, or if you like 81 x 512 = 41472 possibilities in all, from which to choose a hidden subset. A human would never go through all these possibilities, but the complexity is a constant in the sudoku case.

By the way, it would not be a good algorithm to go through all 3-element subsets of the values 1-9 to find all naked triples. We can look for subsets of cells instead of subsets of values. We need only consider cells with more than 1 and fewer than 4 candidates, and take the union of their candidate sets. And we can backtrack if the cardinality of the union of the first two sets is already greater than 3. The number of checks will in most cases be much smaller, but never bigger, than with the other algorithm.
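
A rough Python sketch of that cell-based search (my rendering only; 'cands' maps cells to candidate sets and 'unit' is a list of 9 cells):

Code: Select all
def naked_triples(cands, unit):
    """Union candidate sets of 2- and 3-candidate cells, pruning as soon
    as the first two cells already span more than 3 digits."""
    small = [c for c in unit if 1 < len(cands[c]) < 4]
    found = []
    for i, a in enumerate(small):
        for j in range(i + 1, len(small)):
            u2 = cands[a] | cands[small[j]]
            if len(u2) > 3:
                continue              # the early backtrack described above
            for c in small[j + 1:]:
                if len(u2 | cands[c]) == 3:
                    found.append((a, small[j], c))
    return found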

I don't think that complexity (in the sense of complexity theory) is a good measure for the quality of a sudoku. It is only meaningful if we look for the best algorithm with which to implement a technique. It is a bad measure both because of the constant 9 here and because a human will not go through all possibilities, but has a fuzzier style of spotting the correct technique. Looking for such fuzzy techniques would be an interesting task in programming a human-like sudoku solver.

All sudoku solving is T&E. At the very least we check whether a value is possible in a line, and this can work out or not; that is trial and error as well. It is not bad to use T&E to solve a sudoku, and the T&E criterion is not the right one to distinguish good from bad sudoku. If it were, we wouldn't have any good sudokus. But we do, don't we?

Pyrrhon
Pyrrhon
 
Posts: 240
Joined: 26 April 2006

Postby RW » Thu Jul 27, 2006 8:48 am

Myth Jellies wrote:I'm dredging up this old chestnut because I just caught an article in Scientific American (March 2006 pp. 74-81)


And I'm dredging it up again because I finally wish to express my view on this issue.

Myth Jellies wrote:The algorithmic information theorists have quantified this concept of complexity versus theory. In essence, they state that for a theory to have any teeth, it must be simpler than the data it explains.


This sounds reasonable and I think it can be applied to Sudokus also.

Myth Jellies wrote:Well, say you have a theory which allows you to eliminate a particular candidate in a particular cell. For most cases, your "theory" consists of all of the candidates that you needed to make your deduction. On the other hand, your "data" is the number of candidates you need to consider if you happened to randomly assume that the candidate you eliminated was true and then followed things along until the puzzle crashes. If your number of theory candidates is less than your number of data candidates, then you have the makings of a viable theory or logical method.

Take as an example, the naked pair in the following row...

Code: Select all
12  12  13  34  45  56  7  8  9


The naked 12-pair uses precisely four "theory" candidates to eliminate any other 1's and 2's in the row, in this case, the 1 in the third cell. Now if we assume the third cell is a 1, that is one candidate of data. That eliminates the 1's from the first two cells (3 candidates of data so far). That leaves only a 2 in both those cells for a total of 5 data candidates involved in the crash. The theory is smaller than the data, thus making it useful.


This is where I don't quite agree with your reasoning. In my view the data we consider with pure T&E is:

Code: Select all
If c3=1 => c2<>1 => c1<>1 => c2=2 => c1<>2 => contradiction


That's five candidates of data. Now using the pair:

Code: Select all
If c1<>1 => c1=2 => c2<>2 => c2=1 => c3<>1


That's also five candidates of data to show that either c1=1 or c3<>1 (in both cases of course c3<>1). The fifth candidate is of course the '1' that also must be considered in order to be eliminated. However, the theory becomes a "valid theory" when we consider this row:

Code: Select all
12  12  13  134  145  156  17  18  19


Here we can use the initial four candidates to eliminate all digits 1 from the rest of the row, eliminating 7 candidates by considering 11 candidates of data. Eliminating these by T&E would require 35 candidates of data (5 per eliminated candidate).
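
The tally is easy to check mechanically (a throwaway Python snippet that just repeats the counting above):

Code: Select all
row = ["12", "12", "13", "134", "145", "156", "17", "18", "19"]

theory = 4                                    # the two cells of the 12-pair
elims = sum("1" in cell for cell in row[2:])  # 1s outside the pair: 7
print(theory + elims)                         # 11 data candidates via the pair
print(5 * elims)                              # 35 data candidates via pure T&E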

IMO this is the essential difference between pattern-based techniques and error nets. When you make an elimination based on a pattern, it's always because 'if the eliminated candidate is true, then there is a contradiction within the pattern'. But the patterns usually let you eliminate multiple candidates (eliminate ALL candidates 'a' that can see both ends of the discontinuous XY-chain, eliminate ALL candidates 'a' that can see the fin and are in the same row as the rest of the finned swordfish, etc.). An error net will only let you eliminate one candidate, the original implication.

To summarize: The T&E solution for a single elimination in a Sudoku is always just as complex as the pattern that eliminates it. If the pattern allows multiple eliminations, then it can be considered a logical method. Of course, in many cases the patterns allow only one elimination (there is only 1 candidate 'a' that can see both ends of the XY-chain), but this doesn't make it less logical, because the theory as such allows multiple eliminations.

This theory also applies to uniqueness techniques; however, the contradiction is then 'puzzle would have multiple solutions', not 'puzzle cannot have a valid solution' as it is for non-uniqueness-based techniques.

PS. Even though I do consider error nets more T&E than techniques based on patterns, I will still keep using them when solving myself. :)

RW
RW
2010 Supporter
 
Posts: 1010
Joined: 16 March 2006
