My questions and argument for rating a puzzle derive from this and many other example puzzles.
- Code: Select all
*-----------*
|..9|8.6|.3.|
|684|...|.1.|
|...|1..|...|
|---+---+---|
|.2.|...|98.|
|4..|...|..3|
|.76|...|.4.|
|---+---+---|
|...|..5|...|
|.4.|...|725|
|.3.|7.8|1..|
*-----------*
{The initial stages of this puzzle consist of hidden and naked singles with some box-line interactions, reducing the grid to the following.}
- Code: Select all
*-----------------------------------------------------------*
| 7 1 9 | 8 5 6 | 4 3 2 |
| 6 8 4 | 3 7 2 | 5 1 9 |
| 3 5 2 | 1 49 49 | 8 7 6 |
|-------------------+-------------------+-------------------|
| 15 2 3 | 56 146 14 | 9 8 7 |
| 4 9 18 | 2 18 7 | 6 5 3 |
| 58 7 6 | 59 389 39 | 2 4 1 |
|-------------------+-------------------+-------------------|
| 12 6 7 | 4 12 5 | 3 9 8 |
| 189 4 18 | 69 1369 139 | 7 2 5 |
| 29 3 5 | 7 29 8 | 1 6 4 |
*-----------------------------------------------------------*
How can a person actually arrive at a rating of any kind, when a number of moves or sequences of steps can be used to remove the same candidate from any particular cell? Or when a move can even be used to find something that was overlooked by said player?
A hidden single can be seen not only as itself, but could also be found or noticed through a list of other options: perhaps pointed out by a locked pair, triple, quad, or quintuple {that is, if the person missed the single}, as both patterns exist simultaneously.
Take the above puzzle after singles are placed:
Simple Sudoku labels the next valid move as a “multi-coloring”,
where R4C1, R5C5, R8C3 <> 1.
It is in fact not the only viable move that makes the same or similar eliminations.
There are several. {Not all are included here}
{A smattering of examples:}
Disjointed subset: this move as remote pairs, also considered a skyscraper {written as an AIC}:
(1=8)R5C3 - (8=1)R5C5 - (1=2)R7C5 - (2=1)R7C1 => R4C1, R5C5, R8C3 <> 1
Disjointed subset: this move seen as an AIC:
(2=9)R9C1 - (9=2)R9C5 - (2=1)R7C5 - (1=2)R7C1 = (8)R5C5 = (1)R5C3 = (8)R8C3 => R4C1, R5C5, R8C3 <> 1
Kraken:
(1)R8C6 = (8)R8C3 = (1)R5C3
|
(3)R8C6 = (9)R6C6 = (5)R6C4 = (8)R6C1 = (1)R5C3
|
(9)R8C6 = (4)R3C6 = (1)R4C6 = (8)R5C5 = (1)R5C3
=> R4C1, R5C5, R8C3 <> 1
“Zero-solution patterns” * my view of coloring & fish-pattern reductions {something similar to a forcing chain}:
{There are also 4 turbot fish patterns based around the conjugate pairs of 1 & 8 that I have not listed}
(1)R8C3 = (8)R5C3 = (1)R5C5 = (2)R7C15 {both cells of row 7 forced to 2}: contradiction => R4C1, R5C5, R8C3 <> 1
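All of the moves above converge on the same eliminations, and that convergence can be checked mechanically. Below is a minimal brute-force sketch {a verification aid only, not part of the argument; the cell names and candidate lists are read off the pencilmark grid above} that enumerates the five linked cells and confirms that every consistent assignment places the 1 of column 3 in R5C3, which is exactly what removes 1 from R4C1, R5C5, and R8C3:
- Code: Select all
from itertools import product

# Candidate lists read off the pencilmark grid above.
cells = {
    "R5C3": (1, 8), "R5C5": (1, 8), "R8C3": (1, 8),
    "R7C1": (1, 2), "R7C5": (1, 2),
}

# Pairs of these cells that share a row, column, or box and
# therefore cannot hold the same digit.
peers = [
    ("R5C3", "R5C5"),  # row 5
    ("R5C3", "R8C3"),  # column 3
    ("R5C5", "R7C5"),  # column 5
    ("R7C1", "R7C5"),  # row 7
    ("R7C1", "R8C3"),  # box 7
]

names = list(cells)
valid = []
for combo in product(*(cells[n] for n in names)):
    a = dict(zip(names, combo))
    # R5C3 and R8C3 are the only 1s left in column 3, so one
    # of them must be 1 (the conjugate pair the moves rely on).
    if a["R5C3"] != 1 and a["R8C3"] != 1:
        continue
    if any(a[p] == a[q] for p, q in peers):
        continue
    valid.append(a)

# Every surviving assignment has R5C3 = 1, so its peers
# R4C1, R5C5, and R8C3 all lose candidate 1.
assert valid and all(a["R5C3"] == 1 for a in valid)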
For any grid:
A person with many refined techniques and skills should be able to find all or most of the applicable options, then “execute” the one with the most effect on the grid, to either:
A: gain a plethora of other moves and perhaps a variety of techniques.
B: open up a shortened path of steps that increase in effectiveness but not necessarily in ease of use.
So how do we compare any given puzzle against what a human player can see on any particular grid, when the skill of one player is not necessarily identical to another's, and when many different techniques, requiring different skill levels to utilize or master, can remove identical candidates from identical cells?
My suggestion for a rating system is a two-fold approach, which also addresses some of the issues I point out above.
Starting with a player:
For me, the difficulty of a puzzle is caused by a bottleneck of events: as the availability of techniques diminishes, the more complex a move is forced to become, with a direct limit on the operands it can execute upon.
As the available techniques become more limited, a larger range of potential players becomes less likely to find the next maneuver and continue onwards in the puzzle. This is apparent in the examples above: more spottable moves equate to a wider variance in the skill sufficient to solve the puzzle.
The amount of memory storage and recollection required by some techniques can also limit the range of players who can visually gauge the information and process these givens to completion.
The limitation of space has an impact on players as well: if a puzzle requires a technique that applies only to specific cells at a particular stage of solving, then a player has only one option, and spotting it becomes the issue; fewer people will find the technique, and more will not locate it at all.
A computer approach to mimic a range of players:
A computer can list all viable moves for each puzzle at each step (phase) of the solving sequence {applied techniques}.
The rating of a specific technique should be based on its spatial requirements {mentioned above, recapped}: the more memory it takes to retain the data needed to perform the move, the more difficult the technique is to use, and the fewer the people who can utilize it.
Simplicity Rating: {Solution with fewest steps}
A computer may be programmed to check all possible paths and, by comparing the number of steps, choose the one with the fewest.
First, a program must find the shortest sequence of steps. Once that sequence is found, it has to look at each phase of the process and list all the identical/similar techniques that can accomplish the same elimination.
A higher number of different techniques applicable at a phase indicates a wider range of people, with varying skill, who can solve that phase, though this is limited by the skill required for said techniques.
{[(# of technique A) * (its rating)] + [(# of technique B) * (its rating)] + ...}
/ {total # of applicable techniques}
The tabulations at each phase, added together / # of steps = the simplicity rating:
For example {assuming ratings: easy = 1, medium = 2, hard = 6, very hard = 10, insane = 20}:
5 easy and 5 medium (step 1)
3 hard techniques (step 2)
1 very hard (step 3)
1 very hard (step 4)
1 insane (step 5)
5 easy (done: step 6)
[(5 * 1)+ (5 * 2)]/ 10 = 1.5
(3 * 6) / 3 = 6
(1 * 10) / 1 = 10
(1 * 10) / 1 = 10
(1 * 20) / 1 = 20
(5 * 1) / 5 = 1
Total = 48.5 / 6
Simplicity rating: 6 phases averaging 8.08 per phase.
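A minimal sketch {in Python, only an illustration of the proposal} of this tabulation, using the ratings noted in the example:
- Code: Select all
# Ratings assumed from the worked example above.
RATING = {"easy": 1, "medium": 2, "hard": 6, "very hard": 10, "insane": 20}

def phase_score(techniques):
    # {[# of technique * its rating] + ...} / total # of techniques
    total = sum(count * RATING[name] for name, count in techniques)
    return total / sum(count for _, count in techniques)

def simplicity_rating(phases):
    # Phase scores added together / # of steps.
    return sum(phase_score(p) for p in phases) / len(phases)

phases = [
    [("easy", 5), ("medium", 5)],  # step 1 -> 1.5
    [("hard", 3)],                 # step 2 -> 6
    [("very hard", 1)],            # step 3 -> 10
    [("very hard", 1)],            # step 4 -> 10
    [("insane", 1)],               # step 5 -> 20
    [("easy", 5)],                 # step 6 -> 1
]
print(simplicity_rating(phases))   # 48.5 / 6 = 8.08...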
Complexity Rating: this would be calculated by applying the above to all possible solution variations, then averaging across the number of solution variations.
Comprehension Rating: a comparison between the two ratings.
A comprehension rating would estimate how hard it is, on average, for any solver to find a solution, by noting the variances between the two ratings.
For example: 30 ways to solve the above, with 10 phases {on average}, each phase averaging 4.52.
This would be a longer but less complex way of solving the puzzle compared to the simplicity approach = not hard on average.
For example: 30 ways to solve the above, with 7 phases {on average}, each phase averaging 9.81.
This would indicate that the puzzle is considerably harder at each phase for the average solver, and directly indicates the average solver will have a lot of difficulty as the availability of techniques shrinks and memory constraints apply; but it can occasionally be solved simplistically, with shorter, easier steps than the average would indicate (1 in 30 paths at phase one).
For example: 30 ways to solve the above, with 10 phases {on average}, each phase averaging 9.81.
This would indicate a puzzle with many complex steps, harder than the 7-phase example due to the extra phases, but otherwise matching the description given for that example.
For example: 10 possible solutions, with 8 phases {on average}, each phase averaging 9.81.
This would indicate a puzzle with few available solution paths to start with, whose subsequent steps are difficult.
But the limited variance in solution choices indicates a solver has a 1/10 chance of finding, at phase one, the technique used in the 6-phase completion, and may solve the puzzle more easily than the average indicates.
Overview:
Simplicity Rating: (step count) phases; (formula) average per phase
Formula:
{[(# of technique A) * (its rating)] + [(# of technique B) * (its rating)] + ...}
/ {total # of applicable techniques}
Complexity Rating: # of solutions; (phase formula) average phase count; (formula 2) average per phase
Solution count: shows how many different paths are available at the start of a puzzle.
Phase count: the # of phases required on average.
Phase average: the degree of complexity required at each step on average. A larger number = harder techniques required at each phase.
Phase formula:
sum(phase counts of all solutions) / # of solutions
Formula 2:
sum(formula results of all solutions) / # of solutions
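Continuing the sketch above {it reuses simplicity_rating from the earlier code; the example paths are hypothetical}:
- Code: Select all
def complexity_rating(paths):
    # paths: one list of phases per possible solution path.
    avg_phases = sum(len(p) for p in paths) / len(paths)    # phase formula
    avg_score = (sum(simplicity_rating(p) for p in paths)
                 / len(paths))                              # formula 2
    return avg_phases, avg_score

# E.g. three hypothetical paths: the 6-step route above and two longer ones.
paths = [
    phases,
    phases + [[("medium", 2)]],
    phases[:5] + [[("hard", 1)], [("easy", 3)]],
]
print(complexity_rating(paths))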
Comprehension Rating:
Number of solutions vs. simplicity:
A higher # of solutions means a decreased chance of finding the simplicity route at phase one.
A lower # of solutions means an increased chance of finding the simplicity route at phase one.
Phase count vs. simplicity: the # of phases required on average; it can be greater than or equal to the simplicity step count, and indicates how many phases the average person could require to solve the puzzle.
Phase average vs. simplicity: can be higher or lower than the average of the simplicity route.
A lower # = a wider range of techniques applicable at each phase, and a wider range of people who can solve it.
A higher # = fewer techniques can be used, and fewer people will solve it.
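As a sketch of that comparison {building on the two code blocks above; the 1 / # of solutions chance follows the "1 in 30 paths" reading earlier}:
- Code: Select all
def comprehension_report(paths):
    shortest = min(paths, key=len)                 # the simplicity route
    avg_phases, avg_score = complexity_rating(paths)
    return {
        "solutions": len(paths),
        "chance of simplicity route at phase one": 1 / len(paths),
        "simplicity phases": len(shortest),
        "average phases": avg_phases,
        "simplicity per-phase average": simplicity_rating(shortest),
        "overall per-phase average": avg_score,
    }

print(comprehension_report(paths))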
Additional thoughts:
If the simplicity rating of a puzzle has more than one applicable shortest path,
then a solution count should also be included,
and the average of each of the viable paths should be tabulated in the same manner as the complexity rating.