champagne wrote: well, one more rating program, why not.
I think you missed the main point, maybe because you didn't leave me time for my next posts (in which I planned to give a few examples).
What I'm proposing is not mainly a new rating program - although I also have a universal rating (the BB rating) satisfying all the properties I mentioned in my introductory post.
But the main point is that I want to introduce a principled, theoretically grounded approach to classification (which is true of none of the approaches associated with the other ratings I've mentioned).
As for the various classifications I'll introduce (later), of course I have a few programs that compute them.
champagne wrote: I am, like many, bored by the long paths generated by solvers, including mine.
So I am looking for short solutions to hard puzzles that players can find.
I can understand this. I'm also bored by my whip or g-whip resolution paths, but I still consider them, in all modesty, the backbone of the best existing classification for puzzles in T&E(1), with the associated rating occasionally improved by more specific patterns. My approach to the remaining puzzles, those in T&E(2), is similar.
AFAIK, none of the known hardest puzzles in T&E(2) has a short solution (or we don't have the same definition of "short"); at best, they allow a few (near-)initial eliminations based on smart exotic patterns.
champagne wrote: In the database of "potential hardest", today including more than 30 000 puzzles:
- more than 20 000 have an exocet pattern
- about 25 000 have one or several of the following...
Having some special pattern at (or near) the start is interesting; but again, AFAIK, using such patterns is very far from enough to yield a simple solution of the hardest puzzles.
However, one of the things I can compute is how much using such a pattern makes a puzzle simpler (in my classification). See my forthcoming examples from the EasterMonster family.
champagne wrote: what is, seen by your tool, the percentage of puzzles eligible for the last class in that lot?
This is one thing I haven't tried yet.
Finally, I would not use the word "statistics" about any of the existing collections of hard puzzles.
These collections are all strongly biased, for a very simple reason: when a new pattern is discovered in a puzzle P, lots of people produce variants of P, and many of these variants end up in the collections. Even Eleven's collection (which has been my main source of analysis) is probably biased, because he started from the known hardest; still, it is probably less biased than other manually assembled collections. I'm also aware of the meta-collection you compiled later (which includes Eleven's puzzles), but I've only analysed the top of it, for SER 11.9 down to 11.6 inclusive.
Contrary to yours, my focus is on general classifications and on puzzles that have no special pattern. My way of taking special patterns into account is to see how they (occasionally) change the universal classifications.