Hi Mladen,
After a long break over the year end, I restarted the work on that 17-clue search, trying to improve the situation for the "worst cases" without hurting the average cases. So far I have mainly changed the last steps, which is not the area of your post, but I worked in that direction earlier, so I have questions and comments.
dobrichev wrote:
After playing for some time with my implementation of McGuire's process, I realized that he underestimated the fact that the first cell of the top UA is dead for 3/4 of the time (or 5/6 if the top UA is of size 6).
Choosing the less-significant cell for the top UA to be one of the most populated cells, and doing something similar for the next 4 UA, dropped the execution time by ~25%.
Optimizing the effect of the dead cells in the first steps sounds good. I made many tests in that direction, as you did, but defining the optimum is not a simple thing. With the algorithms I used, I got good results on specific solution grids, but the reverse effect on other solution grids. IMO, it cannot be the major factor defining the best process.
In the current code, I am focusing on other issues, and I select the UA's cells directly by a "bitscan" in the UA bit field. Choosing the cells to optimize the "dead cells" effect is not compatible with that process, so I keep that for later.
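For reference, this is roughly what I mean by selecting cells by bitscan; a minimal sketch only, assuming the UA is stored as two 64-bit words covering the 81 cells (the type and function names are mine, not the ones in either of our codes):
Code:
#include <cstdint>

// Hypothetical 81-cell bit field split over two 64-bit words (cells 0..63, 64..80).
struct BF128 {
    uint64_t lo, hi;
};

// Visit the cells of a UA in ascending cell order by repeated bitscan,
// clearing the lowest set bit at each step.
template <typename F>
void foreachCell(BF128 ua, F f) {
    while (ua.lo) {
        int cell = __builtin_ctzll(ua.lo);   // index of the lowest set bit
        ua.lo &= ua.lo - 1;                  // clear it
        f(cell);
    }
    while (ua.hi) {
        int cell = 64 + __builtin_ctzll(ua.hi);
        ua.hi &= ua.hi - 1;
        f(cell);
    }
}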
BTW, I am not 100% sure I read this properly:
Choosing the less-significant cell for the top UA to be one of the most populated cells, and doing something similar for the next 4 UA
My understanding is that you reorder the cells of the smallest UAs to process first the cell(s) hitting more UAs (in part of the table or in the whole table??). Most of the tests I made were in that direction.
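If that reading is right, the reordering could look like the sketch below. This is only my understanding, not your implementation, and it reuses the hypothetical BF128 from the previous sketch; the hit count is taken over whatever part of the UA table one chooses to scan:
Code:
#include <algorithm>
#include <cstdint>
#include <vector>

// True if the given cell (0..80) belongs to the UA.
inline bool hasCell(const BF128& ua, int cell) {
    return ((cell < 64 ? ua.lo : ua.hi) >> (cell & 63)) & 1;
}

// Return the cells of one UA ordered so that the cells hitting the most
// UAs of the table come first.
inline std::vector<int> orderCellsByHits(const BF128& ua,
                                         const std::vector<BF128>& table) {
    std::vector<int> cells;
    for (int c = 0; c < 81; ++c)
        if (hasCell(ua, c)) cells.push_back(c);

    auto hits = [&](int c) {
        int n = 0;
        for (const BF128& u : table) n += hasCell(u, c);
        return n;
    };
    std::sort(cells.begin(), cells.end(),
              [&](int a, int b) { return hits(a) > hits(b); });
    return cells;
}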
dobrichev wrote:
but in my implementation this happens very rare (<<5% of the time is consumed by the solver).
The solver's weight in the process is low if it is used exclusively as a final check of solutions, when all other filtering possibilities have been exhausted, so I would agree on that.
I tried, in my current process, to check the validity of bands 1+2 partial solutions; there, the filter appeared too costly. (subject to...)
dobrichev wrote:
I still believe that a kind of "divide and conquer" approach like in recent Champagne's work can overperform the classical UA hitting. Go Gerard, I am following your progress with interest.
I really believe that the current search will give good results. Worst cases specific to the split into {bands 1+2; band 3} are there, but the process is very efficient for other solution grids.
I made recent changes in the layout of the final steps, so results will likely not come before my departure for several weeks on a sunny beach side (where I'll continue to work). I want to get to the test on a sample of solution grids without a 17 after these changes, and only then implement the other pending ideas.
So far, the next ideas I have in mind to test are:
- An extension of the band 3 filter on solution grids to "non minimal" solutions.
This can be a key issue if the frequency of bands 1+2 partial solutions with fewer clues than needed to reach the minimum number of clues in band 3 is as high as in my tests on solution grids with known 17s (see the sketch after this list).
- optimization of the bands 1+2 selection of UAs,
including new attempts to play with the "dead branches" factor
- and as usual, some code optimization.
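To make the band 3 budget point concrete, here is a minimal sketch of the kind of arithmetic check I have in mind; the names and the precomputed minimum are assumptions of mine, not the actual filter:
Code:
// Hypothetical early rejection for a 17-clue search split into bands 1+2 and band 3.
// minCluesBand3 would be a precomputed lower bound on the clues band 3 needs
// for this solution grid (its computation is not shown here).
inline bool band3BudgetOk(int cluesInBands12, int minCluesBand3, int target = 17) {
    // Clues still available for band 3 once bands 1+2 are fixed.
    int remaining = target - cluesInBands12;
    // Reject the bands 1+2 partial solution if band 3 cannot fit in the budget.
    return remaining >= minCluesBand3;
}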
Having now a better knowledge of the "intrinsic" functions in C++, I am reconsidering some bit operations. Did you try to use the {bit test; bit test and reset; ...} instructions? They can surely replace, in some cases, things done in the 128-bit fields (using for example your bitSet table to set or clear a bit).
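For example, something along these lines; a sketch only, assuming MSVC-style intrinsics on x64 (on GCC/Clang the same effect comes from plain shift/mask code, which the compiler usually turns into bt/btr anyway), and the mask here is my own illustration of where a precomputed bitSet[c] table could be used:
Code:
#include <cstdint>
#if defined(_MSC_VER) && defined(_M_X64)
#include <intrin.h>   // _bittest64, _bittestandreset64
#endif

// Test then clear cell 'c' (0..80) in a 128-bit field stored as two 64-bit words.
// In practice the mask could come from a precomputed table (bitSet[c]) instead
// of the shift below.
inline bool testAndReset(uint64_t bf[2], int c) {
#if defined(_MSC_VER) && defined(_M_X64)
    // One "bit test and reset" on the right 64-bit word.
    return _bittestandreset64(reinterpret_cast<long long*>(&bf[c >> 6]), c & 63) != 0;
#else
    // Portable equivalent; compilers commonly emit bt/btr for this pattern.
    uint64_t mask = 1ULL << (c & 63);
    bool wasSet = (bf[c >> 6] & mask) != 0;
    bf[c >> 6] &= ~mask;
    return wasSet;
#endif
}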