m_b_metcalf wrote:Well, with my limited grasp of the subject, I would expect to give a NN a million puzzles and their solutions, ask it to work out what the constraints are and then how to solve the puzzles, then ask it to tackle champagne's file of the hardest. We wouldn't know how it had done it, but it would be interesting to know whether it takes milliseconds or millennia!

You can make it easier by giving it the constraints at the start.

It would probably need to be fed very simple examples first, with the difficulty increasing progressively. The hardest list is probably not a good test set; a more balanced set would likely be more useful.

Even with these simplifying conditions, it would remain an interesting problem. But a fundamental question remains: what is the problem we really want to solve? Do we merely want the NN to find a solution (and shall we be happy if the NN learns a DFS algorithm?) or do we want it to learn resolution rules so that we can understand its resolution paths? It seems you consider only the first case ("we wouldn't know how it had done it").

Methinks we can't expect the final NN to be the implementation of a computable function: puzzle --> solution.

NNs usually work for relatively fuzzy maps: input --> output. But what we'd want here is an exact map.

I strongly doubt this would be possible within the computational bounds of this universe.

As a result, it seems to me the final NN could only be (something equivalent to) a general algorithm (DFS, BFS... or some as yet unknown algorithm) or some implementation of sufficiently powerful resolution rules. That too is not what NNs are good at.
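For concreteness, here is a minimal sketch (my own illustration, not anything from this thread) of the kind of general DFS algorithm referred to above, written as a plain backtracking sudoku solver. The point is how little "learning" it contains: it is pure search over the three sudoku constraints.

```python
# Minimal DFS (backtracking) sudoku solver.
# The grid is a flat list of 81 ints; 0 marks an empty cell.

def candidates(grid, i):
    """Digits allowed at cell i under the row, column and box constraints."""
    r, c = divmod(i, 9)
    peers = set()
    for j in range(9):
        peers.add(grid[r * 9 + j])   # same row
        peers.add(grid[j * 9 + c])   # same column
    br, bc = 3 * (r // 3), 3 * (c // 3)
    for dr in range(3):
        for dc in range(3):
            peers.add(grid[(br + dr) * 9 + bc + dc])  # same 3x3 box
    return [d for d in range(1, 10) if d not in peers]

def solve(grid):
    """Depth-first search: fill the first empty cell, backtrack on failure."""
    try:
        i = grid.index(0)
    except ValueError:
        return grid                  # no empty cell left: solved
    for d in candidates(grid, i):
        grid[i] = d
        if solve(grid) is not None:
            return grid
    grid[i] = 0                      # undo the guess and backtrack
    return None
```

A NN that "finds a solution" without interpretable resolution rules would, at best, be an opaque re-implementation of something like this.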

If I had to use learning for solving sudoku, I'd choose symbolic learning. Even so, I'm happy my life doesn't depend on it, because it seems terribly difficult.

PS.: homework for NN students: write a NN that learns (something equivalent to) DFS (make additional assumptions as you want).

PPS.: don't take this PS too seriously.