Given the recent repartee, this post is a little dry. The saga ends in questions about numerical accuracy. Please bear with my rambling.

Suffering from my usual sophomoric enthusiasm (a euphemism for naivety), I tried to write a routine to convert the number of Sudoku solutions, expressed as 0x20a7b8c11b3000 * 9! * 2, to a decimal string. I got close, but stumbled over numerical accuracy issues. This bit of LongMult hackery (compatible with Ed’s T-class ct[] generator usage):

- Code: Select all
#include <stdio.h>
#include <math.h>

#define FBASE 1000000000.0

char * longmult(unsigned int ilong[], unsigned int n) {
    double fa, fb;
    static char prod[30] = "123456789";

    printf("*subtot 0x : %07x%07x\n", ilong[1], ilong[0]);
    fa = ilong[1];
    fa = fa * (2<<27);              /* 2<<27 == 2^28: shift high word up 28 bits */
    fa += ilong[0];
    printf("*subtot dbl: %020.0f\n", fa);

    fb = floor(fa/FBASE);           /* high segment, like >> in base 10^9 */
    fa = fmod(fa, FBASE);           /* low segment, like masking in base 10^9 */
    printf("*subtot fab: %011.0f,%09.0f\n", fb, fa);

    fa = fa * n;
    fb = fb * n + floor(fa/FBASE);  /* floor the carry, else "%.0f" may round fb up */
    fa = fmod(fa, FBASE);
    printf("*total fab : %.0f,%09.0f, n=%u\n", fb, fa, n);

    sprintf(prod, "%.0f%09.0f", fb, fa); /* format width must match FBASE */
    printf("*longmult : %s\n", prod);
    return prod;
}

produces, when called as longmult(ct, (unsigned int)362880*2) with the final ct[] result:

*subtot 0x : 20a7b8c11b3000
*subtot dbl: 00009191611210346496
*subtot fab: 00009191611,210346496
*total fab : 6670903752021,072936960, n=725760
*longmult : 6670903752021072936960

This converts 0x20a7b8c11b3000 to 9191611210346496 accurately, and for the 9!*2 factor produces the right answer: 6670903752021072936960. But I was just lucky (numerically). While testing/exploring the code I ran into numerical limits in both

- the ct[] to double conversion (when adding ct[0] to ct[1]<<28), and

- the two-segment double multiply.

Double addition is (apparently) only accurate to ~15 decimal digits. The 16-digit value 9191611210346496 just happened to be exact. (For 16 digits, err ~ +/- 1, with maybe ~10%? of values exact.) The two-segment multiply suffers similarly.

The algorithm used is (I believe) algebraically correct, but limited by the numerical accuracy of the platform and software. Looking at the code, you can’t tell where it will fail. Running the code doesn’t throw exceptions for inaccuracies; you have to find them by hand-check (using alternate tools) or ‘independent verification’. This leaves me with the following questions/requests:

What are the limits of float (double) accuracy? Obviously this depends on platform/compiler. What’s the spec for Pentium-class machines? How does this depend on compiler (options)?

Can someone point me to some trusted BigNum algorithms, or C/C++ code, for the likes of my LongMult kludge above?

The PatMax/Guenter/Ed programs are coming up with consistent numbers. Since they are consistent and implemented independently, I trust they’re correct (both in results and implementation). How numerically sensitive is this counting code to extension to bigger problems? There are obviously years of experience hidden in the code design, not immediately obvious, that can break when it is extended inappropriately.

P.S. I haven't looked at the Patmax code yet.