PIsaacson wrote:... It looks like the very lowest level routines are called so often that they greatly influence the results ...
OK, thanks. But ... hmm ... now I'm wondering if we've got the right sort of profiling here.
For example, suppose we're optimising a routine to sort a great many big blocks of data (not just pointers to the data). It will do this by repeatedly calling a very expensive CompareAndSwap routine:
- Code: Select all
function Sort():
    for i = N downto 2:
        for j = 1 to i-1:
            CompareAndSwap(block j, block i)
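To make the example concrete, here is a minimal C sketch of the same idea (NBLOCKS, BLOCKSIZE and the byte-wise comparison are made-up illustrative choices, not anything from the real solver):
- Code: Select all
#include <stdlib.h>
#include <string.h>

#define NBLOCKS   1000     /* illustrative values only           */
#define BLOCKSIZE 4096     /* each "block" is 4 KB of real data  */

static unsigned char block[NBLOCKS][BLOCKSIZE];

/* Compare two big blocks and swap them if they are out of order.
 * The memcmp/memcpy over 4 KB is what makes this routine so
 * expensive relative to the little loop that drives it.          */
static void CompareAndSwap(unsigned char *a, unsigned char *b)
{
    if (memcmp(a, b, BLOCKSIZE) > 0) {
        unsigned char tmp[BLOCKSIZE];
        memcpy(tmp, a, BLOCKSIZE);
        memcpy(a, b, BLOCKSIZE);
        memcpy(b, tmp, BLOCKSIZE);
    }
}

/* Selection-style sort: roughly N*N/2 calls to CompareAndSwap,
 * each pass leaving the largest remaining block in position i.   */
static void Sort(void)
{
    for (int i = NBLOCKS - 1; i >= 1; i--)
        for (int j = 0; j < i; j++)
            CompareAndSwap(block[j], block[i]);
}

int main(void)
{
    for (int i = 0; i < NBLOCKS; i++)          /* arbitrary test data */
        for (int j = 0; j < BLOCKSIZE; j++)
            block[i][j] = (unsigned char)rand();
    Sort();
    return 0;
}
Run that under a profiler and nearly every sample lands inside CompareAndSwap (or the memcmp/memcpy it calls), which is exactly the situation we're in.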
Now, I don't know how your profiler works, but I suppose it could count the timings either like this, charging each routine for the time spent in everything it calls as well as its own code (cumulative time):
- Code: Select all
100%  Main program
100%  \-- Sort()
 99%      \-- CompareAndSwap()
... or like this, charging each routine only for the time spent in its own code (self time):
- Code: Select all
  0%  Main program
  1%  \-- Sort()
 99%      \-- CompareAndSwap()
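For what it's worth, the difference between those two listings is just whether a routine gets charged for the time spent in its callees (cumulative time) or only for its own code (self time). Here is a rough C sketch of manual instrumentation that records both at once; prof_enter/prof_leave are hypothetical hooks you would have to call by hand, not any particular profiler's API, and it ignores recursion:
- Code: Select all
#include <time.h>

#define NFUNCS    8     /* one slot per instrumented routine (made up) */
#define MAXDEPTH 64

static double cumulative[NFUNCS];   /* routine plus everything it calls */
static double selftime[NFUNCS];     /* the routine's own code only      */

static struct { int func; double start; double child; } stack[MAXDEPTH];
static int depth;

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Call on entry to an instrumented routine. */
static void prof_enter(int func)
{
    stack[depth].func  = func;
    stack[depth].start = now();
    stack[depth].child = 0.0;
    depth++;
}

/* Call on exit: charge the elapsed time both ways. */
static void prof_leave(void)
{
    depth--;
    double elapsed = now() - stack[depth].start;
    int f = stack[depth].func;
    cumulative[f] += elapsed;                       /* gives the first listing  */
    selftime[f]   += elapsed - stack[depth].child;  /* gives the second listing */
    if (depth > 0)
        stack[depth - 1].child += elapsed;          /* roll my time up to my caller */
}
With CompareAndSwap doing nearly all the work, cumulative[] reproduces the first tree (Sort at ~100%) and selftime[] the second (Sort at ~1%).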
So ... can we either get profiling information in the first (cumulative) form above, or get the proportion of time spent executing each solving technique some other way - e.g. by wrapping each one like this:
- Code: Select all
function TechniqueX():
    timing[TECHNIQUE_X] -= thecurrenttime()
    ... original Technique X code here ...
    timing[TECHNIQUE_X] += thecurrenttime()
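Something along these lines would do it in C, assuming wall-clock time is good enough; TECHNIQUE_X / TECHNIQUE_Y, TechniqueX and ReportTimings are just placeholders for whatever the real solving routines are called:
- Code: Select all
#include <stdio.h>
#include <time.h>

enum { TECHNIQUE_X, TECHNIQUE_Y, NUM_TECHNIQUES };   /* placeholder names */

static double timing[NUM_TECHNIQUES];

static double thecurrenttime(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void TechniqueX(void)
{
    timing[TECHNIQUE_X] -= thecurrenttime();
    /* ... original Technique X code here ... */
    timing[TECHNIQUE_X] += thecurrenttime();
}

/* At the end of the run, report each technique's share of the total. */
static void ReportTimings(void)
{
    double total = 0.0;
    for (int t = 0; t < NUM_TECHNIQUES; t++)
        total += timing[t];
    for (int t = 0; t < NUM_TECHNIQUES; t++)
        printf("technique %d: %5.1f%%\n",
               t, total > 0.0 ? 100.0 * timing[t] / total : 0.0);
}
One caveat: if a wrapped technique calls another wrapped technique, the caller gets charged for the callee as well, so this gives cumulative-style numbers again unless the techniques are independent of each other.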