There are a number of explanations for the differences. I show no Turbot fish mostly because I use Havard's strong-link approach rather than searching for Turbot fish explicitly. Even so, a skyscraper is also a Sashimi X-wing, and my solver will pick it up as a finned fish. My solver doesn't distinguish between the other two types (a kite or a turbot fish) but treats them as X-cycles (two strong links). This makes the treatment of two strong links consistent with that of three or more strong links, where we definitely don't have names for all of the possibilities.
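To make the X-cycle view concrete, here is a rough Python sketch of a two-strong-link elimination (illustrative only, not my solver's actual code; the grid representation and all helper names here are made up for the example). The chain logic is: if one end of the first strong link is false, the other end is true, which kills the weakly linked end of the second strong link, which forces its far end true. So one of the two outer ends must contain the digit, and any cell that sees both can drop it. That same rule covers skyscrapers, 2-string kites, and turbot fish without naming them separately.

```python
from itertools import combinations

def units():
    """All 27 units (rows, columns, boxes) as lists of (row, col) cells."""
    rows = [[(r, c) for c in range(9)] for r in range(9)]
    cols = [[(r, c) for r in range(9)] for c in range(9)]
    boxes = [[(br + r, bc + c) for r in range(3) for c in range(3)]
             for br in (0, 3, 6) for bc in (0, 3, 6)]
    return rows + cols + boxes

def sees(a, b):
    """True if two distinct cells share a row, column, or box."""
    return a != b and (a[0] == b[0] or a[1] == b[1] or
                       (a[0] // 3, a[1] // 3) == (b[0] // 3, b[1] // 3))

def strong_links(cands, d):
    """Cell pairs forming a strong link on digit d: the only two
    positions for d in some unit.  cands maps cell -> set of digits."""
    links = []
    for unit in units():
        spots = [cell for cell in unit if d in cands.get(cell, set())]
        if len(spots) == 2:
            links.append(tuple(spots))
    return links

def turbot_eliminations(cands, d):
    """Cells where d can be removed, from every two-strong-link chain
    on digit d (skyscraper, 2-string kite, and turbot fish alike)."""
    elims = set()
    for (a, b), (c, e) in combinations(strong_links(cands, d), 2):
        # Try every way of bridging the two strong links with a weak link;
        # p and s are the outer ends, q and r the bridged inner ends.
        for p, q, r, s in ((a, b, c, e), (a, b, e, c),
                           (b, a, c, e), (b, a, e, c)):
            if q != r and sees(q, r):          # weak link joins the chains
                for cell in cands:
                    if d in cands[cell] and sees(cell, p) and sees(cell, s):
                        elims.add(cell)        # cell sees both chain ends
    return elims
```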
The rest of the differences could be because I made a mistake. Another possibility is differences in our random puzzle generators. I use suexg to generate random puzzles; I don't know if anyone has tested it to see whether it is unbiased (there was a lot of discussion about this earlier this year). If your generator is creating puzzles of a specific level, then there could be a bias that would explain some of the differences. Note too that the results are based on only 10,000 runs, so there will be some variation in the results.
Another big issue is the order of solving techniques.
doduff correctly suggested looking for all possible eliminations first and then performing the eliminations. This is a great idea for counting occurrences of techniques, but it would mean a major rewrite of my solver. Also, I think it's more human to make eliminations one at a time. This is probably most obvious in the occurrences of swordfish: back then they were very high in my solving hierarchy (since then I've moved them down). The ALS xz-rule was also pretty high, given the complexity of finding an ALS. Sue de Coq is high because I had placed it before the ALS xz-rule, and the techniques that follow it in the hierarchy show much lower counts. It doesn't surprise me that locked candidates and XY-wings are relatively high in occurrence.
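Here is a rough sketch of the two loops, just to show why the counts come out differently (again, not my solver's code; the candidate grid, the (name, finder) interface, and the elimination format are all made up for illustration). In the stepwise style, the hierarchy order decides which technique gets credit; in the batched style, every technique that could fire in the current state gets counted before anything is applied.

```python
def solve_stepwise(cands, techniques, counts):
    """One-at-a-time style: apply the first elimination found, then
    restart from the top of the hierarchy (counts depend on ordering).
    techniques is an ordered list of (name, finder) pairs, where a
    finder returns a list of (cell, digit) eliminations."""
    progress = True
    while progress:
        progress = False
        for name, find in techniques:          # ordered hierarchy
            hits = find(cands)
            if hits:
                cell, digit = hits[0]          # take just the first hit
                cands[cell].discard(digit)
                counts[name] = counts.get(name, 0) + 1
                progress = True
                break                          # restart at the easiest technique

def solve_batched(cands, techniques, counts):
    """doduff's suggestion: record every elimination available in the
    current state before applying any, so counts are order-independent."""
    progress = True
    while progress:
        pending = []
        for name, find in techniques:
            for cell, digit in find(cands):
                counts[name] = counts.get(name, 0) + 1
                pending.append((cell, digit))
        for cell, digit in pending:
            cands[cell].discard(digit)         # apply after the full sweep
        progress = bool(pending)
```

With the stepwise loop, a technique placed high in the list soaks up eliminations that lower techniques would otherwise have found, which is exactly what inflated my swordfish and Sue de Coq counts.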
I ran an update to the hierarchy a while ago but never posted it, so I'll go ahead and do so now. My results are in no way authoritative; I'd love to see your results as well.