The initial results for _find_gdfo on bzr's ancestry: with 25k revisions it takes 40.8M steps, with a peak todo of 10.6k and 9.7M nodes skipped. Overall it takes 3m55s, which is pretty unusable. Only 25% of the evaluations are 'wasted' (cases where we see a node via a longer path before we evaluate the node we thought we should evaluate first). My guess is that we get lots of 'small waves': we number a bunch of revisions based on path A, then discover they are reachable from path B, and renumber all of them. Perhaps walking from the largest rather than the smallest would be advantageous, though I have the feeling it would just give us 'long walks', where we walk from the new longest path down to the tips of everything underneath it.
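To make the renumbering behaviour concrete, here is a minimal sketch of the kind of naive GDFO (greatest distance from origin) walk being described. This is a hypothetical illustration, not bzr's actual _find_gdfo: the function name, the `parent_map` input, and the step counter are all my own; but it shows how discovering a longer path to an already-numbered node forces it, and everything below it, back onto the todo queue.

```python
from collections import deque

def find_gdfo_naive(parent_map):
    """Naive GDFO walk (illustrative sketch, not bzr's implementation).

    parent_map maps each node to a list of its parents. GDFO of a root
    is 1; otherwise it is 1 + max(gdfo of parents). We walk from the
    roots and renumber a node whenever a longer path to it is found,
    re-queueing it so its descendants get renumbered too -- the
    'small waves' effect described above.
    """
    # Invert the parent map so we can walk toward descendants.
    children = {node: [] for node in parent_map}
    for node, parents in parent_map.items():
        for parent in parents:
            children[parent].append(node)
    # Roots (no parents) start with gdfo = 1.
    todo = deque(n for n, ps in parent_map.items() if not ps)
    gdfo = {n: 1 for n in todo}
    steps = 0
    while todo:
        node = todo.popleft()
        steps += 1
        next_gdfo = gdfo[node] + 1
        for child in children[node]:
            if gdfo.get(child, 0) < next_gdfo:
                # First visit, or a longer path was found: (re)number
                # the child and re-walk everything underneath it.
                gdfo[child] = next_gdfo
                todo.append(child)
    return gdfo, steps
```

On a merge-heavy history this renumbers the same subgraph many times, which is where the 40.8M steps for 25k revisions would come from; ordering the todo queue by depth would be one way to avoid most of the re-walks.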