Committer: John Arbash Meinel
Date: 2009-03-18 22:45:24 UTC
mto: (3735.2.157 brisbane-core)
mto: This revision was merged to the branch mainline in revision 4280.
Revision ID: john@arbash-meinel.com-20090318224524-ve32it3ddqfzvi6q
Reverted to the same hash width, and bumped EXTRA_NULLS to 3.
Most entries in a hash bucket are genuinely random, so they don't trigger
extra comparisons, and walking 4-7 nodes at that level is fairly cheap.
My guess is that bumping EXTRA_NULLS has a bigger effect when you hit the
occasional non-random data, which forces expansion because it gets a
collision.
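To make the bucket-walk cost concrete, here is a minimal C sketch of scanning a
null-terminated bucket run. The index_entry layout, the count_matches helper, and
the test values are illustrative assumptions only, not the actual diff-delta index
structures.

    /* Hypothetical sketch: entries for one bucket sit consecutively in a
     * flat array, terminated by entries whose ptr is NULL (the "extra
     * nulls"), so walking a bucket needs no separate length field. */
    #include <stdio.h>
    #include <stddef.h>

    struct index_entry {
        const unsigned char *ptr;   /* NULL marks the end of the bucket's run */
        unsigned int val;           /* hash value cached with the entry */
    };

    /* Count entries in the bucket that share the probe hash; a run of 4-7
     * live entries costs only a handful of cheap comparisons. */
    static int count_matches(const struct index_entry *bucket, unsigned int hash)
    {
        int matches = 0;
        for (const struct index_entry *e = bucket; e->ptr != NULL; e++) {
            if (e->val == hash)
                matches++;
        }
        return matches;
    }

    int main(void)
    {
        /* Three live entries followed by three terminators (EXTRA_NULLS == 3). */
        struct index_entry bucket[] = {
            { (const unsigned char *)"a", 0x1234 },
            { (const unsigned char *)"b", 0x5678 },
            { (const unsigned char *)"c", 0x1234 },
            { NULL, 0 }, { NULL, 0 }, { NULL, 0 },
        };
        printf("matches: %d\n", count_matches(bucket, 0x1234));  /* prints 2 */
        return 0;
    }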
Data with repetition at a multiple of 16 (but not 16 itself) will cause this,
as you can get a large insertion with lots of dupes.
We filter out the case where the dupe is exactly a multiple of 16; we may want
to do something similar at larger ranges (or use limit_hash_table on the data,
possibly with a much smaller value than 64).
The most important next step is to handle the large update case.