Reverted back to the same hash width and bumped EXTRA_NULLS to 3. Most entries in a hash bucket are genuinely random, so they don't trigger extra comparisons, and walking 4-7 nodes is fairly cheap at that level. My guess is that bumping EXTRA_NULLS has a bigger effect on the occasional non-random data, which forces an expansion because it collides. Data whose repetition period is a multiple of 16 (but not 16 itself) will cause this, since you can get a large insertion with lots of dupes. We already filter out dupes at a period of exactly 16; we may want to do something similar at larger ranges (or use limit_hash_table on the data, possibly with a much smaller value than 64). Most important (next) is to handle the large update case.
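
As a reference for the bucket-limiting idea, here is a minimal sketch in C. The limit_hash_table name and the 64 ceiling come from the note above, but the signature, the hash_entry layout, and the keep-every-k-th thinning strategy are assumptions for illustration, not the actual implementation.

    #include <stddef.h>

    /*
     * Hypothetical chained hash-table entry: one node per indexed offset,
     * linked within a bucket.
     */
    struct hash_entry {
        struct hash_entry *next;
        size_t offset;          /* position of the indexed block in the data */
    };

    /*
     * Sketch of limit_hash_table(): cap the number of entries per bucket at
     * max_per_bucket (64 is the value mentioned above; a much smaller value
     * could be tried for repetitive data).  This version keeps roughly every
     * stride-th node so the surviving offsets stay spread across the input.
     */
    static void limit_hash_table(struct hash_entry **buckets, size_t nbuckets,
                                 unsigned max_per_bucket)
    {
        for (size_t i = 0; i < nbuckets; i++) {
            /* count the chain length first */
            unsigned count = 0;
            for (struct hash_entry *e = buckets[i]; e; e = e->next)
                count++;
            if (count <= max_per_bucket)
                continue;

            /* keep every stride-th node, unlink the rest */
            unsigned stride = (count + max_per_bucket - 1) / max_per_bucket;
            struct hash_entry **link = &buckets[i];
            unsigned idx = 0;
            for (struct hash_entry *e = buckets[i]; e; e = e->next, idx++) {
                if (idx % stride == 0) {
                    *link = e;          /* keep this node */
                    link = &e->next;
                }
                /* dropped nodes are left for the caller/allocator to reclaim */
            }
            *link = NULL;
        }
    }

Keeping a stride of surviving entries rather than truncating the tail of the chain keeps the remaining offsets spread across the whole input, which seems preferable for the large-insertion case described above; whether the real routine does the same is an assumption.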