will sometimes start over and compress the whole list to get tighter
packing. We get diminishing returns after a while, so this limits the
number of times we will try.
In testing, some values for 100k nodes::

                 w/o copy             w/ copy              w/ copy & save
    _max_repack  time   node count    time   node count    time   node count
     1            8.0s         704     8.8s         494    14.2s         390 #
     2            9.2s         491     9.6s         432 #  12.9s         390
     3           10.6s         430 #  10.8s         408    12.0s         390
    20           17.7s         390    17.8s         390
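The reason repacking yields tighter chunks is that every Z_SYNC_FLUSH pads the stream so the output can be read back immediately, while a single-pass recompression pays that cost only once. This standalone snippet (not bzrlib code; all names here are mine) shows the effect directly with zlib:

```python
import zlib

data = b"some text " * 200  # 2000 bytes of highly repetitive input

# Streaming writer that Z_SYNC_FLUSHes after every small write, as a
# packer might do to measure its compressed size so far.
c = zlib.compressobj()
flushed = b"".join(c.compress(data[i:i + 10]) + c.flush(zlib.Z_SYNC_FLUSH)
                   for i in range(0, len(data), 10))
flushed += c.flush()

# The same input compressed in a single pass with one final flush.
c2 = zlib.compressobj()
single = c2.compress(data) + c2.flush()

# Both decompress to the original, but the sync-flushed stream is far
# larger: each Z_SYNC_FLUSH emits an empty stored block and aligns the
# output to a byte boundary, so frequent flushing bloats the result.
assert zlib.decompress(flushed) == data
assert zlib.decompress(single) == data
```

With flushes every 10 bytes the streamed output here is many times larger than the single-pass output, which is the gap a repack recovers.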
In testing, some values for bzr.dev::

             w/o copy     w/ copy      w/ copy ins  w/ copy & save
    repack   time   MB    time   MB    time   MB     time   MB
     1        8.8  5.1     8.9  5.1     9.6  4.4     12.5  4.1
     2        9.6  4.4    10.1  4.3    10.4  4.2     11.1  4.1
     3       10.6  4.2    11.1  4.1    11.2  4.1     11.3  4.1
    20       12.9  4.1    12.2  4.1    12.3  4.1
In testing, some values for mysql-unpacked::

             w/o copy     w/ copy      w/ copy ins  w/ copy & save
    repack   time   MB    time   MB    time   MB     time   MB
     2       59.3 14.1    62.6 13.5    64.3 13.4
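The repack strategy benchmarked above can be sketched roughly as follows. This is a hypothetical illustration, not the real ChunkWriter: `pack_items`, `_one_pass_size`, and the control flow are my own simplified names and logic, using only standard zlib calls.

```python
import zlib

def _one_pass_size(items):
    """Compressed size of items in a single pass (one final flush only)."""
    c = zlib.compressobj()
    body = b"".join(c.compress(i) for i in items) + c.flush()
    return len(body)

def pack_items(items, chunk_size=4096, max_repack=2):
    """Greedy packer sketch (hypothetical, not bzrlib's real code).

    Each item is fed to a streaming compressor and Z_SYNC_FLUSHed so the
    compressed size so far can be checked. When an item overflows the
    chunk we "repack": recompress everything in one pass, which packs
    tighter because it avoids the per-item sync-flush padding. At most
    max_repack repacks are attempted, since each one rescans the whole
    accepted list for diminishing returns.
    Returns (packed_items, repacks_used).
    """
    accepted, repacks = [], 0
    comp = zlib.compressobj()
    size = 0
    for item in items:
        size += len(comp.compress(item) + comp.flush(zlib.Z_SYNC_FLUSH))
        if size <= chunk_size:
            accepted.append(item)
            continue
        if repacks < max_repack:
            repacks += 1
            if _one_pass_size(accepted + [item]) <= chunk_size:
                accepted.append(item)
                # Restart the stream over the repacked contents.
                comp = zlib.compressobj()
                size = sum(len(comp.compress(it)) for it in accepted)
                size += len(comp.flush(zlib.Z_SYNC_FLUSH))
                continue
        break  # this item does not fit; never split an item
    return accepted, repacks
```

Raising `max_repack` trades CPU time for density, which is exactly the pattern in the tables above: node count (or MB) keeps shrinking toward a floor while the time column climbs.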
:cvar _default_min_compression_size: The expected minimum compression.
    While packing nodes into the page, we won't Z_SYNC_FLUSH until we have
    received this much input data. This saves time, because we don't bloat