~bzr-pqm/bzr/bzr.dev


Viewing changes to bzrlib/groupcompress.py

  • Committer: Andrew Bennetts
  • Date: 2009-08-25 07:30:40 UTC
  • mto: This revision was merged to the branch mainline in revision 4657.
  • Revision ID: andrew.bennetts@canonical.com-20090825073040-rhtf9zc0ni3fdko1
Bump up the batch size to 256k, and fix the batch size estimate to use the length of the raw bytes that will be fetched (not the uncompressed bytes).


@@ -1001,8 +1001,8 @@
         # here will be wrong, so we might fetch bigger/smaller batches than
         # intended.
         if read_memo not in self.gcvf._group_cache:
-            start, end = index_memo[3:5]
-            self.total_bytes += end - start
+            byte_length = read_memo[2]
+            self.total_bytes += byte_length

     def empty_manager(self):
         if self.manager is not None:
@@ -1432,9 +1432,9 @@
         # Batch up as many keys as we can until either:
         #  - we encounter an unadded ref, or
         #  - we run out of keys, or
-        #  - the total bytes to retrieve for this batch > 64k
+        #  - the total bytes to retrieve for this batch > 256k
         batcher = _BatchingBlockFetcher(self, locations)
-        BATCH_SIZE = 2**16
+        BATCH_SIZE = 2**18
         for source, keys in source_keys:
             if source is self:
                 for key in keys:
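The batching logic this commit touches can be sketched as follows. This is a minimal, self-contained illustration, not bzrlib's actual code: the Batcher class is a hypothetical stand-in for _BatchingBlockFetcher, and the read_memo tuples are assumed to carry the raw (on-disk) byte length as their third element, as the diff's read_memo[2] suggests. The point is the estimate: totals accumulate raw bytes to be fetched, and a batch flushes once they exceed the 256 KiB threshold.

```python
BATCH_SIZE = 2**18  # 256 KiB of raw bytes per batch, per the new constant


class Batcher:
    """Hypothetical stand-in for _BatchingBlockFetcher (simplified)."""

    def __init__(self):
        self.total_bytes = 0
        self.memos = []

    def add(self, read_memo):
        # Assumed shape: (index, group_start, group_length). The third
        # element is the length of the raw bytes that will be fetched,
        # so the estimate no longer uses uncompressed sizes.
        byte_length = read_memo[2]
        self.total_bytes += byte_length
        self.memos.append(read_memo)

    def full(self):
        return self.total_bytes > BATCH_SIZE


def batch(read_memos):
    """Yield lists of read_memos, flushing once raw bytes exceed BATCH_SIZE."""
    batcher = Batcher()
    for memo in read_memos:
        batcher.add(memo)
        if batcher.full():
            yield batcher.memos
            batcher = Batcher()
    if batcher.memos:
        yield batcher.memos


# Five 100 KiB groups: the third memo pushes the running total past
# 256 KiB, so the first batch holds three memos and the second holds two.
memos = [(None, 0, 100 * 1024) for _ in range(5)]
print([len(b) for b in batch(memos)])  # → [3, 2]
```

With the old 64 KiB threshold every 100 KiB group would have flushed on its own; raising the limit to 256 KiB lets several groups travel in one round trip, and counting raw rather than uncompressed bytes keeps that estimate honest.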