~bzr-pqm/bzr/bzr.dev

Viewing changes to groupcompress.py

  • Committer: John Arbash Meinel
  • Date: 2009-03-02 22:38:28 UTC
  • mto: (0.17.31 trunk)
  • mto: This revision was merged to the branch mainline in revision 4280.
  • Revision ID: john@arbash-meinel.com-20090302223828-hyb4crn4w28sgvmc
Fix a bug when handling multiple large-range copies.

We were adjusting moff multiple times without adjusting it back.
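The moff mentioned in the message is the source offset of a copy in the delta-generation code; a single copy instruction can only cover a bounded range, so one large copy is emitted as several instructions, and moff was advanced for each one without being restored. A minimal sketch of that failure mode, assuming copies are capped at some MAX_COPY bytes (the names MAX_COPY, emit_copy, and handle_copy_* are illustrative, not the real implementation):

# A minimal sketch, assuming copy instructions are capped at MAX_COPY
# bytes; these names are illustrative, not the real delta code.
MAX_COPY = 64 * 1024

def emit_copy(out, offset, length):
    out.append(('copy', offset, length))

def handle_copy_buggy(out, moff, msize):
    # Failure mode from the commit message: moff is advanced once per
    # emitted instruction and never adjusted back, so the value
    # returned (and any later use of it) points past the original range.
    while msize > MAX_COPY:
        emit_copy(out, moff, MAX_COPY)
        moff += MAX_COPY
        msize -= MAX_COPY
    emit_copy(out, moff, msize)
    return moff

def handle_copy_fixed(out, moff, msize):
    # Track progress in a separate counter so moff is never disturbed.
    done = 0
    while msize - done > MAX_COPY:
        emit_copy(out, moff + done, MAX_COPY)
        done += MAX_COPY
    emit_copy(out, moff + done, msize - done)
    return moff

out = []
assert handle_copy_fixed(out, 1000, 3 * MAX_COPY) == 1000
assert handle_copy_buggy(out, 1000, 3 * MAX_COPY) == 1000 + 2 * MAX_COPY

Both variants emit the same instructions for a single call; the difference only shows up when the offset is reused afterwards, which is exactly the multiple-large-copy case the message describes.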

@@ -167,6 +167,10 @@
             new_chunks = []
         else:
             new_chunks = ['label:%s\nsha1:%s\n' % (label, sha1)]
+        if self._delta_index._source_offset != self.endpoint:
+            raise AssertionError('_source_offset != endpoint'
+                ' somehow the DeltaIndex got out of sync with'
+                ' the output lines')
         delta = self._delta_index.make_delta(target_text)
         if (delta is None
             or len(delta) > len(target_text) / 2):
@@ -178,10 +182,6 @@
                 new_chunks.insert(0, 'fulltext\n')
                 new_chunks.append('len:%s\n' % (input_len,))
             unadded_bytes = sum(map(len, new_chunks))
-            deltas_unadded = (self.endpoint - self._delta_index._source_offset)
-            if deltas_unadded != 0:
-                import pdb; pdb.set_trace()
-            unadded_bytes += deltas_unadded
             self._delta_index.add_source(target_text, unadded_bytes)
             new_chunks.append(target_text)
         else:
@@ -190,6 +190,7 @@
             else:
                 new_chunks.insert(0, 'delta\n')
                 new_chunks.append('len:%s\n' % (len(delta),))
+            # unadded_bytes = sum(map(len, new_chunks))
             # self._delta_index.add_source(delta, unadded_bytes)
             new_chunks.append(delta)
             unadded_bytes = sum(map(len, new_chunks))
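The visible change in this file is bookkeeping: the pdb-based workaround that compensated for the drift (old lines 181-184) is dropped, and a hard assertion is added instead, insisting that the DeltaIndex's _source_offset equals the compressor's endpoint before a new delta is computed. A minimal model of that invariant (DeltaIndexModel and CompressorModel are illustrative stand-ins, not bzrlib's real classes):

# Illustrative model of the bookkeeping the new assertion protects:
# every chunk written to the output stream must also be accounted for
# in the delta index, or later copies point at the wrong bytes.
class DeltaIndexModel(object):
    def __init__(self):
        self._source_offset = 0

    def add_source(self, source, unadded_bytes):
        # unadded_bytes covers header chunks that reached the output
        # stream without being indexed as delta source text.
        self._source_offset += unadded_bytes + len(source)


class CompressorModel(object):
    def __init__(self):
        self._delta_index = DeltaIndexModel()
        self.endpoint = 0

    def compress(self, header_chunks, target_text):
        # The invariant the real code now asserts: the index and the
        # output stream agree on the current offset.
        if self._delta_index._source_offset != self.endpoint:
            raise AssertionError('_source_offset != endpoint'
                ' somehow the DeltaIndex got out of sync with'
                ' the output lines')
        unadded_bytes = sum(map(len, header_chunks))
        self._delta_index.add_source(target_text, unadded_bytes)
        self.endpoint += unadded_bytes + len(target_text)


c = CompressorModel()
c.compress(['label:a\nsha1:1\n', 'fulltext\n', 'len:9\n'], 'some text')
c.compress(['label:b\nsha1:2\n', 'fulltext\n', 'len:9\n'], 'more text')
# Both calls pass the assertion: the offsets stay in sync.

As long as every code path that appends to the stream also calls add_source with the matching unadded byte count, the assertion never fires; with the moff bug fixed at its source, the compensating deltas_unadded adjustment is no longer needed here.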