# Copyright (C) 2006, 2007 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

"""DirState objects record the state of a directory and its bzr metadata.

Pseudo EBNF grammar for the state file. Fields are separated by NULLs, and
lines by NL. The field delimiters are omitted in the grammar, line delimiters
are not - this is done for clarity of reading. All string data is in utf8.

MINIKIND = "f" | "d" | "l" | "a" | "r";
WHOLE_NUMBER = {digit}, digit;
REVISION_ID = a non-empty utf8 string;

dirstate format = header line, full checksum, row count, parent details,
 ghost_details, entries;
header line = "#bazaar dirstate flat format 2", NL;
full checksum = "adler32: ", ["-"], WHOLE_NUMBER, NL;
row count = "num_entries: ", WHOLE_NUMBER, NL;
parent_details = WHOLE_NUMBER, {REVISION_ID}*, NL;
ghost_details = WHOLE_NUMBER, {REVISION_ID}*, NL;
entries = {entry};
entry = entry_key, current_entry_details, {parent_entry_details};
entry_key = dirname, basename, fileid;
current_entry_details = common_entry_details, working_entry_details;
parent_entry_details = common_entry_details, history_entry_details;
common_entry_details = MINIKIND, fingerprint, size, executable;
working_entry_details = packed_stat;
history_entry_details = REVISION_ID;

fingerprint = a nonempty utf8 sequence with meaning defined by minikind.

Given this definition, the following is useful to know:
entry (aka row) - all the data for a given key.
entry[0]: The key (dirname, basename, fileid)
entry[1]: The tree(s) data for this path and id combination.
entry[1][0]: The current tree
entry[1][1]: The second tree

For an entry for a tree, we have (using tree 0 - current tree) to demonstrate:
entry[1][0][0]: minikind
entry[1][0][1]: fingerprint
entry[1][0][2]: size
entry[1][0][3]: executable
entry[1][0][4]: packed_stat
OR (for non tree-0):
entry[1][1][4]: revision_id

There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now:
self._dirblocks[0] -> ('', [entry ...]),
and the entries in there are:
entries[0][0]: ''
entries[0][1]: ''
entries[0][2]: file_id
entries[1][0]: The tree data for the current tree for this fileid at /
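
For illustration, a root entry could look like this (the file id and packed
stat value here are hypothetical, not produced by any real tree):

  (('', '', 'a-root-id'),
   [('d', '', 0, False, 'packed-stat-value')])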

'r' is a relocated entry: This path is not present in this tree with this id,
  but the id can be found at another location. The fingerprint is used to
  point to the target location.
'a' is an absent entry: In that tree the id is not present at this path.
'd' is a directory entry: This path in this tree is a directory with the
  current file id. There is no fingerprint for directories.
'f' is a file entry: As for directory, but it's a file. The fingerprint is a
  sha1 value.
'l' is a symlink entry: As for directory, but a symlink. The fingerprint is the
  link target.
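
As an illustration of 'r' entries (hypothetical ids and values): after
renaming 'a' to 'b' in the working tree relative to the basis tree, the one
file id has two rows, each relocating to the other tree's location:

  (('', 'a', 'a-file-id'),
   [('r', 'b', 0, False, ''),                   # current: relocated to 'b'
    ('f', 'a-sha1', 10, False, 'rev-id-1')])    # basis: a file at 'a'
  (('', 'b', 'a-file-id'),
   [('f', 'a-sha1', 10, False, 'packed-stat'),  # current: a file at 'b'
    ('r', 'a', 0, False, '')])                  # basis: relocated to 'a'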

--- Format 1 had the following different definition: ---
rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
    {PARENT ROW}
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE NUMBER (* size *), NULL, "y" | "n", NULL,
    SHA1

PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.

In any tree, a kind of 'moved' indicates that the fingerprint field
(which we treat as opaque data specific to the 'kind' anyway) has the
details for the id of this row in that tree.

I'm strongly tempted to add an id->path index as well, but I think that
where we need id->path mapping; we also usually read the whole file, so
I'm going to skip that for the moment, as we have the ability to locate
via bisect any path in any tree, and if we lookup things by path, we can
accumulate an id->path mapping as we go, which will tend to match what we
looked for.

I plan to implement this asap, so please speak up now to alter/tweak the
design - and once we stabilise on this, I'll update the wiki page for
it.

The rationale for all this is that we want fast operations for the
common case (diff/status/commit/merge on all files) and extremely fast
operations for the less common but still frequent case (status/diff/commit
on specific files). Operations on specific files involve a scan for all
the children of a path, *in every involved tree*, which the current
format did not accommodate.
----

Design priorities:
 1) Fast end to end use for bzr's top 5 use cases. (commit/diff/status/merge/???)
 2) fall back to the current object model as needed.
 3) scale usably to the largest trees known today - say 50K entries. (mozilla
    is an example of this)

Locking:

Eventually reuse dirstate objects across locks IFF the dirstate file has not
been modified, but will require that we flush/ignore cached stat-hit data
because we won't want to restat all files on disk just because a lock was
acquired, yet we cannot trust the data after the previous lock was released.

Memory representation:
 vector of all directories, and vector of the children?
   i.e.
     root_entries = (direntry for root, [parent_direntries_for_root]),
     dirblocks = [
     ('', ['data for achild', 'data for bchild', 'data for cchild'])
     ('dir', ['achild', 'cchild', 'echild'])
     ]
   - single bisect to find N subtrees from a path spec
   - in-order for serialisation - this is 'dirblock' grouping.
   - insertion of a file '/a' affects only the '/' child-vector, that is, to
     insert 10K elements from scratch does not generate O(N^2) memmoves of a
     single vector; each memmove is within an individual directory's vector,
     which tends to stay a manageable size. Will scale badly on trees with
     10K entries in a single directory. Compare with Inventory.InventoryDirectory
     which has a dictionary for the children. No bisect capability, can only
     probe for exact matches, or grab all elements and sort.
   - What's the risk of error here? Once we have the base format being processed
     we should have a net win regardless of optimality. So we are going to
     go with what seems reasonable.
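
As a sketch of the bisect idea (illustrative only - the names and the exact
path-ordering comparison used by the real implementation may differ):

  import bisect

  def find_dirblock(dirblocks, dirname):
      # dirblocks is a list of (dirname, entries) tuples sorted by dirname.
      index = bisect.bisect_left(dirblocks, (dirname, []))
      if index < len(dirblocks) and dirblocks[index][0] == dirname:
          return index    # the block for dirname is present
      return None         # absent: empty directory, or not loaded yet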

maybe we should do a test profile of these core structures - 10K simulated
searches/lookups/etc?

Objects for each row?
The lifetime of Dirstate objects is currently per lock, but see above for
possible extensions. The lifetime of a row from a dirstate is expected to be
very short in the optimistic case: which we are optimising for. For instance,
subtree status will determine from analysis of the disk data what rows need to
be examined at all, and will be able to determine from a single row whether
that file has altered or not, so we are aiming to process tens of thousands of
entries each second within the dirstate context, before exposing anything to
the larger codebase. This suggests we want the time for a single file
comparison to be < 0.1 milliseconds. That would give us 10,000 paths per
second processed, and to scale to 100 thousand we'll need another order of
magnitude to do that. Now, as the lifetime for all unchanged entries is the
time to parse, stat the file on disk, and then immediately discard, the
overhead of object creation becomes a significant cost.

Figures: Creating a tuple from 3 elements was profiled at 0.0625
microseconds, whereas creating an object which is subclassed from tuple was
0.500 microseconds, and creating an object with 3 elements and slots was 3
microseconds long. 0.1 milliseconds is 100 microseconds, and ideally we'll get
down to 10 microseconds for the total processing - having 33% of that be object
creation is a huge overhead. There is a potential cost in using tuples within
each row which is that the conditional code to do comparisons may be slower
than method invocation, but method invocation is known to be slow due to stack
frame creation, so avoiding methods in these tight inner loops is unfortunately
desirable. We can consider a pyrex version of this with objects in future if
desired.
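
A sketch of how figures of this kind can be measured (illustrative only; it
assumes Python 2.6+ for timeit.timeit, absolute numbers vary by machine, and
the lambda call overhead inflates all three measurements equally):

  import timeit

  class T(tuple):
      pass

  class S(object):
      __slots__ = ('a', 'b', 'c')
      def __init__(self, a, b, c):
          self.a, self.b, self.c = a, b, c

  a, b, c = 1, 2, 3
  # timeit.timeit runs 1,000,000 iterations, so a total in seconds reads
  # directly as microseconds per operation.
  print timeit.timeit(lambda: (a, b, c))      # plain tuple
  print timeit.timeit(lambda: T((a, b, c)))   # tuple subclass
  print timeit.timeit(lambda: S(a, b, c))     # object with __slots__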
"""

from bzrlib.osutils import (


class _Bisector(object):
    """This just keeps track of information as we are bisecting."""


class DirState(object):
    """Record directory and metadata state for fast access.

    A dirstate is a specialised data structure for managing local working
    tree state information. It's not yet well defined whether it is platform
    specific, and if it is how we detect/parameterise that.
    """

    _kind_to_minikind = {'absent': 'a', 'file': 'f', 'directory': 'd',
                         'relocated': 'r', 'symlink': 'l'}
    _minikind_to_kind = {'a': 'absent', 'f': 'file', 'd': 'directory',
                         'l': 'symlink', 'r': 'relocated'}
    _to_yesno = {True: 'y', False: 'n'} # TODO profile the performance gain
    # of using int conversion rather than a dict here. AND BLAME ANDREW IF

    # TODO: jam 20070221 Make sure we handle when there are duplicated records
    # (like when we remove + add the same path, or we have a rename)
    # TODO: jam 20070221 Figure out what to do if we have a record that exceeds
    # the BISECT_PAGE_SIZE. For now, we just have to make it large enough
    # that we are sure a single record will always fit.
    BISECT_PAGE_SIZE = 4096

    NOT_IN_MEMORY = 0
    IN_MEMORY_UNMODIFIED = 1
    IN_MEMORY_MODIFIED = 2

    # A pack_stat (the x's) that is just noise and will never match the output
    # of base64 encode.
    NULLSTAT = 'x' * 32
    NULL_PARENT_DETAILS = ('a', '', 0, False, '')

    HEADER_FORMAT_2 = '#bazaar dirstate flat format 2\n'
    HEADER_FORMAT_3 = '#bazaar dirstate flat format 3\n'

    def __init__(self, path):
        """Create a DirState object.

        :attr _root_entrie: The root row of the directory/file information,
            - contains the path to / - '', ''
            - kind of 'directory',
            - the file id of the root in utf8
            - and no sha information.
        :param path: The path at which the dirstate file on disk should live.
        """
        # _header_state and _dirblock_state represent the current state
        # of the dirstate metadata and the per-row data respectively.
        # NOT_IN_MEMORY indicates that no data is in memory
        # IN_MEMORY_UNMODIFIED indicates that what we have in memory
        # is the same as is on disk
        # IN_MEMORY_MODIFIED indicates that we have a modified version
        # of what is on disk.
        # In future we will add more granularity, for instance _dirblock_state
        # will probably support partially-in-memory as a separate variable,
        # allowing for partially-in-memory unmodified and partially-in-memory
        # modified states.
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._dirblocks = []
        self._ghosts = []
        self._parents = []
        self._state_file = None
        self._filename = path
        self._lock_token = None
        self._id_index = None
        self._end_of_header = None
        self._split_path_cache = {}
        self._bisect_page_size = DirState.BISECT_PAGE_SIZE

    def add(self, path, file_id, kind, stat, link_or_sha1):
        """Add a path to be tracked.

        :param path: The path within the dirstate - '' is the root, 'foo' is the
            path foo within the root, 'foo/bar' is the path bar within foo
            within the root.
        :param file_id: The file id of the path being added.
        :param kind: The kind of the path.
        :param stat: The output of os.lstat for the path.
        :param link_or_sha1: The sha value of the file, or the target of a
            symlink. '' for directories.
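
        For illustration (hypothetical values)::

            state.add('foo/bar', 'bar-file-id', 'file',
                      os.lstat('foo/bar'), 'sha1-of-bar')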
        """
        # find the block it's in.
        # find the location in the block.
        # check it's not there.
        # add it.
        #------- copied from inventory.make_entry
        # --- normalized_filename wants a unicode basename only, so get one.
        dirname, basename = osutils.split(path)
        # we don't import normalized_filename directly because we want to be
        # able to change the implementation at runtime for tests.
        norm_name, can_access = osutils.normalized_filename(basename)
        if norm_name != basename:
            if can_access:
                basename = norm_name
            else:
                raise errors.InvalidNormalization(path)
        # now that we've normalised, we need the correct utf8 path and
        # dirname and basename elements. This single encode and split should be
        # faster than three separate encodes.
        utf8path = (dirname + '/' + basename).strip('/').encode('utf8')
        dirname, basename = osutils.split(utf8path)
        assert file_id.__class__ == str, \
            "must be a utf8 file_id not %s" % (type(file_id))
        # Make sure the file_id does not exist in this tree
        file_id_entry = self._get_entry(0, fileid_utf8=file_id)
        if file_id_entry != (None, None):
            path = osutils.pathjoin(file_id_entry[0][0], file_id_entry[0][1])
            kind = DirState._minikind_to_kind[file_id_entry[1][0][0]]
            info = '%s:%s' % (kind, path)
            raise errors.DuplicateFileId(file_id, info)
        entry_key = (dirname, basename, file_id)
        self._read_dirblocks_if_needed()
        block_index, present = self._find_block_index_from_key(entry_key)
        if not present:
            # The block where we want to put the file is not present. But it
            # might be because the directory was empty, or not loaded yet. Look
            # for a parent entry, if not found, raise NotVersionedError
            parent_dir, parent_base = osutils.split(dirname)
            parent_block_idx, parent_entry_idx, _, parent_present = \
                self._get_block_entry_index(parent_dir, parent_base, 0)
            if not parent_present:
                raise errors.NotVersionedError(path, str(self))
            self._ensure_block(parent_block_idx, parent_entry_idx, dirname)
        block = self._dirblocks[block_index][1]
        if stat is None:
            size = 0
            packed_stat = DirState.NULLSTAT
        else:
            size = stat.st_size
            packed_stat = pack_stat(stat)
        parent_info = self._empty_parent_info()
        minikind = DirState._kind_to_minikind[kind]
        if kind == 'file':
            entry_data = entry_key, [
                (minikind, link_or_sha1, size, False, packed_stat),
                ] + parent_info
        elif kind == 'directory':
            entry_data = entry_key, [
                (minikind, '', 0, False, packed_stat),
                ] + parent_info
        elif kind == 'symlink':
            entry_data = entry_key, [
                (minikind, link_or_sha1, size, False, packed_stat),
                ] + parent_info
        else:
            raise errors.BzrError('unknown kind %r' % kind)
        entry_index, present = self._find_entry_index(entry_key, block)
        assert not present, "basename %r already added" % basename
        block.insert(entry_index, entry_data)

        if kind == 'directory':
            # insert a new dirblock
            self._ensure_block(block_index, entry_index, utf8path)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index is not None:
            self._id_index.setdefault(entry_key[2], set()).add(entry_key)

    def _bisect(self, dir_name_list):
        """Bisect through the disk structure for specific rows.

        :param dir_name_list: A list of (dir, name) pairs.
        :return: A dict mapping (dir, name) => entry for found entries. Missing
            entries will not be in the map.
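
        For illustration (hypothetical paths)::

            found = state._bisect([('', 'a'), ('dir', 'b')])
            # found maps each located (dir, name) pair to the parsed
            # entries for it, e.g. {('', 'a'): [entry, ...]}.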
        """
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # parts of the file from disk.
        assert self._dirblock_state == DirState.NOT_IN_MEMORY

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from (dir, name) => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_name_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        # low -> the first byte offset to read (inclusive)
        # high -> the last byte offset (inclusive)
        # dir_names -> The list of (dir, name) pairs that should be found in
        #              the [low, high] range
        pending = [(low, high, dir_name_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_files = pending.pop()

            if not cur_files or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 2:
                # We didn't even get a filename here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Find what entries we are looking for, which occur before and
            # after this first record.
            first_dir_name = (first_fields[1], first_fields[2])
            first_loc = bisect.bisect_left(cur_files, first_dir_name)

            # These exist before the current location
            pre = cur_files[:first_loc]
            # These occur after the current location, which may be in the
            # data we read, or might be after the last entry
            post = cur_files[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have files after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir_name = (last_fields[1], last_fields[2])
                last_loc = bisect.bisect_right(post, last_dir_name)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir_name:
                        # We might need to go before this location
                        pre.append(first_dir_name)
                    if middle_files[-1] == last_dir_name:
                        post.insert(0, last_dir_name)

                    # Find out what paths we have
                    paths = {first_dir_name:[first_fields]}
                    # last_dir_name might == first_dir_name so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir_name, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        dir_name = (fields[1], fields[2])
                        paths.setdefault(dir_name, []).append(fields)

                    for dir_name in middle_files:
                        for fields in paths.get(dir_name, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(dir_name, []).append(entry)

                # Now we have split up everything into pre, middle, and post, and
                # we have handled everything that fell in 'middle'.
                # We add 'post' first, so that we prefer to seek towards the
                # beginning, so that we will tend to go as early as we need, and
                # then only seek forward after that.
                if post:
                    pending.append((after, high, post))
                if pre:
                    pending.append((low, start-1, pre))

        # Consider that we may want to return the directory entries in sorted
        # order. For now, we just return them in whatever order we found them,
        # and leave it up to the caller if they care if it is ordered or not.
        return found

    def _bisect_dirblocks(self, dir_list):
        """Bisect through the disk structure to find entries in given dirs.

        _bisect_dirblocks is meant to find the contents of directories, which
        differs from _bisect, which only finds individual entries.

        :param dir_list: A sorted list of directory names ['', 'dir', 'foo'].
        :return: A map from dir => entries_for_dir
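
        For illustration (hypothetical directories)::

            found = state._bisect_dirblocks(['', 'dir'])
            # found maps each located directory to its parsed entries,
            # e.g. {'': [entry, ...], 'dir': [entry, ...]}.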
        """
        # TODO: jam 20070223 A lot of the bisecting logic could be shared
        #       between this and _bisect. It would require parameterizing the
        #       inner loop with a function, though. We should evaluate the
        #       performance difference.
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # parts of the file from disk.
        assert self._dirblock_state == DirState.NOT_IN_MEMORY

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from dir => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        # low -> the first byte offset to read (inclusive)
        # high -> the last byte offset (inclusive)
        # dirs -> The list of directories that should be found in
        #         the [low, high] range
        pending = [(low, high, dir_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_dirs = pending.pop()

            if not cur_dirs or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 1:
                # We didn't even get a dirname here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Find what entries we are looking for, which occur before and
            # after this first record.
            first_dir = first_fields[1]
            first_loc = bisect.bisect_left(cur_dirs, first_dir)

            # These exist before the current location
            pre = cur_dirs[:first_loc]
            # These occur after the current location, which may be in the
            # data we read, or might be after the last entry
            post = cur_dirs[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have records to look at after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir = last_fields[1]
                last_loc = bisect.bisect_right(post, last_dir)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir:
                        # We might need to go before this location
                        pre.append(first_dir)
                    if middle_files[-1] == last_dir:
                        post.insert(0, last_dir)

                    # Find out what paths we have
                    paths = {first_dir:[first_fields]}
                    # last_dir might == first_dir so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        paths.setdefault(fields[1], []).append(fields)

                    for cur_dir in middle_files:
                        for fields in paths.get(cur_dir, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(cur_dir, []).append(entry)

                # Now we have split up everything into pre, middle, and post, and
                # we have handled everything that fell in 'middle'.
                # We add 'post' first, so that we prefer to seek towards the
                # beginning, so that we will tend to go as early as we need, and
                # then only seek forward after that.
                if post:
                    pending.append((after, high, post))
                if pre:
                    pending.append((low, start-1, pre))

        return found

    def _bisect_recursive(self, dir_name_list):
        """Bisect for entries for all paths and their children.

        This will use bisect to find all records for the supplied paths. It
        will then continue to bisect for any records which are marked as
        directories. (and renames?)

        :param dir_name_list: A sorted list of (dir, name) pairs
            eg: [('', 'a'), ('', 'f'), ('a/b', 'c')]
        :return: A dictionary mapping (dir, name, file_id) => [tree_info]
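
        For illustration (hypothetical paths)::

            info = state._bisect_recursive([('', 'dir')])
            # info includes the entry for 'dir' itself plus, because it is
            # a directory, entries for everything found beneath 'dir'.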
        """
        # Map from (dir, name, file_id) => [tree_info]
        found = {}

        found_dir_names = set()

        # Directories that have been read
        processed_dirs = set()
        # Get the ball rolling with the first bisect for all entries.
        newly_found = self._bisect(dir_name_list)

        while newly_found:
            # Directories that need to be read
            pending_dirs = set()
            paths_to_search = set()
            for entry_list in newly_found.itervalues():
                for dir_name_id, trees_info in entry_list:
                    found[dir_name_id] = trees_info
                    found_dir_names.add(dir_name_id[:2])
                    is_dir = False
                    for tree_info in trees_info:
                        minikind = tree_info[0]
                        if minikind == 'd':
                            if is_dir:
                                # We already processed this one as a directory,
                                # we don't need to do the extra work again.
                                continue
                            subdir, name, file_id = dir_name_id
                            path = osutils.pathjoin(subdir, name)
                            is_dir = True
                            if path not in processed_dirs:
                                pending_dirs.add(path)
                        elif minikind == 'r':
                            # Rename, we need to directly search the target
                            # which is contained in the fingerprint column
                            dir_name = osutils.split(tree_info[1])
                            if dir_name[0] in pending_dirs:
                                # This entry will be found in the dir search
                                continue
                            # TODO: We need to check if this entry has
                            #       already been found. Otherwise we might be
                            #       hitting infinite recursion.
                            if dir_name not in found_dir_names:
                                paths_to_search.add(dir_name)
            # Now we have a list of paths to look for directly, and
            # directory blocks that need to be read.
            # newly_found is mixing the keys between (dir, name) and path
            # entries, but that is okay, because we only really care about the
            # file_id anyway.
            newly_found = self._bisect(sorted(paths_to_search))
            newly_found.update(self._bisect_dirblocks(sorted(pending_dirs)))
            processed_dirs.update(pending_dirs)
        return found

    def _empty_parent_info(self):
        return [DirState.NULL_PARENT_DETAILS] * (len(self._parents) -
                                                 len(self._ghosts))

    def _ensure_block(self, parent_block_index, parent_row_index, dirname):
        """Ensure a block for dirname exists.

        This function exists to let callers which know that there is a
        directory dirname ensure that the block for it exists. This block can
        fail to exist because of demand loading, or because a directory had no
        children. In either case it is not an error. It is however an error to
        call this if there is no parent entry for the directory, and thus the
        function requires the coordinates of such an entry to be provided.

        The root row is special cased and can be indicated with a parent block
        and row index of -1.

        :param parent_block_index: The index of the block in which dirname's row
            exists.
        :param parent_row_index: The index in the parent block where the row
            exists.
        :param dirname: The utf8 dirname to ensure there is a block for.
        :return: The index for the block.
        """
        if dirname == '' and parent_row_index == 0 and parent_block_index == 0:
            # This is the signature of the root row, and the
            # contents-of-root row is always index 1
            return 1
        # the basename of the directory must be the end of its full name.
        if not (parent_block_index == -1 and
                parent_row_index == -1 and dirname == ''):
            assert dirname.endswith(
                self._dirblocks[parent_block_index][1][parent_row_index][0][1])
        block_index, present = self._find_block_index_from_key((dirname, '', ''))
        if not present:
            ## In future, when doing partial parsing, this should load and
            # populate the entire block.
            self._dirblocks.insert(block_index, (dirname, []))
        return block_index

    def _entries_to_current_state(self, new_entries):
        """Load new_entries into self.dirblocks.

        Process new_entries into the current state object, making them the active
        state.

        :param new_entries: A sorted list of entries. This function does not sort
            to prevent unneeded overhead when callers have a sorted list already.
        """
        assert new_entries[0][0][0:2] == ('', ''), \
            "Missing root row %r" % (new_entries[0][0],)
        # The two blocks here are deliberate: the root block and the
        # contents-of-root block.
        self._dirblocks = [('', []), ('', [])]
        current_block = self._dirblocks[0][1]
        current_dirname = ''
        append_entry = current_block.append
        for entry in new_entries:
            if entry[0][0] != current_dirname:
                # new block - different dirname
                current_block = []
                current_dirname = entry[0][0]
                self._dirblocks.append((current_dirname, current_block))
                append_entry = current_block.append
            # append the entry to the current block
            append_entry(entry)
        self._split_root_dirblock_into_contents()

    def _split_root_dirblock_into_contents(self):
        """Split the root dirblocks into root and contents-of-root.

        After parsing by path, we end up with root entries and contents-of-root
        entries in the same block. This loop splits them out again.
        """
        # The above loop leaves the "root block" entries mixed with the
        # "contents-of-root block". But we don't want an if check on
        # all entries, so instead we just fix it up here.
        assert self._dirblocks[1] == ('', [])
        root_block = []
        contents_of_root_block = []
        for entry in self._dirblocks[0][1]:
            if not entry[0][1]: # This is a root entry
                root_block.append(entry)
            else:
                contents_of_root_block.append(entry)
        self._dirblocks[0] = ('', root_block)
        self._dirblocks[1] = ('', contents_of_root_block)

    def _entry_to_line(self, entry):
        """Serialize entry to a NULL delimited line ready for _get_output_lines.

        :param entry: An entry_tuple as defined in the module docstring.
        """
        entire_entry = list(entry[0])
        for tree_number, tree_data in enumerate(entry[1]):
            # (minikind, fingerprint, size, executable, tree_specific_string)
            entire_entry.extend(tree_data)
            # 3 for the key, 5 for the fields per tree.
            tree_offset = 3 + tree_number * 5
            # minikind
            entire_entry[tree_offset + 0] = tree_data[0]
            # size
            entire_entry[tree_offset + 2] = str(tree_data[2])
            # executable
            entire_entry[tree_offset + 3] = DirState._to_yesno[tree_data[3]]
        return '\0'.join(entire_entry)

    def _fields_per_entry(self):
        """How many null separated fields should be in each entry row.

        Each line now has an extra '\n' field which is not used
        so we just skip over it.
        entry size:
            3 fields for the key
            + number of fields per tree_data (5) * tree count
            + 1 for the newline
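
        For example (illustrative): with one present parent tree,
        tree_count is 2, so each row has 3 + 5*2 + 1 = 14 fields.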
        """
        tree_count = 1 + self._num_present_parents()
        return 3 + 5 * tree_count + 1

    def _find_block(self, key, add_if_missing=False):
        """Return the block that key should be present in.

        :param key: A dirstate entry key.
        :return: The block tuple.
        """
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            if not add_if_missing:
                # check to see if key is versioned itself - we might want to
                # add it anyway, because dirs with no entries don't get a
                # dirblock at parse time.
                # This is an uncommon branch to take: most dirs have children,
                # and most code works with versioned paths.
                parent_base, parent_name = osutils.split(key[0])
                if not self._get_block_entry_index(parent_base, parent_name, 0)[3]:
                    # some parent path has not been added - it's an error to add
                    # this child
                    raise errors.NotVersionedError(key[0:2], str(self))
            self._dirblocks.insert(block_index, (key[0], []))
        return self._dirblocks[block_index]

    def _find_block_index_from_key(self, key):
        """Find the dirblock index for a key.

        :return: The block index, True if the block for the key is present.
        """
        if key[0:2] == ('', ''):
            return 0, True
        block_index = bisect_dirblock(self._dirblocks, key[0], 1,
                                      cache=self._split_path_cache)
        # _right returns one-past-where-key is so we have to subtract
        # one to use it. we use _right here because there are two
        # '' blocks - the root, and the contents of root
        # we always have a minimum of 2 in self._dirblocks: root and
        # root-contents, and for '', we get 2 back, so this is
        # simple and correct:
        present = (block_index < len(self._dirblocks) and
            self._dirblocks[block_index][0] == key[0])
        return block_index, present

    def _find_entry_index(self, key, block):
        """Find the entry index for a key in a block.

        :return: The entry index, True if the entry for the key is present.
        """
        entry_index = bisect.bisect_left(block, (key, []))
        present = (entry_index < len(block) and
            block[entry_index][0] == key)
        return entry_index, present

    def from_tree(tree, dir_state_filename):
        """Create a dirstate from a bzr Tree.

        :param tree: The tree which should provide parent information and
            inventory data.
        :return: a DirState object which is currently locked for writing.
            (it was locked by DirState.initialize)
        """
        result = DirState.initialize(dir_state_filename)
        try:
            tree.lock_read()
            try:
                parent_ids = tree.get_parent_ids()
                num_parents = len(parent_ids)
                parent_trees = []
                for parent_id in parent_ids:
                    parent_tree = tree.branch.repository.revision_tree(parent_id)
                    parent_trees.append((parent_id, parent_tree))
                    parent_tree.lock_read()
                result.set_parent_trees(parent_trees, [])
                result.set_state_from_inventory(tree.inventory)
            finally:
                for revid, parent_tree in parent_trees:
                    parent_tree.unlock()
                tree.unlock()
        except:
            # The caller won't have a chance to unlock this, so make sure we
            # clean up ourselves.
            result.unlock()
            raise
        return result

    def get_ghosts(self):
        """Return a list of the parent tree revision ids that are ghosts."""
        self._read_header_if_needed()
        return self._ghosts

    def get_lines(self):
        """Serialise the entire dirstate to a sequence of lines."""
        if (self._header_state == DirState.IN_MEMORY_UNMODIFIED and
            self._dirblock_state == DirState.IN_MEMORY_UNMODIFIED):
            # read what's on disk.
            self._state_file.seek(0)
            return self._state_file.readlines()
        lines = []
        lines.append(self._get_parents_line(self.get_parent_ids()))
        lines.append(self._get_ghosts_line(self._ghosts))
        # append the root line which is special cased
        lines.extend(map(self._entry_to_line, self._iter_entries()))
        return self._get_output_lines(lines)

    def _get_ghosts_line(self, ghost_ids):
        """Create a line for the state file for ghost information."""
        return '\0'.join([str(len(ghost_ids))] + ghost_ids)

    def _get_parents_line(self, parent_ids):
        """Create a line for the state file for parents information."""
        return '\0'.join([str(len(parent_ids))] + parent_ids)

    def _get_fields_to_entry(self):
        """Get a function which converts entry fields into an entry record.

        This handles size and executable, as well as parent records.

        :return: A function which takes a list of fields, and returns an
            appropriate record for storing in memory.
        """
        # This is intentionally unrolled for performance
        num_present_parents = self._num_present_parents()
        if num_present_parents == 0:
            def fields_to_entry_0_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],        # minikind
                        fields[4],        # fingerprint
                        _int(fields[5]),  # size
                        fields[6] == 'y', # executable
                        fields[7],        # packed_stat or revision_id
                    )])
            return fields_to_entry_0_parents
        elif num_present_parents == 1:
            def fields_to_entry_1_parent(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],         # minikind
                        fields[4],         # fingerprint
                        _int(fields[5]),   # size
                        fields[6] == 'y',  # executable
                        fields[7],         # packed_stat or revision_id
                    ),
                    ( # Parent 1
                        fields[8],         # minikind
                        fields[9],         # fingerprint
                        _int(fields[10]),  # size
                        fields[11] == 'y', # executable
                        fields[12],        # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_1_parent
        elif num_present_parents == 2:
            def fields_to_entry_2_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],         # minikind
                        fields[4],         # fingerprint
                        _int(fields[5]),   # size
                        fields[6] == 'y',  # executable
                        fields[7],         # packed_stat or revision_id
                    ),
                    ( # Parent 1
                        fields[8],         # minikind
                        fields[9],         # fingerprint
                        _int(fields[10]),  # size
                        fields[11] == 'y', # executable
                        fields[12],        # packed_stat or revision_id
                    ),
                    ( # Parent 2
                        fields[13],        # minikind
                        fields[14],        # fingerprint
                        _int(fields[15]),  # size
                        fields[16] == 'y', # executable
                        fields[17],        # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_2_parents
        else:
            def fields_to_entry_n_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                trees = [(fields[cur],          # minikind
                          fields[cur+1],        # fingerprint
                          _int(fields[cur+2]),  # size
                          fields[cur+3] == 'y', # executable
                          fields[cur+4],        # stat or revision_id
                         ) for cur in xrange(3, len(fields)-1, 5)]
                return path_name_file_id_key, trees
            return fields_to_entry_n_parents

    def get_parent_ids(self):
        """Return a list of the parent tree ids for the directory state."""
        self._read_header_if_needed()
        return list(self._parents)

    def _get_block_entry_index(self, dirname, basename, tree_index):
        """Get the coordinates for a path in the state structure.

        :param dirname: The utf8 dirname to lookup.
        :param basename: The utf8 basename to lookup.
        :param tree_index: The index of the tree for which this lookup should
            be attempted.
        :return: A tuple describing where the path is located, or should be
            inserted. The tuple contains four fields: the block index, the row
            index, and two booleans which are True when the directory is
            present, and when the entire path is present. There is no
            guarantee that either coordinate is currently reachable unless the
            found field for it is True. For instance, a directory not present
            in the searched tree may be returned with a value one greater than
            the current highest block offset. The directory present field will
            always be True when the path present field is True. The directory
            present field does NOT indicate that the directory is present in
            the searched tree, rather it indicates that there are at least
            some files in some tree present there.
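
        For illustration (hypothetical layout): looking up ('dir', 'sub')
        when only 'dir' itself is versioned might return
        (2, 0, True, False) - the block for 'dir' was found (with 0 as the
        insertion point for the row), but the path itself is absent.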
        """
        self._read_dirblocks_if_needed()
        key = dirname, basename, ''
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # no such directory - return the dir index and 0 for the row.
            return block_index, 0, False, False
        block = self._dirblocks[block_index][1] # access the entries only
        entry_index, present = self._find_entry_index(key, block)
        # linear search through present entries at this path to find the one
        # requested.
        while entry_index < len(block) and block[entry_index][0][1] == basename:
            if block[entry_index][1][tree_index][0] not in \
                   ('a', 'r'): # absent, relocated
                return block_index, entry_index, True, True
            entry_index += 1
        return block_index, entry_index, True, False
1158
"""Get the dirstate entry for path in tree tree_index
1160
If either file_id or path is supplied, it is used as the key to lookup.
1161
If both are supplied, the fastest lookup is used, and an error is
1162
raised if they do not both point at the same row.
1164
:param tree_index: The index of the tree we wish to locate this path
1165
in. If the path is present in that tree, the entry containing its
1166
details is returned, otherwise (None, None) is returned
1167
:param fileid_utf8: A utf8 file_id to look up.
1168
:param path_utf8: An utf8 path to be looked up.
1169
:return: The dirstate entry tuple for path, or (None, None)
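
        For illustration (hypothetical id and path)::

            entry = state._get_entry(0, path_utf8='dir/file')
            # entry is (('dir', 'file', 'file-id'), [tree_details, ...]),
            # or (None, None) if 'dir/file' is not present in tree 0.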
        """
        self._read_dirblocks_if_needed()
        if path_utf8 is not None:
            assert path_utf8.__class__ == str, 'path_utf8 is not a str: %s %s' % (type(path_utf8), path_utf8)
            # path lookups are faster
            dirname, basename = osutils.split(path_utf8)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, tree_index)
            if not file_present:
                return None, None
            entry = self._dirblocks[block_index][1][entry_index]
            assert entry[0][2] and entry[1][tree_index][0] not in ('a', 'r'), 'unversioned entry?!?!'
            if fileid_utf8:
                if entry[0][2] != fileid_utf8:
                    raise errors.BzrError('integrity error ? : mismatching'
                                          ' tree_index, file_id and path')
            return entry
        else:
            assert fileid_utf8 is not None
            possible_keys = self._get_id_index().get(fileid_utf8, None)
            if not possible_keys:
                return None, None
            for key in possible_keys:
                (block_index, entry_index, dir_present,
                 file_present) = self._get_block_entry_index(key[0], key[1],
                                                             tree_index)
                if file_present:
                    entry = self._dirblocks[block_index][1][entry_index]
                    # _get_block_entry_index only returns entries that are not
                    # absent in the current tree. _get_id_index will return
                    # both locations for a renamed file. It is possible that a
                    # new file was added at the same location that the old file
                    # was renamed away. So _get_block_entry_index will actually
                    # match the new file, skipping the fact that the real entry
                    # we want is the rename. By just continuing here, we should
                    # find the record at the target location, because
                    # _get_id_index should return all locations.
                    if entry[0][2] != fileid_utf8:
                        continue
                    assert entry[1][tree_index][0] not in ('a', 'r')
                    assert key == entry[0], ('We were told that %s would be at'
                        ' %s, %s, but we found %s' % (key, block_index,
                        entry_index, entry))
                    return entry
            return None, None

    def initialize(path):
        """Create a new dirstate on path.

        The new dirstate will be an empty tree - that is it has no parents,
        and only a root node - which has id ROOT_ID.

        The object will be write locked when returned to the caller,
        unless there was an exception in the writing, in which case it
        will be unlocked.

        :param path: The name of the file for the dirstate.
        :return: A DirState object.
        """
        # This constructs a new DirState object on a path, sets the _state_file
        # to a new empty file for that path. It then calls _set_data() with our
        # stock empty dirstate information - a root with ROOT_ID, no children,
        # and no parents. Finally it calls save() to ensure that this data will
        # persist.
        result = DirState(path)
        # root dir and root dir contents with no children.
        empty_tree_dirblocks = [('', []), ('', [])]
        # a new root directory, with a NULLSTAT.
        empty_tree_dirblocks[0][1].append(
            (('', '', inventory.ROOT_ID), [
                ('d', '', 0, False, DirState.NULLSTAT),
            ]))
        result.lock_write()
        try:
            result._set_data([], empty_tree_dirblocks)
            result.save()
        except:
            result.unlock()
            raise
        return result

    def _inv_entry_to_details(self, inv_entry):
        """Convert an inventory entry (from a revision tree) to state details.

        :param inv_entry: An inventory entry whose sha1 and link targets can be
            relied upon, and which has a revision set.
        :return: A details tuple - the details for a single tree at a path +
            id.
        """
        kind = inv_entry.kind
        minikind = DirState._kind_to_minikind[kind]
        tree_data = inv_entry.revision
        assert len(tree_data) > 0, 'empty revision for the inv_entry.'
        if kind == 'directory':
            fingerprint = ''
            size = 0
            executable = False
        elif kind == 'symlink':
            fingerprint = inv_entry.symlink_target or ''
            size = 0
            executable = False
        elif kind == 'file':
            fingerprint = inv_entry.text_sha1 or ''
            size = inv_entry.text_size or 0
            executable = inv_entry.executable
        return (minikind, fingerprint, size, executable, tree_data)

    def _iter_entries(self):
        """Iterate over all the entries in the dirstate.

        Each yielded item is an entry in the standard format described in the
        docstring of bzrlib.dirstate.
        """
        self._read_dirblocks_if_needed()
        for directory in self._dirblocks:
            for entry in directory[1]:
                yield entry

    def _get_id_index(self):
        """Get an id index of self._dirblocks."""
        if self._id_index is None:
            id_index = {}
            for key, tree_details in self._iter_entries():
                id_index.setdefault(key[2], set()).add(key)
            self._id_index = id_index
        return self._id_index

    def _get_output_lines(self, lines):
        """Format lines for final output.

        :param lines: A sequence of lines containing the parents list and the
            path lines.
        """
        output_lines = [DirState.HEADER_FORMAT_3]
        lines.append('') # a final newline
        inventory_text = '\0\n\0'.join(lines)
        output_lines.append('adler32: %s\n' % (zlib.adler32(inventory_text),))
        # -3, 1 for num parents, 1 for ghosts, 1 for final newline
        num_entries = len(lines)-3
        output_lines.append('num_entries: %s\n' % (num_entries,))
        output_lines.append(inventory_text)
        return output_lines

    def _make_deleted_row(self, fileid_utf8, parents):
        """Return a deleted row for fileid_utf8."""
        return ('/', 'RECYCLED.BIN', 'file', fileid_utf8, 0, DirState.NULLSTAT,

    def _num_present_parents(self):
        """The number of parent entries in each record row."""
        return len(self._parents) - len(self._ghosts)

    def on_file(path):
        """Construct a DirState on the file at path path.

        :return: An unlocked DirState object, associated with the given path.
        """
        result = DirState(path)
        return result

    def _read_dirblocks_if_needed(self):
        """Read in all the dirblocks from the file if they are not in memory.

        This populates self._dirblocks, and sets self._dirblock_state to
        IN_MEMORY_UNMODIFIED. It is not currently ready for incremental block
        loading.
        """
        self._read_header_if_needed()
        if self._dirblock_state == DirState.NOT_IN_MEMORY:
            # move the _state_file pointer to after the header (in case bisect
            # has been called in the mean time)
            self._state_file.seek(self._end_of_header)
            text = self._state_file.read()
            # TODO: check the adler checksums. adler_measured = zlib.adler32(text)

            fields = text.split('\0')
            # Remove the last blank entry
            trailing = fields.pop()
            assert trailing == ''
            # consider turning fields into a tuple.

            # skip the first field which is the trailing null from the header.
            cur = 1
            # Each line now has an extra '\n' field which is not used
            # so we just skip over it
            # entry size:
            #  3 fields for the key
            #  + number of fields per tree_data (5) * tree count
            #  + 1 for the newline
            num_present_parents = self._num_present_parents()
            tree_count = 1 + num_present_parents
            entry_size = self._fields_per_entry()
            expected_field_count = entry_size * self._num_entries
            if len(fields) - cur > expected_field_count:
                fields = fields[:expected_field_count + cur]
                trace.mutter('Unexpectedly long dirstate field count!')
                print "XXX: incorrectly truncated dirstate file bug triggered."
            field_count = len(fields)
            # this checks our adjustment, and also catches file too short.
            assert field_count - cur == expected_field_count, \
                'field count incorrect %s != %s, entry_size=%s, '\
                'num_entries=%s fields=%r' % (
                field_count - cur, expected_field_count, entry_size,
                self._num_entries, fields)

            if num_present_parents == 1:
                # Bind external functions to local names
                _int = int
                # We access all fields in order, so we can just iterate over
                # them. Grab a straight iterator over the fields. (We use an
                # iterator because we don't want to do a lot of additions, nor
                # do we want to do a lot of slicing)
                next = iter(fields).next
                # Move the iterator to the current position
                for x in xrange(cur):
                    next()
                # The two blocks here are deliberate: the root block and the
                # contents-of-root block.
                self._dirblocks = [('', []), ('', [])]
                current_block = self._dirblocks[0][1]
                current_dirname = ''
                append_entry = current_block.append
                for count in xrange(self._num_entries):
                    dirname = next()
                    name = next()
                    file_id = next()
                    if dirname != current_dirname:
                        # new block - different dirname
                        current_block = []
                        current_dirname = dirname
                        self._dirblocks.append((current_dirname, current_block))
                        append_entry = current_block.append
                    # we know current_dirname == dirname, so re-use it to avoid
                    # creating new strings
                    entry = ((current_dirname, name, file_id),
                        [( # Current Tree
                            next(),        # minikind
                            next(),        # fingerprint
                            _int(next()),  # size
                            next() == 'y', # executable
                            next(),        # packed_stat or revision_id
                        ),
                        ( # Parent 1
                            next(),        # minikind
                            next(),        # fingerprint
                            _int(next()),  # size
                            next() == 'y', # executable
                            next(),        # packed_stat or revision_id
                        ),
                        ])
                    trailing = next()
                    assert trailing == '\n'
                    # append the entry to the current block
                    append_entry(entry)
                self._split_root_dirblock_into_contents()
            else:
                fields_to_entry = self._get_fields_to_entry()
                entries = [fields_to_entry(fields[pos:pos+entry_size])
                           for pos in xrange(cur, field_count, entry_size)]
                self._entries_to_current_state(entries)
            # To convert from format 2 => format 3
            # self._dirblocks = sorted(self._dirblocks,
            #                          key=lambda blk:blk[0].split('/'))
            # To convert from format 3 => format 2
            # self._dirblocks = sorted(self._dirblocks)
            self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED

    def _read_header(self):
        """This reads in the metadata header, and the parent ids.

        After reading in, the file should be positioned at the null
        just before the start of the first record in the file.

        :return: (expected adler checksum, number of entries, parent list)
        """
        self._read_prelude()
        parent_line = self._state_file.readline()
        info = parent_line.split('\0')
        num_parents = int(info[0])
        assert num_parents == len(info)-2, 'incorrect parent info line'
        self._parents = info[1:-1]

        ghost_line = self._state_file.readline()
        info = ghost_line.split('\0')
        num_ghosts = int(info[1])
        assert num_ghosts == len(info)-3, 'incorrect ghost info line'
        self._ghosts = info[2:-1]
        self._header_state = DirState.IN_MEMORY_UNMODIFIED
        self._end_of_header = self._state_file.tell()

    def _read_header_if_needed(self):
        """Read the header of the dirstate file if needed."""
        # inline this as it will be called a lot
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)
        if self._header_state == DirState.NOT_IN_MEMORY:
            self._read_header()

    def _read_prelude(self):
        """Read in the prelude header of the dirstate file.

        This only reads in the stuff that is not connected to the adler
        checksum. The position will be correct to read in the rest of
        the file and check the checksum after this point.
        The next entry in the file should be the number of parents,
        and their ids. Followed by a newline.
        """
        header = self._state_file.readline()
        assert header == DirState.HEADER_FORMAT_3, \
            'invalid header line: %r' % (header,)
        adler_line = self._state_file.readline()
        assert adler_line.startswith('adler32: '), 'missing adler32 checksum'
        self.adler_expected = int(adler_line[len('adler32: '):-1])
        num_entries_line = self._state_file.readline()
        assert num_entries_line.startswith('num_entries: '), 'missing num_entries line'
        self._num_entries = int(num_entries_line[len('num_entries: '):-1])

    def save(self):
        """Save any pending changes created during this session.

        We reuse the existing file, because that prevents race conditions with
        file creation, and we expect to be using oslocks on it in the near
        future to prevent concurrent modification and reads - because dirstate's
        incremental data aggregation is not compatible with reading a modified
        file, and replacing a file in use by another process is impossible on
        Windows.

        A dirstate in read only mode should be smart enough though to validate
        that the file has not changed, and otherwise discard its cache and
        start over, to allow for fine grained read lock duration, so 'status'
        won't block 'commit' - for example.
        """
        if (self._header_state == DirState.IN_MEMORY_MODIFIED or
            self._dirblock_state == DirState.IN_MEMORY_MODIFIED):
            self._state_file.seek(0)
            self._state_file.writelines(self.get_lines())
            self._state_file.truncate()
            self._state_file.flush()
            self._header_state = DirState.IN_MEMORY_UNMODIFIED
            self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED

    def _set_data(self, parent_ids, dirblocks):
        """Set the full dirstate data in memory.

        This is an internal function used to completely replace the objects
        in memory state. It puts the dirstate into state 'full-dirty'.

        :param parent_ids: A list of parent tree revision ids.
        :param dirblocks: A list containing one tuple for each directory in the
            tree. Each tuple contains the directory path and a list of entries
            found in that directory.
        """
        # our memory copy is now authoritative.
        self._dirblocks = dirblocks
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._parents = list(parent_ids)
        self._id_index = None

    def set_path_id(self, path, new_id):
        """Change the id of path to new_id in the current working tree.

        :param path: The path inside the tree to set - '' is the root, 'foo'
            is the path foo in the root.
        :param new_id: The new id to assign to the path. This must be a utf8
            file id (not unicode, and not None).
        """
        # TODO: start warning here.
        assert new_id.__class__ == str
        self._read_dirblocks_if_needed()
        if len(path):
            import pdb;pdb.set_trace()
            raise NotImplementedError(self.set_path_id)
        # TODO: check new id is unique
        entry = self._get_entry(0, path_utf8=path)
        if entry[0][2] == new_id:
            # Nothing to change.
            return
        # mark the old path absent, and insert a new root path
        self._make_absent(entry)
        self.update_minimal(('', '', new_id), 'd',
            path_utf8='', packed_stat=entry[1][0][4])
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index is not None:
            self._id_index.setdefault(new_id, set()).add(entry[0])

    def set_parent_trees(self, trees, ghosts):
        """Set the parent trees for the dirstate.

        :param trees: A list of revision_id, tree tuples. tree must be provided
            even if the revision_id refers to a ghost: supply an empty tree in
            this case.
        :param ghosts: A list of the revision_ids that are ghosts at the time
            of setting.
        """
        # TODO: generate a list of parent indexes to preserve to save
        # processing specific parent trees. In the common case one tree will
        # be preserved - the left most parent.
        # TODO: if the parent tree is a dirstate, we might want to walk them
        # all by path in parallel for 'optimal' common-case performance.
        # generate new root row.
        self._read_dirblocks_if_needed()
1578
# map and then walk the new parent trees only, mapping them into the
1579
# dirstate. Walk the dirstate at the same time to remove unreferenced
1582
# sketch: loop over all entries in the dirstate, cherry picking
1583
# entries from the parent trees, if they are not ghost trees.
1584
# after we finish walking the dirstate, all entries not in the dirstate
1585
# are deletes, so we want to append them to the end as per the design
1586
# discussions. So do a set difference on ids with the parents to
1587
# get deletes, and add them to the end.
1588
# During the update process we need to answer the following questions:
1589
# - find other keys containing a fileid in order to create cross-path
1590
# links. We dont't trivially use the inventory from other trees
1591
# because this leads to either double touching, or to accessing
1593
# - find other keys containing a path
1594
# We accumulate each entry via this dictionary, including the root
1597
# we could do parallel iterators, but because file id data may be
1598
# scattered throughout, we dont save on index overhead: we have to look
1599
# at everything anyway. We can probably save cycles by reusing parent
1600
# data and doing an incremental update when adding an additional
1601
# parent, but for now the common cases are adding a new parent (merge),
1602
# and replacing completely (commit), and commit is more common: so
1603
# optimise merge later.

        # ---- start generation of full tree mapping data
        # what trees should we use?
        parent_trees = [tree for rev_id, tree in trees if rev_id not in ghosts]
        # how many trees do we end up with
        parent_count = len(parent_trees)

        # one: the current tree
        for entry in self._iter_entries():
            # skip entries not in the current tree
            if entry[1][0][0] in ('a', 'r'): # absent, relocated
                continue
            by_path[entry[0]] = [entry[1][0]] + \
                [DirState.NULL_PARENT_DETAILS] * parent_count
            id_index[entry[0][2]] = set([entry[0]])

        # now the parent trees:
        for tree_index, tree in enumerate(parent_trees):
            # the index is off by one, adjust it.
            tree_index = tree_index + 1
            # when we add new locations for a fileid we need these ranges for
            # any fileid in this tree as we set the by_path[id] to:
            # already_processed_tree_details + new_details + new_location_suffix
            # the suffix is from tree_index+1:parent_count+1.
            new_location_suffix = [DirState.NULL_PARENT_DETAILS] * (parent_count - tree_index)
            # now stitch in all the entries from this tree
            for path, entry in tree.inventory.iter_entries_by_dir():
                # here we process each tree's details for each item in the
                # tree. We first update any existing entries for the id at
                # other paths, then we either create or update the entry for
                # the id at the right path, and finally we add (if needed) a
                # mapping from file_id to this path. We do it in this order to
                # allow us to avoid checking all known paths for the id when
                # generating a new entry at this path: by adding the id->path
                # mapping last, all the mappings are valid and have correct
                # relocation records where needed.
                file_id = entry.file_id
                path_utf8 = path.encode('utf8')
                dirname, basename = osutils.split(path_utf8)
                new_entry_key = (dirname, basename, file_id)
                # tree index consistency: All other paths for this id in this
                # tree index must point to the correct path.
                for entry_key in id_index.setdefault(file_id, set()):
                    # TODO:PROFILING: It might be faster to just update
                    # rather than checking if we need to, and then overwrite
                    # the one we are located at.
                    if entry_key != new_entry_key:
                        # this file id is at a different path in one of the
                        # other trees, so put absent pointers there
                        # This is the vertical axis in the matrix, all pointing
                        # to the real path.
                        by_path[entry_key][tree_index] = ('r', path_utf8, 0, False, '')
                # by path consistency: Insert into an existing path record
                # (trivial), or add a new one with relocation pointers for the
                # other tree indexes.
                if new_entry_key in id_index[file_id]:
                    # there is already an entry where this data belongs, just
                    # insert it.
                    by_path[new_entry_key][tree_index] = \
                        self._inv_entry_to_details(entry)
                else:
                    # add relocated entries to the horizontal axis - this row
                    # mapping from path,id. We need to look up the correct path
                    # for the indexes from 0 to tree_index - 1
                    new_details = []
                    for lookup_index in xrange(tree_index):
                        # boundary case: this is the first occurrence of
                        # file_id, so there are no entries for it in id_index;
                        # possibly take this out of the loop?
                        if not len(id_index[file_id]):
                            new_details.append(DirState.NULL_PARENT_DETAILS)
                        else:
                            # grab any one entry, use it to find the right path.
                            # TODO: optimise this to reduce memory use in highly
                            # fragmented situations by reusing the relocation
                            # records.
                            a_key = iter(id_index[file_id]).next()
                            if by_path[a_key][lookup_index][0] in ('r', 'a'):
                                # it's a pointer or missing statement, use it
                                # as is.
                                new_details.append(by_path[a_key][lookup_index])
                            else:
                                # we have the right key, make a pointer to it.
                                real_path = ('/'.join(a_key[0:2])).strip('/')
                                new_details.append(('r', real_path, 0, False, ''))
                    new_details.append(self._inv_entry_to_details(entry))
                    new_details.extend(new_location_suffix)
                    by_path[new_entry_key] = new_details
                    id_index[file_id].add(new_entry_key)
        # --- end generation of full tree mappings
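
        # Editor's sketch (hypothetical values) of the structures built above:
        # by_path maps each entry key to a row of per-tree details, and
        # id_index maps each file id to every key it appears under, e.g.:
        #
        #   by_path = {
        #       ('', 'a', 'a-id'): [a_tree0_details,            # current tree
        #                           a_parent1_details,          # parent 1
        #                           ('r', 'b', 0, False, '')],  # parent 2: at 'b'
        #   }
        #   id_index = {'a-id': set([('', 'a', 'a-id'), ('', 'b', 'a-id')])}
        #
        # 'r' rows point along the horizontal axis to the path that really
        # holds the id in that tree.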

        # sort and output all the entries
        new_entries = sorted(by_path.items(),
                        key=lambda entry:(entry[0][0].split('/'), entry[0][1]))
        self._entries_to_current_state(new_entries)
        self._parents = [rev_id for rev_id, tree in trees]
        self._ghosts = list(ghosts)
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = id_index

    def set_state_from_inventory(self, new_inv):
        """Set new_inv as the current state.

        This API is called by tree transform, and will usually occur with
        existing parent trees.

        :param new_inv: The inventory object to set current state from.
        """
        self._read_dirblocks_if_needed()
        # incremental algorithm:
        # two iterators: current data and new data, both in dirblock order.
        new_iterator = new_inv.iter_entries_by_dir()
        # we will be modifying the dirstate, so we need a stable iterator. In
        # future we might write one, for now we just clone the state into a
        # list - which is a shallow copy.
        old_iterator = iter(list(self._iter_entries()))
        # both must have roots so this is safe:
        current_new = new_iterator.next()
        current_old = old_iterator.next()
        def advance(iterator):
            try:
                return iterator.next()
            except StopIteration:
                return None
        while current_new or current_old:
            # skip entries in old that are not really there
            if current_old and current_old[1][0][0] in ('r', 'a'):
                # relocated or absent
                current_old = advance(old_iterator)
                continue
            if current_new:
                # convert new into dirblock style
                new_path_utf8 = current_new[0].encode('utf8')
                new_dirname, new_basename = osutils.split(new_path_utf8)
                new_id = current_new[1].file_id
                current_new_minikind = \
                    DirState._kind_to_minikind[current_new[1].kind]
                new_entry_key = (new_dirname, new_basename, new_id)
            else:
                # for safety disable variables
                new_path_utf8 = new_dirname = new_basename = new_id = new_entry_key = None
            # 5 cases, we don't have a value that is strictly greater than
            # everything, so we make both end conditions explicit
            if not current_old:
                # old is finished: insert current_new into the state.
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8)
                current_new = advance(new_iterator)
            elif not current_new:
                # new is finished: remove current_old from the state.
                self._make_absent(current_old)
                current_old = advance(old_iterator)
            elif new_entry_key == current_old[0]:
                # same - common case
                # TODO: update the record if anything significant has changed.
                # the minimal required trigger is if the execute bit or cached
                # kind has changed.
                if (current_old[1][0][3] != current_new[1].executable or
                    current_old[1][0][0] != current_new_minikind):
                    self.update_minimal(current_old[0], current_new_minikind,
                        executable=current_new[1].executable,
                        path_utf8=new_path_utf8)
                # both sides are dealt with, move on
                current_old = advance(old_iterator)
                current_new = advance(new_iterator)
            elif new_entry_key < current_old[0]:
                # new comes before the current old entry:
                # add an entry for this and advance new
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8)
                current_new = advance(new_iterator)
            else:
                # old comes before new: remove current_old from the state.
                self._make_absent(current_old)
                current_old = advance(old_iterator)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None
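
    # Editor's summary of the five merge-walk cases handled above:
    #   1. old exhausted        -> insert current_new
    #   2. new exhausted        -> make current_old absent
    #   3. keys equal           -> update in place if kind/executable changed
    #   4. new key sorts first  -> insert current_new
    #   5. old key sorts first  -> make current_old absent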

    def _make_absent(self, current_old):
        """Mark current_old - an entry - as absent for tree 0.

        :return: True if this was the last details entry for the entry key:
            that is, if the underlying block has had the entry removed, thus
            shrinking in length.
        """
        # build up paths that this id will be left at after the change is made,
        # so we can update their cross references in tree 0
        all_remaining_keys = set()
        # Don't check the working tree, because it's going.
        for details in current_old[1][1:]:
            if details[0] not in ('a', 'r'): # absent, relocated
                all_remaining_keys.add(current_old[0])
            elif details[0] == 'r': # relocated
                # record the key for the real path.
                all_remaining_keys.add(tuple(osutils.split(details[1])) + (current_old[0][2],))
            # absent rows are not present at any path.
        last_reference = current_old[0] not in all_remaining_keys
        if last_reference:
            # the current row consists entirely of the current item (being
            # marked absent), and relocated or absent entries for the other
            # trees: remove it, it's meaningless.
            block = self._find_block(current_old[0])
            entry_index, present = self._find_entry_index(current_old[0], block[1])
            assert present, 'could not find entry for %s' % (current_old,)
            block[1].pop(entry_index)
            # if we have an id_index in use, remove this key from it for this
            # id.
            if self._id_index is not None:
                self._id_index[current_old[0][2]].remove(current_old[0])
        # update all remaining keys for this id to record it as absent. The
        # existing details may either be the record we are marking as deleted
        # (if there were other trees with the id present at this path), or may
        # be relocations.
        for update_key in all_remaining_keys:
            update_block_index, present = \
                self._find_block_index_from_key(update_key)
            assert present, 'could not find block for %s' % (update_key,)
            update_entry_index, present = \
                self._find_entry_index(update_key, self._dirblocks[update_block_index][1])
            assert present, 'could not find entry for %s' % (update_key,)
            update_tree_details = self._dirblocks[update_block_index][1][update_entry_index][1]
            # it must not be absent at the moment
            assert update_tree_details[0][0] != 'a' # absent
            update_tree_details[0] = DirState.NULL_PARENT_DETAILS
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return last_reference
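
    # Hedged example (hypothetical entry): if 'a-file-id' exists at 'dir/name'
    # in tree 0 and in one parent tree, _make_absent replaces the tree-0
    # details with DirState.NULL_PARENT_DETAILS, keeps the row for the
    # parent's sake, and returns False; if no other tree references the key,
    # the whole row is popped and True is returned.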

    def update_minimal(self, key, minikind, executable=False, fingerprint='',
                       packed_stat=None, size=0, path_utf8=None):
        """Update an entry to the state in tree 0.

        This will either create a new entry at 'key' or update an existing one.
        It also makes sure that any other records which might mention this are
        updated as well.

        :param key: (dir, name, file_id) for the new entry
        :param minikind: The type for the entry ('f' == 'file', 'd' ==
            'directory'), etc.
        :param executable: Should the executable bit be set?
        :param fingerprint: Simple fingerprint for new entry.
        :param packed_stat: packed stat value for new entry.
        :param size: Size information for new entry
        :param path_utf8: key[0] + '/' + key[1], just passed in to avoid doing
            the path join again.
        """
        block = self._find_block(key)[1]
        if packed_stat is None:
            packed_stat = DirState.NULLSTAT
        entry_index, present = self._find_entry_index(key, block)
        new_details = (minikind, fingerprint, size, executable, packed_stat)
        id_index = self._get_id_index()
        if not present:
            # new entry, synthesize cross references here,
            existing_keys = id_index.setdefault(key[2], set())
            if not existing_keys:
                # not currently in the state, simplest case
                new_entry = key, [new_details] + self._empty_parent_info()
            else:
                # present at one or more existing other paths.
                # grab one of them and use it to generate parent
                # relocation/absent entries.
                new_entry = key, [new_details]
                for other_key in existing_keys:
                    # change the record at other to be a pointer to this new
                    # record. The loop looks similar to the change to
                    # relocations when updating an existing record but it's
                    # not: the test for existing kinds is different: this can
                    # be factored out to a helper though.
                    other_block_index, present = self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    other_entry_index, present = self._find_entry_index(other_key,
                        self._dirblocks[other_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    assert path_utf8 is not None
                    self._dirblocks[other_block_index][1][other_entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')

                num_present_parents = self._num_present_parents()
                for lookup_index in xrange(1, num_present_parents + 1):
                    # grab any one entry, use it to find the right path.
                    # TODO: optimise this to reduce memory use in highly
                    # fragmented situations by reusing the relocation
                    # records.
                    update_block_index, present = \
                        self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    update_entry_index, present = \
                        self._find_entry_index(other_key, self._dirblocks[update_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    update_details = self._dirblocks[update_block_index][1][update_entry_index][1][lookup_index]
                    if update_details[0] in ('r', 'a'): # relocated, absent
                        # it's a pointer or absent in lookup_index's tree, use
                        # it as is.
                        new_entry[1].append(update_details)
                    else:
                        # we have the right key, make a pointer to it.
                        pointer_path = osutils.pathjoin(*other_key[0:2])
                        new_entry[1].append(('r', pointer_path, 0, False, ''))
            block.insert(entry_index, new_entry)
            existing_keys.add(key)
        else:
            # Does the new state matter?
            block[entry_index][1][0] = new_details
            # parents cannot be affected by what we do.
            # other occurrences of this id can be found
            # from the id index.
            # tree index consistency: All other paths for this id in this tree
            # index must point to the correct path. We have to loop here because
            # we may have passed entries in the state with this file id already
            # that were absent - where parent entries are - and they need to be
            # converted to relocated.
            assert path_utf8 is not None
            for entry_key in id_index.setdefault(key[2], set()):
                # TODO:PROFILING: It might be faster to just update
                # rather than checking if we need to, and then overwrite
                # the one we are located at.
                if entry_key != key:
                    # this file id is at a different path in one of the
                    # other trees, so put absent pointers there
                    # This is the vertical axis in the matrix, all pointing
                    # to the real path.
                    block_index, present = self._find_block_index_from_key(entry_key)
                    assert present
                    entry_index, present = self._find_entry_index(entry_key, self._dirblocks[block_index][1])
                    assert present
                    self._dirblocks[block_index][1][entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')
        # add a containing dirblock if needed.
        if new_details[0] == 'd':
            subdir_key = (osutils.pathjoin(*key[0:2]), '', '')
            block_index, present = self._find_block_index_from_key(subdir_key)
            if not present:
                self._dirblocks.insert(block_index, (subdir_key[0], []))

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
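
    # Hedged usage sketch (hypothetical values): adding a new file entry to
    # tree 0 looks like
    #
    #   state.update_minimal(('dir', 'name', 'file-id'), 'f',
    #       executable=False, fingerprint=sha1, size=12,
    #       path_utf8='dir/name')
    #
    # Any other key already holding 'file-id' has its tree-0 details rewritten
    # to the relocation record ('r', 'dir/name', 0, False, '').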

    def _wipe_state(self):
        """Forget all state information about the dirstate."""
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._parents = []
        self._ghosts = []
        self._dirblocks = []

    def lock_read(self):
        """Acquire a read lock on the dirstate"""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        self._lock_token = lock.ReadLock(self._filename)
        self._state_file = self._lock_token.f
        self._wipe_state()

    def lock_write(self):
        """Acquire a write lock on the dirstate"""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        self._lock_token = lock.WriteLock(self._filename)
        self._state_file = self._lock_token.f
        self._wipe_state()

    def unlock(self):
        """Drop any locks held on the dirstate"""
        if self._lock_token is None:
            raise errors.LockNotHeld(self)
        self._state_file = None
        self._lock_token.unlock()
        self._lock_token = None
        self._split_path_cache = {}
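
    # Editor's usage sketch: callers bracket access with these methods, e.g.
    #
    #   state.lock_write()
    #   try:
    #       state._set_data([], dirblocks)
    #       state.save()
    #   finally:
    #       state.unlock()
    #
    # As reconstructed above, taking a lock wipes any cached state so the
    # next access re-reads the file under the new lock.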

    def _requires_lock(self):
        """Checks that a lock is currently held by someone on the dirstate"""
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)


def bisect_dirblock(dirblocks, dirname, lo=0, hi=None, cache={}):
    """Return the index where to insert dirname into the dirblocks.

    The return value idx is such that all directory blocks in dirblock[:idx]
    have names < dirname, and all blocks in dirblock[idx:] have names >=
    dirname.

    Optional args lo (default 0) and hi (default len(dirblocks)) bound the
    slice of dirblocks to be searched.
    """
    if hi is None:
        hi = len(dirblocks)
    try:
        dirname_split = cache[dirname]
    except KeyError:
        dirname_split = dirname.split('/')
        cache[dirname] = dirname_split
    while lo < hi:
        mid = (lo + hi) // 2
        # Grab the dirname for the current dirblock
        cur = dirblocks[mid][0]
        try:
            cur_split = cache[cur]
        except KeyError:
            cur_split = cur.split('/')
            cache[cur] = cur_split
        if cur_split < dirname_split: lo = mid + 1
        else: hi = mid
    return lo
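
# Hedged example (hypothetical dirblocks): bisect_dirblock compares paths
# split on '/', so 'a-b' does not sort between 'a' and 'a/b':
#
#   blocks = [('', []), ('a', []), ('a/b', []), ('b', [])]
#   bisect_dirblock(blocks, 'a/b')   # -> 2, the index of the 'a/b' block
#   bisect_dirblock(blocks, 'a/c')   # -> 3, where an 'a/c' block would go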


def pack_stat(st, _encode=base64.encodestring, _pack=struct.pack):
    """Convert stat values into a packed representation."""
    # jam 20060614 it isn't really worth removing more entries if we
    # are going to leave it in packed form.
    # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
    # With all entries filesize is 5.9M and read time is maybe 280ms
    # well within the noise margin

    # base64.encode always adds a final newline, so strip it off
    return _encode(_pack('>llllll'
        , st.st_size, st.st_mtime, st.st_ctime
        , st.st_dev, st.st_ino, st.st_mode))[:-1]
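
# Hedged example: the packed form is the base64 encoding (trailing newline
# stripped) of six big-endian 32-bit fields - size, mtime, ctime, dev, ino
# and mode - so 24 bytes become a 32-character string:
#
#   import os
#   packed = pack_stat(os.stat('.'))
#   assert len(packed) == 32
#
# This string is what the dirstate stores in each entry's packed_stat slot.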