# Copyright (C) 2006, 2007 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

"""DirState objects record the state of a directory and its bzr metadata.
19
Pseudo EBNF grammar for the state file. Fields are separated by NULLs, and
20
lines by NL. The field delimiters are ommitted in the grammar, line delimiters
21
are not - this is done for clarity of reading. All string data is in utf8.
23
MINIKIND = "f" | "d" | "l" | "a" | "r" | "t";
26
WHOLE_NUMBER = {digit}, digit;
28
REVISION_ID = a non-empty utf8 string;
30
dirstate format = header line, full checksum, row count, parent details,
31
ghost_details, entries;
32
header line = "#bazaar dirstate flat format 2", NL;
33
full checksum = "crc32: ", ["-"], WHOLE_NUMBER, NL;
34
row count = "num_entries: ", digit, NL;
35
parent_details = WHOLE NUMBER, {REVISION_ID}* NL;
36
ghost_details = WHOLE NUMBER, {REVISION_ID}*, NL;
38
entry = entry_key, current_entry_details, {parent_entry_details};
39
entry_key = dirname, basename, fileid;
40
current_entry_details = common_entry_details, working_entry_details;
41
parent_entry_details = common_entry_details, history_entry_details;
42
common_entry_details = MINIKIND, fingerprint, size, executable
43
working_entry_details = packed_stat
44
history_entry_details = REVISION_ID;
47
fingerprint = a nonempty utf8 sequence with meaning defined by minikind.
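
For example, the first three lines of a format 2 state file would look like
this (checksum and entry count are illustrative values only):

    #bazaar dirstate flat format 2
    crc32: 1486735890
    num_entries: 14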

Given this definition, the following is useful to know:
entry (aka row) - all the data for a given key.
entry[0]: The key (dirname, basename, fileid)
entry[0][0]: dirname
entry[0][1]: basename
entry[0][2]: fileid
entry[1]: The tree(s) data for this path and id combination.
entry[1][0]: The current tree
entry[1][1]: The second tree

For an entry for a tree, we have (using tree 0 - current tree) to demonstrate:
entry[1][0][0]: minikind
entry[1][0][1]: fingerprint
entry[1][0][2]: size
entry[1][0][3]: executable
entry[1][0][4]: packed_stat
OR (for a parent tree, in place of packed_stat):
entry[1][1][4]: revision_id
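
For example, a newly added file 'foo' in the root, with no parent trees,
might look like this in memory (illustrative values only):

    (('', 'foo', 'foo-file-id'),
     [('f', '', 0, False, 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')])

where the final field is DirState.NULLSTAT, the placeholder packed_stat.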

There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now:
self._dirblocks[0] -> ('', [entry ...]),
and the entries in there are
entries[0][0]: ''
entries[0][1]: ''
entries[0][2]: file_id
entries[1][0]: The tree data for the current tree for this fileid at /
etc.

Kinds:
'r' is a relocated entry: This path is not present in this tree with this id,
    but the id can be found at another location. The fingerprint is used to
    point to the target location.
'a' is an absent entry: In that tree the id is not present at this path.
'd' is a directory entry: This path in this tree is a directory with the
    current file id. There is no fingerprint for directories.
'f' is a file entry: As for directory, but it's a file. The fingerprint is
    the sha1 of the file's contents.
'l' is a symlink entry: As for directory, but a symlink. The fingerprint is
    the link target.
't' is a reference to a nested subtree; the fingerprint is the referenced
    revision.

The entries on disk and in memory are ordered according to the following keys:

    directory, as a list of components
    filename
    file-id

--- Format 1 had the following different definition: ---
rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
    {PARENT ROW}
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE NUMBER (* size *), NULL, "y" | "n", NULL,
    SHA1

PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.

In any tree, a kind of 'moved' indicates that the fingerprint field
(which we treat as opaque data specific to the 'kind' anyway) has the
details for the id of this row in that tree.

I'm strongly tempted to add an id->path index as well, but I think that
where we need id->path mapping, we also usually read the whole file, so
I'm going to skip that for the moment, as we have the ability to locate
via bisect any path in any tree, and if we lookup things by path, we can
accumulate an id->path mapping as we go, which will tend to match what we
asked for anyway.

I plan to implement this asap, so please speak up now to alter/tweak the
design - and once we stabilise on this, I'll update the wiki page for it.

The rationale for all this is that we want fast operations for the
common case (diff/status/commit/merge on all files) and extremely fast
operations for the less common but still frequent case (status/diff/commit
on specific files). Operations on specific files involve a scan for all
the children of a path, *in every involved tree*, which the current
format did not accommodate.

Design priorities:
1) Fast end to end use for bzr's top 5 use cases. (commit/diff/status/merge/???)
2) fall back to the current object model as needed.
3) scale usably to the largest trees known today - say 50K entries. (mozilla
   is an example of this)

Locking:

Eventually reuse dirstate objects across locks IFF the dirstate file has not
been modified, but will require that we flush/ignore cached stat-hit data
because we won't want to restat all files on disk just because a lock was
acquired, yet we cannot trust the data after the previous lock was released.

Memory representation:
vector of all directories, and vector of the children?
i.e.
    root_entrie = (direntry for root, [parent_direntries_for_root]),
    dirblocks = [
    ('', ['data for achild', 'data for bchild', 'data for cchild'])
    ('dir', ['achild', 'cchild', 'echild'])
    ]
- single bisect to find N subtrees from a path spec
- in-order for serialisation - this is 'dirblock' grouping.
- insertion of a file '/a' affects only the '/' child-vector, that is, to
  insert 10K elements from scratch does not generate O(N^2) memmoves of a
  single vector; rather, each move is confined to an individual child-vector,
  which tends to be of a manageable size. Will scale badly on trees with 10K
  entries in a single directory. Compare with Inventory.InventoryDirectory
  which has a dictionary for the children. No bisect capability, can only
  probe for exact matches, or grab all elements and sort.
- What's the risk of error here? Once we have the base format being processed
  we should have a net win regardless of optimality. So we are going to
  go with what seems reasonable.

Maybe we should do a test profile of these core structures - 10K simulated
searches/lookups/etc?
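
A minimal illustration of the bisect-over-dirblocks idea (a standalone
sketch, not the module's real bisect_dirblock helper):

    import bisect
    dirblocks = [('', []), ('', []), ('a', []), ('a/b', [])]
    names = [block[0] for block in dirblocks]
    # skip index 0, the root block, just as the real lookup does with lo=1
    index = bisect.bisect_left(names, 'a/b', 1)
    assert dirblocks[index][0] == 'a/b'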

Objects for each row?
The lifetime of Dirstate objects is currently per lock, but see above for
possible extensions. The lifetime of a row from a dirstate is expected to be
very short in the optimistic case: which we are optimising for. For instance,
subtree status will determine from analysis of the disk data what rows need to
be examined at all, and will be able to determine from a single row whether
that file has altered or not, so we are aiming to process tens of thousands of
entries each second within the dirstate context, before exposing anything to
the larger codebase. This suggests we want the time for a single file
comparison to be < 0.1 milliseconds. That would give us 10000 paths per second
processed, and to scale to 100 thousand we'll need another order of magnitude
of improvement. Now, as the lifetime for all unchanged entries is the time to
parse, stat the file on disk, and then immediately discard, the overhead of
object creation becomes a significant cost.

Figures: Creating a tuple from 3 elements was profiled at 0.0625
microseconds, whereas creating an object which is subclassed from tuple was
0.500 microseconds, and creating an object with 3 elements and slots was 3
microseconds. 0.1 milliseconds is 100 microseconds, and ideally we'll get
down to 10 microseconds for the total processing - having 33% of that be object
creation is a huge overhead. There is a potential cost in using tuples within
each row which is that the conditional code to do comparisons may be slower
than method invocation, but method invocation is known to be slow due to stack
frame creation, so avoiding methods in these tight inner loops is unfortunately
desirable. We can consider a pyrex version of this with objects in future if
needed.
"""

import binascii
import bisect
import os
import struct
import sys
import time
import zlib
from stat import S_IEXEC, S_IFDIR, S_IFLNK, S_IFREG

from bzrlib import (
    errors,
    inventory,
    osutils,
    )


class _Bisector(object):
    """This just keeps track of information as we are bisecting."""


def pack_stat(st, _encode=binascii.b2a_base64, _pack=struct.pack):
    """Convert stat values into a packed representation."""
    # jam 20060614 it isn't really worth removing more entries if we
    # are going to leave it in packed form.
    # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
    # With all entries filesize is 5.9M and read time is maybe 280ms
    # well within the noise margin

    # base64 encoding always adds a final newline, so strip it off
    # The current version
    return _encode(_pack('>LLLLLL'
        , st.st_size, int(st.st_mtime), int(st.st_ctime)
        , st.st_dev, st.st_ino & 0xFFFFFFFF, st.st_mode))[:-1]
    # This is 0.060s / 1.520s faster by not encoding as much information
    # return _encode(_pack('>LL', int(st.st_mtime), st.st_mode))[:-1]
    # This is not strictly faster than _encode(_pack())[:-1]
    # return '%X.%X.%X.%X.%X.%X' % (
    #      st.st_size, int(st.st_mtime), int(st.st_ctime),
    #      st.st_dev, st.st_ino, st.st_mode)
    # Similar to the _encode(_pack('>LL'))
    # return '%X.%X' % (int(st.st_mtime), st.st_mode)
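
# Illustrative use of pack_stat (the exact output depends on the stat values):
#   st = os.lstat('some-file')
#   key = pack_stat(st)      # 32 bytes of base64 text (6 * 4 bytes packed,
#                            # then base64, with the trailing newline stripped)
#   assert pack_stat(os.lstat('some-file')) == key  # unchanged -> same key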


class DirState(object):
    """Record directory and metadata state for fast access.

    A dirstate is a specialised data structure for managing local working
    tree state information. It's not yet well defined whether it is platform
    specific, and if it is how we detect/parameterise that.

    Dirstates use the usual lock_write, lock_read and unlock mechanisms.
    Unlike most bzr disk formats, DirStates must be locked for reading, using
    lock_read. (This is an os file lock internally.) This is necessary
    because the file can be rewritten in place.

    DirStates must be explicitly written with save() to commit changes; just
    unlocking them does not write the changes to disk.
    """

    _kind_to_minikind = {
            'absent': 'a',
            'file': 'f',
            'directory': 'd',
            'relocated': 'r',
            'symlink': 'l',
            'tree-reference': 't',
        }
    _minikind_to_kind = {
            'a': 'absent',
            'f': 'file',
            'd': 'directory',
            'l': 'symlink',
            'r': 'relocated',
            't': 'tree-reference',
        }
    _stat_to_minikind = {
        S_IFDIR: 'd',
        S_IFREG: 'f',
        S_IFLNK: 'l',
        }
    _to_yesno = {True: 'y', False: 'n'} # TODO profile the performance gain
    # of using int conversion rather than a dict here. AND BLAME ANDREW IF
    # THIS IS SLOWER

    # TODO: jam 20070221 Figure out what to do if we have a record that exceeds
    #       the BISECT_PAGE_SIZE. For now, we just have to make it large enough
    #       that we are sure a single record will always fit.
    BISECT_PAGE_SIZE = 4096

    NOT_IN_MEMORY = 0
    IN_MEMORY_UNMODIFIED = 1
    IN_MEMORY_MODIFIED = 2

    # A pack_stat (the x's) that is just noise and will never match the output
    # of base64 encode.
    NULLSTAT = 'x' * 32
    NULL_PARENT_DETAILS = ('a', '', 0, False, '')

    HEADER_FORMAT_2 = '#bazaar dirstate flat format 2\n'
    HEADER_FORMAT_3 = '#bazaar dirstate flat format 3\n'

    def __init__(self, path):
        """Create a DirState object.

        :attr _root_entrie: The root row of the directory/file information,
            - contains the path to / - '', ''
            - kind of 'directory',
            - the file id of the root in utf8
            - size of 0
            - a packed state
            - and no sha information.
        :param path: The path at which the dirstate file on disk should live.
        """
        # _header_state and _dirblock_state represent the current state
        # of the dirstate metadata and the per-row data respectively.
        # NOT_IN_MEMORY indicates that no data is in memory
        # IN_MEMORY_UNMODIFIED indicates that what we have in memory
        #   is the same as is on disk
        # IN_MEMORY_MODIFIED indicates that we have a modified version
        #   of what is on disk.
        # In future we will add more granularity, for instance _dirblock_state
        # will probably support partially-in-memory as a separate variable,
        # allowing for partially-in-memory unmodified and partially-in-memory
        # modified states.
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._dirblocks = []
        self._ghosts = []
        self._parents = []
        self._state_file = None
        self._filename = path
        self._lock_token = None
        self._lock_state = None
        self._id_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}
        self._bisect_page_size = DirState.BISECT_PAGE_SIZE

    def __repr__(self):
        return "%s(%r)" % \
            (self.__class__.__name__, self._filename)

    def add(self, path, file_id, kind, stat, fingerprint):
        """Add a path to be tracked.

        :param path: The path within the dirstate - '' is the root, 'foo' is the
            path foo within the root, 'foo/bar' is the path bar within foo
            within the root.
        :param file_id: The file id of the path being added.
        :param kind: The kind of the path, as a string like 'file',
            'directory', etc.
        :param stat: The output of os.lstat for the path.
        :param fingerprint: The sha value of the file,
            or the target of a symlink,
            or the referenced revision id for tree-references,
            or '' for directories.
        """
        # adding a file:
        # find the block it's in.
        # find the location in the block.
        # check it's not there
        # add it.
        #------- copied from inventory.make_entry
        # --- normalized_filename wants a unicode basename only, so get one.
        dirname, basename = osutils.split(path)
        # we dont import normalized_filename directly because we want to be
        # able to change the implementation at runtime for tests.
        norm_name, can_access = osutils.normalized_filename(basename)
        if norm_name != basename:
            if can_access:
                basename = norm_name
            else:
                raise errors.InvalidNormalization(path)
        # you should never have files called . or ..; just add the directory
        # in the parent, or according to the special treatment for the root
        if basename == '.' or basename == '..':
            raise errors.InvalidEntryName(path)
        # now that we've normalised, we need the correct utf8 path and
        # dirname and basename elements. This single encode and split should be
        # faster than three separate encodes.
        utf8path = (dirname + '/' + basename).strip('/').encode('utf8')
        dirname, basename = osutils.split(utf8path)
        assert file_id.__class__ == str, \
            "must be a utf8 file_id not %s" % (type(file_id))
        # Make sure the file_id does not exist in this tree
        file_id_entry = self._get_entry(0, fileid_utf8=file_id)
        if file_id_entry != (None, None):
            path = osutils.pathjoin(file_id_entry[0][0], file_id_entry[0][1])
            kind = DirState._minikind_to_kind[file_id_entry[1][0][0]]
            info = '%s:%s' % (kind, path)
            raise errors.DuplicateFileId(file_id, info)
        first_key = (dirname, basename, '')
        block_index, present = self._find_block_index_from_key(first_key)
        if present:
            # check the path is not in the tree
            block = self._dirblocks[block_index][1]
            entry_index, _ = self._find_entry_index(first_key, block)
            while (entry_index < len(block) and
                block[entry_index][0][0:2] == first_key[0:2]):
                if block[entry_index][1][0][0] not in 'ar':
                    # this path is in the dirstate in the current tree.
                    raise Exception("adding already added path!")
                entry_index += 1
        else:
            # The block where we want to put the file is not present. But it
            # might be because the directory was empty, or not loaded yet. Look
            # for a parent entry, if not found, raise NotVersionedError
            parent_dir, parent_base = osutils.split(dirname)
            parent_block_idx, parent_entry_idx, _, parent_present = \
                self._get_block_entry_index(parent_dir, parent_base, 0)
            if not parent_present:
                raise errors.NotVersionedError(path, str(self))
            self._ensure_block(parent_block_idx, parent_entry_idx, dirname)
        block = self._dirblocks[block_index][1]
        entry_key = (dirname, basename, file_id)
        if stat is None:
            size = 0
            packed_stat = DirState.NULLSTAT
        else:
            size = stat.st_size
            packed_stat = pack_stat(stat)
        parent_info = self._empty_parent_info()
        minikind = DirState._kind_to_minikind[kind]
        if kind == 'file':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'directory':
            entry_data = entry_key, [
                (minikind, '', 0, False, packed_stat),
                ] + parent_info
        elif kind == 'symlink':
            entry_data = entry_key, [
                (minikind, fingerprint, size, False, packed_stat),
                ] + parent_info
        elif kind == 'tree-reference':
            entry_data = entry_key, [
                (minikind, fingerprint, 0, False, packed_stat),
                ] + parent_info
        else:
            raise errors.BzrError('unknown kind %r' % kind)
        entry_index, present = self._find_entry_index(entry_key, block)
        if not present:
            block.insert(entry_index, entry_data)
        else:
            assert block[entry_index][1][0][0] == 'a', \
                " %r(%r) already added" % (basename, file_id)
            block[entry_index][1][0] = entry_data[1][0]

        if kind == 'directory':
            # insert a new dirblock
            self._ensure_block(block_index, entry_index, utf8path)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index:
            self._id_index.setdefault(entry_key[2], set()).add(entry_key)
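
    # Hedged usage sketch (hypothetical path and id): given a write-locked
    # state from DirState.initialize(), a new file can be registered with:
    #   state.add('hello.txt', 'hello-file-id', 'file',
    #             os.lstat('hello.txt'), sha1_of_contents)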

    def _bisect(self, dir_name_list):
        """Bisect through the disk structure for specific rows.

        :param dir_name_list: A list of (dir, name) pairs.
        :return: A dict mapping (dir, name) => entry for found entries. Missing
            entries will not be in the map.
        """
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        assert self._dirblock_state == DirState.NOT_IN_MEMORY

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from (dir, name) => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_name_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dir_names -> The list of (dir, name) pairs that should be found in
        #                the [low, high] range
        pending = [(low, high, dir_name_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_files = pending.pop()

            if not cur_files or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_files))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 2:
                # We didn't even get a filename here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_files))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                first_dir_name = (first_fields[1], first_fields[2])
                first_loc = bisect.bisect_left(cur_files, first_dir_name)

                # These exist before the current location
                pre = cur_files[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_files[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have files after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir_name = (last_fields[1], last_fields[2])
                last_loc = bisect.bisect_right(post, last_dir_name)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir_name:
                        # We might need to go before this location
                        pre.append(first_dir_name)
                    if middle_files[-1] == last_dir_name:
                        post.insert(0, last_dir_name)

                    # Find out what paths we have
                    paths = {first_dir_name:[first_fields]}
                    # last_dir_name might == first_dir_name so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir_name, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        dir_name = (fields[1], fields[2])
                        paths.setdefault(dir_name, []).append(fields)

                    for dir_name in middle_files:
                        for fields in paths.get(dir_name, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(dir_name, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        # Consider that we may want to return the directory entries in sorted
        # order. For now, we just return them in whatever order we found them,
        # and leave it up to the caller if they care if it is ordered or not.
        return found

    def _bisect_dirblocks(self, dir_list):
        """Bisect through the disk structure to find entries in given dirs.

        _bisect_dirblocks is meant to find the contents of directories, which
        differs from _bisect, which only finds individual entries.

        :param dir_list: A sorted list of directory names ['', 'dir', 'foo'].
        :return: A map from dir => entries_for_dir
        """
        # TODO: jam 20070223 A lot of the bisecting logic could be shared
        #       between this and _bisect. It would require parameterizing the
        #       inner loop with a function, though. We should evaluate the
        #       performance difference.
        self._requires_lock()
        # We need the file pointer to be right after the initial header block
        self._read_header_if_needed()
        # If _dirblock_state was in memory, we should just return info from
        # there, this function is only meant to handle when we want to read
        # from disk.
        assert self._dirblock_state == DirState.NOT_IN_MEMORY

        # The disk representation is generally info + '\0\n\0' at the end. But
        # for bisecting, it is easier to treat this as '\0' + info + '\0\n'
        # Because it means we can sync on the '\n'
        state_file = self._state_file
        file_size = os.fstat(state_file.fileno()).st_size
        # We end up with 2 extra fields, we should have a trailing '\n' to
        # ensure that we read the whole record, and we should have a precursor
        # '' which ensures that we start after the previous '\n'
        entry_field_count = self._fields_per_entry() + 1

        low = self._end_of_header
        high = file_size - 1 # Ignore the final '\0'
        # Map from dir => entry
        found = {}

        # Avoid infinite seeking
        max_count = 30*len(dir_list)
        count = 0
        # pending is a list of places to look.
        # each entry is a tuple of low, high, dir_names
        #   low -> the first byte offset to read (inclusive)
        #   high -> the last byte offset (inclusive)
        #   dirs -> The list of directories that should be found in
        #           the [low, high] range
        pending = [(low, high, dir_list)]

        page_size = self._bisect_page_size

        fields_to_entry = self._get_fields_to_entry()

        while pending:
            low, high, cur_dirs = pending.pop()

            if not cur_dirs or low >= high:
                # Nothing to find
                continue

            count += 1
            if count > max_count:
                raise errors.BzrError('Too many seeks, most likely a bug.')

            mid = max(low, (low+high-page_size)/2)

            state_file.seek(mid)
            # limit the read size, so we don't end up reading data that we have
            # already read.
            read_size = min(page_size, (high-mid)+1)
            block = state_file.read(read_size)

            start = mid
            entries = block.split('\n')

            if len(entries) < 2:
                # We didn't find a '\n', so we cannot have found any records.
                # So put this range back and try again. But we know we have to
                # increase the page size, because a single read did not contain
                # a record break (so records must be larger than page_size)
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue

            # Check the first and last entries, in case they are partial, or if
            # we don't care about the rest of this page
            first_entry_num = 0
            first_fields = entries[0].split('\0')
            if len(first_fields) < entry_field_count:
                # We didn't get the complete first entry
                # so move start, and grab the next, which
                # should be a full entry
                start += len(entries[0])+1
                first_fields = entries[1].split('\0')
                first_entry_num = 1

            if len(first_fields) <= 1:
                # We didn't even get a dirname here... what do we do?
                # Try a large page size and repeat this query
                page_size *= 2
                pending.append((low, high, cur_dirs))
                continue
            else:
                # Find what entries we are looking for, which occur before and
                # after this first record.
                after = start
                first_dir = first_fields[1]
                first_loc = bisect.bisect_left(cur_dirs, first_dir)

                # These exist before the current location
                pre = cur_dirs[:first_loc]
                # These occur after the current location, which may be in the
                # data we read, or might be after the last entry
                post = cur_dirs[first_loc:]

            if post and len(first_fields) >= entry_field_count:
                # We have records to look at after the first entry

                # Parse the last entry
                last_entry_num = len(entries)-1
                last_fields = entries[last_entry_num].split('\0')
                if len(last_fields) < entry_field_count:
                    # The very last hunk was not complete,
                    # read the previous hunk
                    after = mid + len(block) - len(entries[-1])
                    last_entry_num -= 1
                    last_fields = entries[last_entry_num].split('\0')
                else:
                    after = mid + len(block)

                last_dir = last_fields[1]
                last_loc = bisect.bisect_right(post, last_dir)

                middle_files = post[:last_loc]
                post = post[last_loc:]

                if middle_files:
                    # We have files that should occur in this block
                    # (>= first, <= last)
                    # Either we will find them here, or we can mark them as
                    # missing.

                    if middle_files[0] == first_dir:
                        # We might need to go before this location
                        pre.append(first_dir)
                    if middle_files[-1] == last_dir:
                        post.insert(0, last_dir)

                    # Find out what paths we have
                    paths = {first_dir:[first_fields]}
                    # last_dir might == first_dir so we need to be
                    # careful if we should append rather than overwrite
                    if last_entry_num != first_entry_num:
                        paths.setdefault(last_dir, []).append(last_fields)
                    for num in xrange(first_entry_num+1, last_entry_num):
                        # TODO: jam 20070223 We are already splitting here, so
                        #       shouldn't we just split the whole thing rather
                        #       than doing the split again in add_one_record?
                        fields = entries[num].split('\0')
                        paths.setdefault(fields[1], []).append(fields)

                    for cur_dir in middle_files:
                        for fields in paths.get(cur_dir, []):
                            # offset by 1 because of the opening '\0'
                            # consider changing fields_to_entry to avoid the
                            # extra list slice
                            entry = fields_to_entry(fields[1:])
                            found.setdefault(cur_dir, []).append(entry)

            # Now we have split up everything into pre, middle, and post, and
            # we have handled everything that fell in 'middle'.
            # We add 'post' first, so that we prefer to seek towards the
            # beginning, so that we will tend to go as early as we need, and
            # then only seek forward after that.
            if post:
                pending.append((after, high, post))
            if pre:
                pending.append((low, start-1, pre))

        return found

    def _bisect_recursive(self, dir_name_list):
        """Bisect for entries for all paths and their children.

        This will use bisect to find all records for the supplied paths. It
        will then continue to bisect for any records which are marked as
        directories. (and renames?)

        :param paths: A sorted list of (dir, name) pairs
            eg: [('', 'a'), ('', 'f'), ('a/b', 'c')]
        :return: A dictionary mapping (dir, name, file_id) => [tree_info]
        """
        # Map from (dir, name, file_id) => [tree_info]
        found = {}

        found_dir_names = set()

        # Directories that have been read
        processed_dirs = set()
        # Get the ball rolling with the first bisect for all entries.
        newly_found = self._bisect(dir_name_list)

        while newly_found:
            # Directories that need to be read
            pending_dirs = set()
            paths_to_search = set()
            for entry_list in newly_found.itervalues():
                for dir_name_id, trees_info in entry_list:
                    found[dir_name_id] = trees_info
                    found_dir_names.add(dir_name_id[:2])
                    is_dir = False
                    for tree_info in trees_info:
                        minikind = tree_info[0]
                        if minikind == 'd':
                            if is_dir:
                                # We already processed this one as a directory,
                                # we don't need to do the extra work again.
                                continue
                            subdir, name, file_id = dir_name_id
                            path = osutils.pathjoin(subdir, name)
                            is_dir = True
                            if path not in processed_dirs:
                                pending_dirs.add(path)
                        elif minikind == 'r':
                            # Rename, we need to directly search the target
                            # which is contained in the fingerprint column
                            dir_name = osutils.split(tree_info[1])
                            if dir_name[0] in pending_dirs:
                                # This entry will be found in the dir search
                                continue
                            # TODO: We need to check if this entry has
                            #       already been found. Otherwise we might be
                            #       hitting infinite recursion.
                            if dir_name not in found_dir_names:
                                paths_to_search.add(dir_name)
            # Now we have a list of paths to look for directly, and
            # directory blocks that need to be read.
            # newly_found is mixing the keys between (dir, name) and path
            # entries, but that is okay, because we only really care about the
            # file_id matching.
            newly_found = self._bisect(sorted(paths_to_search))
            newly_found.update(self._bisect_dirblocks(sorted(pending_dirs)))
            processed_dirs.update(pending_dirs)
        return found

    def _empty_parent_info(self):
        return [DirState.NULL_PARENT_DETAILS] * (len(self._parents) -
                                                 len(self._ghosts))

    def _ensure_block(self, parent_block_index, parent_row_index, dirname):
        """Ensure a block for dirname exists.

        This function exists to let callers which know that there is a
        directory dirname ensure that the block for it exists. This block can
        fail to exist because of demand loading, or because a directory had no
        children. In either case it is not an error. It is however an error to
        call this if there is no parent entry for the directory, and thus the
        function requires the coordinates of such an entry to be provided.

        The root row is special cased and can be indicated with a parent block
        and row index of -1.

        :param parent_block_index: The index of the block in which dirname's row
            exists.
        :param parent_row_index: The index in the parent block where the row
            exists.
        :param dirname: The utf8 dirname to ensure there is a block for.
        :return: The index for the block.
        """
        if dirname == '' and parent_row_index == 0 and parent_block_index == 0:
            # This is the signature of the root row, and the
            # contents-of-root row is always index 1
            return 1
        # the basename of the directory must be the end of its full name.
        if not (parent_block_index == -1 and
            parent_row_index == -1 and dirname == ''):
            assert dirname.endswith(
                self._dirblocks[parent_block_index][1][parent_row_index][0][1])
        block_index, present = self._find_block_index_from_key((dirname, '', ''))
        if not present:
            ## In future, when doing partial parsing, this should load and
            # populate the entire block.
            self._dirblocks.insert(block_index, (dirname, []))
        return block_index

    def _entries_to_current_state(self, new_entries):
        """Load new_entries into self.dirblocks.

        Process new_entries into the current state object, making them the active
        state. The entries are grouped together by directory to form dirblocks.

        :param new_entries: A sorted list of entries. This function does not sort
            to prevent unneeded overhead when callers have a sorted list already.
        :return: Nothing.
        """
        assert new_entries[0][0][0:2] == ('', ''), \
            "Missing root row %r" % (new_entries[0][0],)
        # The two blocks here are deliberate: the root block and the
        # contents-of-root block.
        self._dirblocks = [('', []), ('', [])]
        current_block = self._dirblocks[0][1]
        current_dirname = ''
        append_entry = current_block.append
        for entry in new_entries:
            if entry[0][0] != current_dirname:
                # new block - different dirname
                current_block = []
                current_dirname = entry[0][0]
                self._dirblocks.append((current_dirname, current_block))
                append_entry = current_block.append
            # append the entry to the current block
            append_entry(entry)
        self._split_root_dirblock_into_contents()

    def _split_root_dirblock_into_contents(self):
        """Split the root dirblocks into root and contents-of-root.

        After parsing by path, we end up with root entries and contents-of-root
        entries in the same block. This loop splits them out again.
        """
        # The above loop leaves the "root block" entries mixed with the
        # "contents-of-root block". But we don't want an if check on
        # all entries, so instead we just fix it up here.
        assert self._dirblocks[1] == ('', [])
        root_block = []
        contents_of_root_block = []
        for entry in self._dirblocks[0][1]:
            if not entry[0][1]: # This is a root entry
                root_block.append(entry)
            else:
                contents_of_root_block.append(entry)
        self._dirblocks[0] = ('', root_block)
        self._dirblocks[1] = ('', contents_of_root_block)

    def _entry_to_line(self, entry):
        """Serialize entry to a NULL delimited line ready for _get_output_lines.

        :param entry: An entry_tuple as defined in the module docstring.
        """
        entire_entry = list(entry[0])
        for tree_number, tree_data in enumerate(entry[1]):
            # (minikind, fingerprint, size, executable, tree_specific_string)
            entire_entry.extend(tree_data)
            # 3 for the key, 5 for the fields per tree.
            tree_offset = 3 + tree_number * 5
            # minikind
            entire_entry[tree_offset + 0] = tree_data[0]
            # size
            entire_entry[tree_offset + 2] = str(tree_data[2])
            # executable
            entire_entry[tree_offset + 3] = DirState._to_yesno[tree_data[3]]
        return '\0'.join(entire_entry)
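
    # For illustration (invented values): a root-level file 'foo' with no
    # parent trees would be joined from the fields
    #   ['', 'foo', 'foo-file-id', 'f', '<sha1>', '12', 'n', '<packed_stat>']
    # yielding one NULL-delimited line in the state file.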

    def _fields_per_entry(self):
        """How many null separated fields should be in each entry row.

        Each line now has an extra '\n' field which is not used
        so we just skip over it
        entry size:
            3 fields for the key
            + number of fields per tree_data (5) * tree count
            + newline
        """
        tree_count = 1 + self._num_present_parents()
        return 3 + 5 * tree_count + 1
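
    # e.g. with one present parent tree, tree_count == 2 and each row has
    # 3 + 5*2 + 1 == 14 NULL-separated fields.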

    def _find_block(self, key, add_if_missing=False):
        """Return the block that key should be present in.

        :param key: A dirstate entry key.
        :return: The block tuple.
        """
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            if not add_if_missing:
                # check to see if key is versioned itself - we might want to
                # add it anyway, because dirs with no entries don't get a
                # dirblock at parse time.
                # This is an uncommon branch to take: most dirs have children,
                # and most code works with versioned paths.
                parent_base, parent_name = osutils.split(key[0])
                if not self._get_block_entry_index(parent_base, parent_name, 0)[3]:
                    # some parent path has not been added - it's an error to add
                    # this child
                    raise errors.NotVersionedError(key[0:2], str(self))
            self._dirblocks.insert(block_index, (key[0], []))
        return self._dirblocks[block_index]

    def _find_block_index_from_key(self, key):
        """Find the dirblock index for a key.

        :return: The block index, True if the block for the key is present.
        """
        if key[0:2] == ('', ''):
            return 0, True
        block_index = bisect_dirblock(self._dirblocks, key[0], 1,
                                      cache=self._split_path_cache)
        # _right returns one-past-where-key is so we have to subtract
        # one to use it. we use _right here because there are two
        # '' blocks - the root, and the contents of root
        # we always have a minimum of 2 in self._dirblocks: root and
        # root-contents, and for '', we get 2 back, so this is
        # simple and correct:
        present = (block_index < len(self._dirblocks) and
            self._dirblocks[block_index][0] == key[0])
        return block_index, present

    def _find_entry_index(self, key, block):
        """Find the entry index for a key in a block.

        :return: The entry index, True if the entry for the key is present.
        """
        entry_index = bisect.bisect_left(block, (key, []))
        present = (entry_index < len(block) and
            block[entry_index][0] == key)
        return entry_index, present

    @staticmethod
    def from_tree(tree, dir_state_filename):
        """Create a dirstate from a bzr Tree.

        :param tree: The tree which should provide parent information and
            inventory ids.
        :return: a DirState object which is currently locked for writing.
            (it was locked by DirState.initialize)
        """
        result = DirState.initialize(dir_state_filename)
        try:
            tree.lock_read()
            try:
                parent_ids = tree.get_parent_ids()
                num_parents = len(parent_ids)
                parent_trees = []
                for parent_id in parent_ids:
                    parent_tree = tree.branch.repository.revision_tree(parent_id)
                    parent_trees.append((parent_id, parent_tree))
                    parent_tree.lock_read()
                result.set_parent_trees(parent_trees, [])
                result.set_state_from_inventory(tree.inventory)
            finally:
                for revid, parent_tree in parent_trees:
                    parent_tree.unlock()
                tree.unlock()
        except:
            # The caller won't have a chance to unlock this, so make sure we
            # clean up ourselves.
            result.unlock()
            raise
        return result

    def update_entry(self, entry, abspath, stat_value,
                     _stat_to_minikind=_stat_to_minikind,
                     _pack_stat=pack_stat):
        """Update the entry based on what is actually on disk.

        :param entry: This is the dirblock entry for the file in question.
        :param abspath: The path on disk for this file.
        :param stat_value: (optional) if we already have done a stat on the
            file, re-use it.
        :return: The sha1 hexdigest of the file (40 bytes) or link target of a
            symlink.
        """
        try:
            minikind = _stat_to_minikind[stat_value.st_mode & 0170000]
        except KeyError:
            # Unhandled kind
            return None
        packed_stat = _pack_stat(stat_value)
        (saved_minikind, saved_link_or_sha1, saved_file_size,
         saved_executable, saved_packed_stat) = entry[1][0]

        if (minikind == saved_minikind
            and packed_stat == saved_packed_stat):
            # The stat hasn't changed since we saved, so we can re-use the
            # saved sha hash.
            if minikind == 'd':
                return None

            # size should also be in packed_stat
            if saved_file_size == stat_value.st_size:
                return saved_link_or_sha1

        # If we have gotten this far, that means that we need to actually
        # process this entry.
        link_or_sha1 = None
        if minikind == 'f':
            link_or_sha1 = self._sha1_file(abspath, entry)
            executable = self._is_executable(stat_value.st_mode,
                                             saved_executable)
            if self._cutoff_time is None:
                self._sha_cutoff_time()
            if (stat_value.st_mtime < self._cutoff_time
                and stat_value.st_ctime < self._cutoff_time):
                entry[1][0] = ('f', link_or_sha1, stat_value.st_size,
                               executable, packed_stat)
            else:
                entry[1][0] = ('f', '', stat_value.st_size,
                               executable, DirState.NULLSTAT)
        elif minikind == 'd':
            entry[1][0] = ('d', '', 0, False, packed_stat)
            if saved_minikind != 'd':
                # This changed from something into a directory. Make sure we
                # have a directory block for it. This doesn't happen very
                # often, so this doesn't have to be super fast.
                block_index, entry_index, dir_present, file_present = \
                    self._get_block_entry_index(entry[0][0], entry[0][1], 0)
                self._ensure_block(block_index, entry_index,
                                   osutils.pathjoin(entry[0][0], entry[0][1]))
        elif minikind == 'l':
            link_or_sha1 = self._read_link(abspath, saved_link_or_sha1)
            if self._cutoff_time is None:
                self._sha_cutoff_time()
            if (stat_value.st_mtime < self._cutoff_time
                and stat_value.st_ctime < self._cutoff_time):
                entry[1][0] = ('l', link_or_sha1, stat_value.st_size,
                               False, packed_stat)
            else:
                entry[1][0] = ('l', '', stat_value.st_size,
                               False, DirState.NULLSTAT)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return link_or_sha1

    def _sha_cutoff_time(self):
        """Return cutoff time.

        Files modified more recently than this time are at risk of being
        undetectably modified and so can't be cached.
        """
        # Cache the cutoff time as long as we hold a lock.
        # time.time() isn't super expensive (approx 3.38us), but
        # when you call it 50,000 times it adds up.
        # For comparison, os.lstat() costs 7.2us if it is hot.
        self._cutoff_time = int(time.time()) - 3
        return self._cutoff_time

    def _lstat(self, abspath, entry):
        """Return the os.lstat value for this path."""
        return os.lstat(abspath)

    def _sha1_file(self, abspath, entry):
        """Calculate the SHA1 of a file by reading the full text"""
        f = file(abspath, 'rb', buffering=65000)
        try:
            return osutils.sha_file(f)
        finally:
            f.close()

    def _is_executable(self, mode, old_executable):
        """Is this file executable?"""
        return bool(S_IEXEC & mode)

    def _is_executable_win32(self, mode, old_executable):
        """On win32 the executable bit is stored in the dirstate."""
        return old_executable

    if sys.platform == 'win32':
        _is_executable = _is_executable_win32

    def _read_link(self, abspath, old_link):
        """Read the target of a symlink"""
        # TODO: jam 20070301 On Win32, this could just return the value
        #       already in memory. However, this really needs to be done at a
        #       higher level, because there either won't be anything on disk,
        #       or the thing on disk will be a file.
        return os.readlink(abspath)

    def get_ghosts(self):
        """Return a list of the parent tree revision ids that are ghosts."""
        self._read_header_if_needed()
        return self._ghosts

    def get_lines(self):
        """Serialise the entire dirstate to a sequence of lines."""
        if (self._header_state == DirState.IN_MEMORY_UNMODIFIED and
            self._dirblock_state == DirState.IN_MEMORY_UNMODIFIED):
            # read what's on disk.
            self._state_file.seek(0)
            return self._state_file.readlines()
        lines = []
        lines.append(self._get_parents_line(self.get_parent_ids()))
        lines.append(self._get_ghosts_line(self._ghosts))
        # append the root line which is special cased
        lines.extend(map(self._entry_to_line, self._iter_entries()))
        return self._get_output_lines(lines)

    def _get_ghosts_line(self, ghost_ids):
        """Create a line for the state file for ghost information."""
        return '\0'.join([str(len(ghost_ids))] + ghost_ids)

    def _get_parents_line(self, parent_ids):
        """Create a line for the state file for parents information."""
        return '\0'.join([str(len(parent_ids))] + parent_ids)
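
    # For example (illustrative ids): _get_parents_line(['rev-1', 'rev-2'])
    # returns '2\x00rev-1\x00rev-2', and _get_ghosts_line([]) returns '0'.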

    def _get_fields_to_entry(self):
        """Get a function which converts entry fields into an entry record.

        This handles size and executable, as well as parent records.

        :return: A function which takes a list of fields, and returns an
            appropriate record for storing in memory.
        """
        # This is intentionally unrolled for performance
        num_present_parents = self._num_present_parents()
        if num_present_parents == 0:
            def fields_to_entry_0_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],        # minikind
                        fields[4],        # fingerprint
                        _int(fields[5]),  # size
                        fields[6] == 'y', # executable
                        fields[7],        # packed_stat or revision_id
                    )])
            return fields_to_entry_0_parents
        elif num_present_parents == 1:
            def fields_to_entry_1_parent(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],        # minikind
                        fields[4],        # fingerprint
                        _int(fields[5]),  # size
                        fields[6] == 'y', # executable
                        fields[7],        # packed_stat or revision_id
                    ),
                    ( # Parent 1
                        fields[8],         # minikind
                        fields[9],         # fingerprint
                        _int(fields[10]),  # size
                        fields[11] == 'y', # executable
                        fields[12],        # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_1_parent
        elif num_present_parents == 2:
            def fields_to_entry_2_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                return (path_name_file_id_key, [
                    ( # Current tree
                        fields[3],        # minikind
                        fields[4],        # fingerprint
                        _int(fields[5]),  # size
                        fields[6] == 'y', # executable
                        fields[7],        # packed_stat or revision_id
                    ),
                    ( # Parent 1
                        fields[8],         # minikind
                        fields[9],         # fingerprint
                        _int(fields[10]),  # size
                        fields[11] == 'y', # executable
                        fields[12],        # packed_stat or revision_id
                    ),
                    ( # Parent 2
                        fields[13],        # minikind
                        fields[14],        # fingerprint
                        _int(fields[15]),  # size
                        fields[16] == 'y', # executable
                        fields[17],        # packed_stat or revision_id
                    ),
                    ])
            return fields_to_entry_2_parents
        else:
            def fields_to_entry_n_parents(fields, _int=int):
                path_name_file_id_key = (fields[0], fields[1], fields[2])
                trees = [(fields[cur],          # minikind
                          fields[cur+1],        # fingerprint
                          _int(fields[cur+2]),  # size
                          fields[cur+3] == 'y', # executable
                          fields[cur+4],        # stat or revision_id
                         ) for cur in xrange(3, len(fields)-1, 5)]
                return path_name_file_id_key, trees
            return fields_to_entry_n_parents

    def get_parent_ids(self):
        """Return a list of the parent tree ids for the directory state."""
        self._read_header_if_needed()
        return list(self._parents)

    def _get_block_entry_index(self, dirname, basename, tree_index):
        """Get the coordinates for a path in the state structure.

        :param dirname: The utf8 dirname to lookup.
        :param basename: The utf8 basename to lookup.
        :param tree_index: The index of the tree for which this lookup should
            be attempted.
        :return: A tuple describing where the path is located, or should be
            inserted. The tuple contains four fields: the block index, the row
            index, and two booleans which are True when the directory is
            present, and when the entire path is present. There is no guarantee
            that either coordinate is currently reachable unless the found
            field for it is True. For instance, a directory not present in the
            searched tree may be returned with a value one greater than the
            current highest block offset. The directory present field will
            always be True when the path present field is True. The directory
            present field does NOT indicate that the directory is present in
            the searched tree, rather it indicates that there are at least some
            files in some parent tree present.
        """
        self._read_dirblocks_if_needed()
        key = dirname, basename, ''
        block_index, present = self._find_block_index_from_key(key)
        if not present:
            # no such directory - return the dir index and 0 for the row.
            return block_index, 0, False, False
        block = self._dirblocks[block_index][1] # access the entries only
        entry_index, present = self._find_entry_index(key, block)
        # linear search through present entries at this path to find the one
        # requested.
        while entry_index < len(block) and block[entry_index][0][1] == basename:
            if block[entry_index][1][tree_index][0] not in \
                   ('a', 'r'): # absent, relocated
                return block_index, entry_index, True, True
            entry_index += 1
        return block_index, entry_index, True, False

    def _get_entry(self, tree_index, fileid_utf8=None, path_utf8=None):
        """Get the dirstate entry for path in tree tree_index.

        If either file_id or path is supplied, it is used as the key to lookup.
        If both are supplied, the fastest lookup is used, and an error is
        raised if they do not both point at the same row.

        :param tree_index: The index of the tree we wish to locate this path
            in. If the path is present in that tree, the entry containing its
            details is returned, otherwise (None, None) is returned
            0 is the working tree, higher indexes are successive parent
            trees.
        :param fileid_utf8: A utf8 file_id to look up.
        :param path_utf8: An utf8 path to be looked up.
        :return: The dirstate entry tuple for path, or (None, None)
        """
        self._read_dirblocks_if_needed()
        if path_utf8 is not None:
            assert path_utf8.__class__ == str, \
                'path_utf8 is not a str: %s %s' % (type(path_utf8), path_utf8)
            # path lookups are faster
            dirname, basename = osutils.split(path_utf8)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, tree_index)
            if not file_present:
                return None, None
            entry = self._dirblocks[block_index][1][entry_index]
            assert entry[0][2] and entry[1][tree_index][0] not in ('a', 'r'), \
                'unversioned entry?!?!'
            if fileid_utf8:
                if entry[0][2] != fileid_utf8:
                    raise errors.BzrError('integrity error ? : mismatching'
                                          ' tree_index, file_id and path')
            return entry
        else:
            assert fileid_utf8 is not None
            possible_keys = self._get_id_index().get(fileid_utf8, None)
            if not possible_keys:
                return None, None
            for key in possible_keys:
                block_index, present = \
                    self._find_block_index_from_key(key)
                # strange, probably indicates an out of date
                # id index - for now, allow this.
                if not present:
                    continue
                # WARNING: DO not change this code to use _get_block_entry_index
                # as that function is not suitable: it does not use the key
                # to lookup, and thus the wrong coordinates are returned.
                block = self._dirblocks[block_index][1]
                entry_index, present = self._find_entry_index(key, block)
                if present:
                    entry = self._dirblocks[block_index][1][entry_index]
                    if entry[1][tree_index][0] in 'fdlt':
                        # this is the result we are looking for: the
                        # real home of this file_id in this tree.
                        return entry
                    if entry[1][tree_index][0] == 'a':
                        # there is no home for this entry in this tree
                        return None, None
                    assert entry[1][tree_index][0] == 'r', \
                        "entry %r has invalid minikind %r for tree %r" \
                        % (entry,
                           entry[1][tree_index][0],
                           tree_index)
                    real_path = entry[1][tree_index][1]
                    return self._get_entry(tree_index, fileid_utf8=fileid_utf8,
                                           path_utf8=real_path)
            return None, None

    @classmethod
    def initialize(cls, path):
        """Create a new dirstate on path.

        The new dirstate will be an empty tree - that is it has no parents,
        and only a root node - which has id ROOT_ID.

        :param path: The name of the file for the dirstate.
        :return: A write-locked DirState object.
        """
        # This constructs a new DirState object on a path, sets the _state_file
        # to a new empty file for that path. It then calls _set_data() with our
        # stock empty dirstate information - a root with ROOT_ID, no children,
        # and no parents. Finally it calls save() to ensure that this data will
        # persist.
        result = cls(path)
        # root dir and root dir contents with no children.
        empty_tree_dirblocks = [('', []), ('', [])]
        # a new root directory, with a NULLSTAT.
        empty_tree_dirblocks[0][1].append(
            (('', '', inventory.ROOT_ID), [
                ('d', '', 0, False, DirState.NULLSTAT),
            ]))
        result.lock_write()
        try:
            result._set_data([], empty_tree_dirblocks)
            result.save()
        except:
            result.unlock()
            raise
        return result
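
    # Hedged usage sketch (hypothetical path and id): create, populate, unlock:
    #   state = DirState.initialize('dirstate')
    #   try:
    #       state.add('hello.txt', 'hello-file-id', 'file', None, '')
    #       state.save()
    #   finally:
    #       state.unlock()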

    def _inv_entry_to_details(self, inv_entry):
        """Convert an inventory entry (from a revision tree) to state details.

        :param inv_entry: An inventory entry whose sha1 and link targets can be
            relied upon, and which has a revision set.
        :return: A details tuple - the details for a single tree at a path +
            id.
        """
        kind = inv_entry.kind
        minikind = DirState._kind_to_minikind[kind]
        tree_data = inv_entry.revision
        assert len(tree_data) > 0, 'empty revision for the inv_entry.'
        if kind == 'directory':
            fingerprint = ''
            size = 0
            executable = False
        elif kind == 'symlink':
            fingerprint = inv_entry.symlink_target or ''
            size = 0
            executable = False
        elif kind == 'file':
            fingerprint = inv_entry.text_sha1 or ''
            size = inv_entry.text_size or 0
            executable = inv_entry.executable
        elif kind == 'tree-reference':
            fingerprint = inv_entry.reference_revision or ''
            size = 0
            executable = False
        else:
            raise Exception("can't pack %s" % inv_entry)
        return (minikind, fingerprint, size, executable, tree_data)

    def _iter_entries(self):
        """Iterate over all the entries in the dirstate.

        Each yielded item is an entry in the standard format described in the
        docstring of bzrlib.dirstate.
        """
        self._read_dirblocks_if_needed()
        for directory in self._dirblocks:
            for entry in directory[1]:
                yield entry

    def _get_id_index(self):
        """Get an id index of self._dirblocks."""
        if self._id_index is None:
            id_index = {}
            for key, tree_details in self._iter_entries():
                id_index.setdefault(key[2], set()).add(key)
            self._id_index = id_index
        return self._id_index

    def _get_output_lines(self, lines):
        """Format lines for final output.

        :param lines: A sequence of lines containing the parents list and the
            path lines.
        """
        output_lines = [DirState.HEADER_FORMAT_3]
        lines.append('') # a final newline
        inventory_text = '\0\n\0'.join(lines)
        output_lines.append('crc32: %s\n' % (zlib.crc32(inventory_text),))
        # -3, 1 for num parents, 1 for ghosts, 1 for final newline
        num_entries = len(lines)-3
        output_lines.append('num_entries: %s\n' % (num_entries,))
        output_lines.append(inventory_text)
        return output_lines
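
    # Resulting on-disk layout (illustrative):
    #   #bazaar dirstate flat format 3\n
    #   crc32: <crc of the joined text>\n
    #   num_entries: <count>\n
    #   <parents line>\0\n\0<ghosts line>\0\n\0<entry lines ...>\0\n\0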

    def _make_deleted_row(self, fileid_utf8, parents):
        """Return a deleted row for fileid_utf8."""
        return ('/', 'RECYCLED.BIN', 'file', fileid_utf8, 0, DirState.NULLSTAT,
            ''), parents

    def _num_present_parents(self):
        """The number of parent entries in each record row."""
        return len(self._parents) - len(self._ghosts)

    @staticmethod
    def on_file(path):
        """Construct a DirState on the file at path path.

        :return: An unlocked DirState object, associated with the given path.
        """
        result = DirState(path)
        return result

    def _read_dirblocks_if_needed(self):
        """Read in all the dirblocks from the file if they are not in memory.

        This populates self._dirblocks, and sets self._dirblock_state to
        IN_MEMORY_UNMODIFIED. It is not currently ready for incremental block
        loading.
        """
        self._read_header_if_needed()
        if self._dirblock_state == DirState.NOT_IN_MEMORY:
            # move the _state_file pointer to after the header (in case bisect
            # has been called in the mean time)
            self._state_file.seek(self._end_of_header)
            text = self._state_file.read()
            # TODO: check the crc checksums. crc_measured = zlib.crc32(text)

            fields = text.split('\0')
            # Remove the last blank entry
            trailing = fields.pop()
            assert trailing == ''
            # consider turning fields into a tuple.

            # skip the first field which is the trailing null from the header.
            cur = 1
            # Each line now has an extra '\n' field which is not used
            # so we just skip over it
            # entry size:
            #  3 fields for the key
            #  + number of fields per tree_data (5) * tree count
            #  + newline
            num_present_parents = self._num_present_parents()
            tree_count = 1 + num_present_parents
            entry_size = self._fields_per_entry()
            expected_field_count = entry_size * self._num_entries
            field_count = len(fields)
            # this checks our adjustment, and also catches file too short.
            assert field_count - cur == expected_field_count, \
                'field count incorrect %s != %s, entry_size=%s, '\
                'num_entries=%s fields=%r' % (
                    field_count - cur, expected_field_count, entry_size,
                    self._num_entries, fields)

            if num_present_parents == 1:
                # Bind external functions to local names
                _int = int
                # We access all fields in order, so we can just iterate over
                # them. Grab a straight iterator over the fields. (We use an
                # iterator because we don't want to do a lot of additions, nor
                # do we want to do a lot of slicing)
                next = iter(fields).next
                # Move the iterator to the current position
                for x in xrange(cur):
                    next()
                # The two blocks here are deliberate: the root block and the
                # contents-of-root block.
                self._dirblocks = [('', []), ('', [])]
                current_block = self._dirblocks[0][1]
                current_dirname = ''
                append_entry = current_block.append
                for count in xrange(self._num_entries):
                    dirname = next()
                    name = next()
                    file_id = next()
                    if dirname != current_dirname:
                        # new block - different dirname
                        current_block = []
                        current_dirname = dirname
                        self._dirblocks.append((current_dirname, current_block))
                        append_entry = current_block.append
                    # we know current_dirname == dirname, so re-use it to avoid
                    # creating new strings
                    entry = ((current_dirname, name, file_id),
                             [( # current tree details
                                next(),        # minikind
                                next(),        # fingerprint
                                _int(next()),  # size
                                next() == 'y', # executable
                                next(),        # packed_stat or revision_id
                              ),
                              ( # parent tree details
                                next(),        # minikind
                                next(),        # fingerprint
                                _int(next()),  # size
                                next() == 'y', # executable
                                next(),        # packed_stat or revision_id
                              ),
                             ])
                    trailing = next()
                    assert trailing == '\n'
                    # append the entry to the current block
                    append_entry(entry)
                self._split_root_dirblock_into_contents()
            else:
                fields_to_entry = self._get_fields_to_entry()
                entries = [fields_to_entry(fields[pos:pos+entry_size])
                           for pos in xrange(cur, field_count, entry_size)]
                self._entries_to_current_state(entries)
            # To convert from format 2 => format 3
            # self._dirblocks = sorted(self._dirblocks,
            #                          key=lambda blk:blk[0].split('/'))
            # To convert from format 3 => format 2
            # self._dirblocks = sorted(self._dirblocks)
            self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
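
    # Illustrative sketch, not from the original source: the field-count
    # arithmetic used above. Per serialised entry there are 3 key fields,
    # 5 tree-data fields for each tree, and 1 trailing '\n' field, which is
    # what _fields_per_entry() accounts for:
    #
    #   tree_count = 1 + num_present_parents
    #   entry_size = 3 + 5 * tree_count + 1
    #   # with one present parent: 3 + 5 * 2 + 1 == 14 fields per entry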

    def _read_header(self):
        """This reads in the metadata header, and the parent ids.

        After reading in, the file should be positioned at the null
        just before the start of the first record in the file.

        :return: (expected crc checksum, number of entries, parent list)
        """
        self._read_prelude()
        parent_line = self._state_file.readline()
        info = parent_line.split('\0')
        num_parents = int(info[0])
        assert num_parents == len(info)-2, 'incorrect parent info line'
        self._parents = info[1:-1]

        ghost_line = self._state_file.readline()
        info = ghost_line.split('\0')
        num_ghosts = int(info[1])
        assert num_ghosts == len(info)-3, 'incorrect ghost info line'
        self._ghosts = info[2:-1]
        self._header_state = DirState.IN_MEMORY_UNMODIFIED
        self._end_of_header = self._state_file.tell()
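
    # Illustrative sketch, not from the original source: the parent line
    # parsed above, with hypothetical revision ids. The count is field 0,
    # and the trailing '\0\n' leaves '\n' as the last split element:
    #
    #   parent_line = '2\x00rev-a\x00rev-b\x00\n'
    #   info = parent_line.split('\0')  # ['2', 'rev-a', 'rev-b', '\n']
    #   assert int(info[0]) == len(info) - 2
    #   assert info[1:-1] == ['rev-a', 'rev-b']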

    def _read_header_if_needed(self):
        """Read the header of the dirstate file if needed."""
        # inline this as it will be called a lot
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)
        if self._header_state == DirState.NOT_IN_MEMORY:
            self._read_header()

    def _read_prelude(self):
        """Read in the prelude header of the dirstate file.

        This only reads in the stuff that is not connected to the crc
        checksum. The position will be correct to read in the rest of
        the file and check the checksum after this point.
        The next entry in the file should be the number of parents,
        and their ids. Followed by a newline.
        """
        header = self._state_file.readline()
        assert header == DirState.HEADER_FORMAT_3, \
            'invalid header line: %r' % (header,)
        crc_line = self._state_file.readline()
        assert crc_line.startswith('crc32: '), 'missing crc32 checksum'
        self.crc_expected = int(crc_line[len('crc32: '):-1])
        num_entries_line = self._state_file.readline()
        assert num_entries_line.startswith('num_entries: '), \
            'missing num_entries line'
        self._num_entries = int(num_entries_line[len('num_entries: '):-1])
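
    # Illustrative sketch, not from the original source: the three prelude
    # lines consumed above, shown with hypothetical values (the exact
    # header string lives in DirState.HEADER_FORMAT_3):
    #
    #   #bazaar dirstate flat format 3\n
    #   crc32: 505929358\n
    #   num_entries: 2\n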

    def save(self):
        """Save any pending changes created during this session.

        We reuse the existing file, because that prevents race conditions with
        file creation, and use oslocks on it to prevent concurrent modification
        and reads - because dirstate's incremental data aggregation is not
        compatible with reading a modified file, and replacing a file in use by
        another process is impossible on Windows.

        A dirstate in read only mode should be smart enough though to validate
        that the file has not changed, and otherwise discard its cache and
        start over, to allow for fine grained read lock duration, so 'status'
        won't block 'commit' - for example.
        """
        if (self._header_state == DirState.IN_MEMORY_MODIFIED or
            self._dirblock_state == DirState.IN_MEMORY_MODIFIED):

            grabbed_write_lock = False
            if self._lock_state != 'w':
                grabbed_write_lock, new_lock = self._lock_token.temporary_write_lock()
                # Switch over to the new lock, as the old one may be closed.
                # TODO: jam 20070315 We should validate the disk file has
                #       not changed contents. Since temporary_write_lock may
                #       not be an atomic operation.
                self._lock_token = new_lock
                self._state_file = new_lock.f
                if not grabbed_write_lock:
                    # We couldn't grab a write lock, so we switch back to a read one
                    return
            try:
                self._state_file.seek(0)
                self._state_file.writelines(self.get_lines())
                self._state_file.truncate()
                self._state_file.flush()
                self._header_state = DirState.IN_MEMORY_UNMODIFIED
                self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
            finally:
                if grabbed_write_lock:
                    self._lock_token = self._lock_token.restore_read_lock()
                    self._state_file = self._lock_token.f
                    # TODO: jam 20070315 We should validate the disk file has
                    #       not changed contents. Since restore_read_lock may
                    #       not be an atomic operation.

    def _set_data(self, parent_ids, dirblocks):
        """Set the full dirstate data in memory.

        This is an internal function used to completely replace the objects
        in memory state. It puts the dirstate into state 'full-dirty'.

        :param parent_ids: A list of parent tree revision ids.
        :param dirblocks: A list containing one tuple for each directory in the
            tree. Each tuple contains the directory path and a list of entries
            found in that directory.
        """
        # our memory copy is now authoritative.
        self._dirblocks = dirblocks
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._parents = list(parent_ids)
        self._id_index = None

    def set_path_id(self, path, new_id):
        """Change the id of path to new_id in the current working tree.

        :param path: The path inside the tree to set - '' is the root, 'foo'
            is the path foo in the root.
        :param new_id: The new id to assign to the path. This must be a utf8
            file id (not unicode, and not None).
        """
        assert new_id.__class__ == str, \
            "path_id %r is not a plain string" % (new_id,)
        self._read_dirblocks_if_needed()
        if len(path):
            # logic not written
            raise NotImplementedError(self.set_path_id)
        # TODO: check new id is unique
        entry = self._get_entry(0, path_utf8=path)
        if entry[0][2] == new_id:
            # Nothing to change.
            return
        # mark the old path absent, and insert a new root path
        self._make_absent(entry)
        self.update_minimal(('', '', new_id), 'd',
            path_utf8='', packed_stat=entry[1][0][4])
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if self._id_index is not None:
            self._id_index.setdefault(new_id, set()).add(entry[0])
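
    # Illustrative usage sketch, not from the original source: `state` is a
    # hypothetical locked DirState whose root should get a new file id.
    # Only the root ('') is supported above; other paths raise
    # NotImplementedError:
    #
    #   state.set_path_id('', 'new-root-id')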

    def set_parent_trees(self, trees, ghosts):
        """Set the parent trees for the dirstate.

        :param trees: A list of revision_id, tree tuples. tree must be provided
            even if the revision_id refers to a ghost: supply an empty tree in
            this case.
        :param ghosts: A list of the revision_ids that are ghosts at the time
            of setting.
        """
        # TODO: generate a list of parent indexes to preserve to save
        # processing specific parent trees. In the common case one tree will
        # be preserved - the left most parent.
        # TODO: if the parent tree is a dirstate, we might want to walk them
        # all by path in parallel for 'optimal' common-case performance.
        # generate new root row.
        self._read_dirblocks_if_needed()
        # TODO future sketch: Examine the existing parents to generate a change
        # map and then walk the new parent trees only, mapping them into the
        # dirstate. Walk the dirstate at the same time to remove unreferenced
        # entries.
        # sketch: loop over all entries in the dirstate, cherry picking
        # entries from the parent trees, if they are not ghost trees.
        # after we finish walking the dirstate, all entries not in the dirstate
        # are deletes, so we want to append them to the end as per the design
        # discussions. So do a set difference on ids with the parents to
        # get deletes, and add them to the end.
        # During the update process we need to answer the following questions:
        # - find other keys containing a fileid in order to create cross-path
        #   links. We don't trivially use the inventory from other trees
        #   because this leads to either double touching, or to accessing
        #   missing keys,
        # - find other keys containing a path
        # We accumulate each entry via this dictionary, including the root
        by_path = {}
        id_index = {}
        # we could do parallel iterators, but because file id data may be
        # scattered throughout, we don't save on index overhead: we have to
        # look at everything anyway. We can probably save cycles by reusing
        # parent data and doing an incremental update when adding an
        # additional parent, but for now the common cases are adding a new
        # parent (merge), and replacing completely (commit), and commit is
        # more common: so optimise merge later.

        # ---- start generation of full tree mapping data
        # what trees should we use?
        parent_trees = [tree for rev_id, tree in trees if rev_id not in ghosts]
        # how many trees do we end up with
        parent_count = len(parent_trees)

        # one: the current tree
        for entry in self._iter_entries():
            # skip entries not in the current tree
            if entry[1][0][0] in ('a', 'r'): # absent, relocated
                continue
            by_path[entry[0]] = [entry[1][0]] + \
                [DirState.NULL_PARENT_DETAILS] * parent_count
            id_index[entry[0][2]] = set([entry[0]])

        # now the parent trees:
        for tree_index, tree in enumerate(parent_trees):
            # the index is off by one, adjust it.
            tree_index = tree_index + 1
            # when we add new locations for a fileid we need these ranges for
            # any fileid in this tree as we set the by_path[id] to:
            # already_processed_tree_details + new_details + new_location_suffix
            # the suffix is from tree_index+1:parent_count+1.
            new_location_suffix = [DirState.NULL_PARENT_DETAILS] * (parent_count - tree_index)
            # now stitch in all the entries from this tree
            for path, entry in tree.inventory.iter_entries_by_dir():
                # here we process each tree's details for each item in the
                # tree. We first update any existing entries for the id at
                # other paths, then we either create or update the entry for
                # the id at the right path, and finally we add (if needed) a
                # mapping from file_id to this path. We do it in this order to
                # allow us to avoid checking all known paths for the id when
                # generating a new entry at this path: by adding the id->path
                # mapping last, all the mappings are valid and have correct
                # relocation records where needed.
                file_id = entry.file_id
                path_utf8 = path.encode('utf8')
                dirname, basename = osutils.split(path_utf8)
                new_entry_key = (dirname, basename, file_id)
                # tree index consistency: All other paths for this id in this
                # tree index must point to the correct path.
                for entry_key in id_index.setdefault(file_id, set()):
                    # TODO:PROFILING: It might be faster to just update
                    # rather than checking if we need to, and then overwrite
                    # the one we are located at.
                    if entry_key != new_entry_key:
                        # this file id is at a different path in one of the
                        # other trees, so put absent pointers there
                        # This is the vertical axis in the matrix, all pointing
                        # to the real path.
                        by_path[entry_key][tree_index] = ('r', path_utf8, 0, False, '')
                # by path consistency: Insert into an existing path record
                # (trivial), or add a new one with relocation pointers for the
                # other tree indexes.
                if new_entry_key in id_index[file_id]:
                    # there is already an entry where this data belongs, just insert it.
                    by_path[new_entry_key][tree_index] = \
                        self._inv_entry_to_details(entry)
                else:
                    # add relocated entries to the horizontal axis - this row
                    # mapping from path,id. We need to look up the correct path
                    # for the indexes from 0 to tree_index -1
                    new_details = []
                    for lookup_index in xrange(tree_index):
                        # boundary case: this is the first occurrence of file_id
                        # so there are no id_index entries; possibly take this
                        # out of the loop?
                        if not len(id_index[file_id]):
                            new_details.append(DirState.NULL_PARENT_DETAILS)
                        else:
                            # grab any one entry, use it to find the right path.
                            # TODO: optimise this to reduce memory use in highly
                            # fragmented situations by reusing the relocation
                            # records.
                            a_key = iter(id_index[file_id]).next()
                            if by_path[a_key][lookup_index][0] in ('r', 'a'):
                                # it's a pointer or missing statement, use it as is.
                                new_details.append(by_path[a_key][lookup_index])
                            else:
                                # we have the right key, make a pointer to it.
                                real_path = ('/'.join(a_key[0:2])).strip('/')
                                new_details.append(('r', real_path, 0, False, ''))
                    new_details.append(self._inv_entry_to_details(entry))
                    new_details.extend(new_location_suffix)
                    by_path[new_entry_key] = new_details
                    id_index[file_id].add(new_entry_key)
        # --- end generation of full tree mappings

        # sort and output all the entries
        new_entries = self._sort_entries(by_path.items())
        self._entries_to_current_state(new_entries)
        self._parents = [rev_id for rev_id, tree in trees]
        self._ghosts = list(ghosts)
        self._header_state = DirState.IN_MEMORY_MODIFIED
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = id_index
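
    # Illustrative sketch, not from the original source: the shape of the
    # by_path matrix built above for a file renamed from 'old' to 'new'
    # between the sole parent tree and tree 0 (sha, size, packed_stat and
    # rev_id are hypothetical placeholders). Each row holds one details
    # tuple per tree; the path the id does not occupy in a given tree
    # carries an 'r' pointer to the path it does occupy:
    #
    #   by_path = {
    #       ('', 'new', 'file-id'): [('f', sha, size, False, packed_stat),
    #                                ('r', 'old', 0, False, '')],
    #       ('', 'old', 'file-id'): [('r', 'new', 0, False, ''),
    #                                ('f', sha, size, False, rev_id)],
    #   }
    #   id_index = {'file-id': set([('', 'new', 'file-id'),
    #                               ('', 'old', 'file-id')])}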

    def _sort_entries(self, entry_list):
        """Given a list of entries, sort them into the right order.

        This is done when constructing a new dirstate from trees - normally we
        try to keep everything in sorted blocks all the time, but sometimes
        it's easier to sort after the fact.
        """
        # TODO: Might be faster to do a schwartzian transform?
        def _key(entry):
            # sort by: directory parts, file name, file id
            return entry[0][0].split('/'), entry[0][1], entry[0][2]
        return sorted(entry_list, key=_key)
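
    # Illustrative sketch, not from the original source: why _key splits on
    # '/'. Plain string order and path-component order disagree when one
    # directory name is a prefix of another, because '-' (0x2d) sorts
    # before '/' (0x2f):
    #
    #   'a-b' < 'a/b'              # True as plain strings
    #   ['a-b'] < ['a', 'b']       # False: 'a-b' > 'a' component-wise
    #
    # Sorting by split components keeps 'a' and all of its children
    # together, ahead of 'a-b'.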

    def set_state_from_inventory(self, new_inv):
        """Set new_inv as the current state.

        This API is called by tree transform, and will usually occur with
        existing parent trees.

        :param new_inv: The inventory object to set current state from.
        """
        self._read_dirblocks_if_needed()
        # incremental algorithm:
        # two iterators: current data and new data, both in dirblock order.
        new_iterator = new_inv.iter_entries_by_dir()
        # we will be modifying the dirstate, so we need a stable iterator. In
        # future we might write one, for now we just clone the state into a
        # list - which is a shallow copy, so each entry is still shared with
        # the dirstate's blocks.
        old_iterator = iter(list(self._iter_entries()))
        # both must have roots so this is safe:
        current_new = new_iterator.next()
        current_old = old_iterator.next()
        def advance(iterator):
            try:
                return iterator.next()
            except StopIteration:
                return None
        while current_new or current_old:
            # skip entries in old that are not really there
            if current_old and current_old[1][0][0] in ('r', 'a'):
                # relocated or absent
                current_old = advance(old_iterator)
                continue
            if current_new:
                # convert new into dirblock style
                new_path_utf8 = current_new[0].encode('utf8')
                new_dirname, new_basename = osutils.split(new_path_utf8)
                new_id = current_new[1].file_id
                new_entry_key = (new_dirname, new_basename, new_id)
                current_new_minikind = \
                    DirState._kind_to_minikind[current_new[1].kind]
                if current_new_minikind == 't':
                    fingerprint = current_new[1].reference_revision
                else:
                    fingerprint = ''
            else:
                # for safety disable variables
                new_path_utf8 = new_dirname = new_basename = new_id = new_entry_key = None
            # 5 cases, we don't have a value that is strictly greater than
            # everything, so we make both end conditions explicit
            if not current_old:
                # old is finished: insert current_new into the state.
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            elif not current_new:
                # new is finished
                self._make_absent(current_old)
                current_old = advance(old_iterator)
            elif new_entry_key == current_old[0]:
                # same - common case
                # TODO: update the record if anything significant has changed.
                # the minimal required trigger is if the execute bit or cached
                # kind has changed.
                if (current_old[1][0][3] != current_new[1].executable or
                    current_old[1][0][0] != current_new_minikind):
                    self.update_minimal(current_old[0], current_new_minikind,
                        executable=current_new[1].executable,
                        path_utf8=new_path_utf8, fingerprint=fingerprint)
                # both sides are dealt with, move on
                current_old = advance(old_iterator)
                current_new = advance(new_iterator)
            elif (new_entry_key[0].split('/') < current_old[0][0].split('/')
                  and new_entry_key[1:] < current_old[0][1:]):
                # new comes before:
                # add an entry for this and advance new
                self.update_minimal(new_entry_key, current_new_minikind,
                    executable=current_new[1].executable,
                    path_utf8=new_path_utf8, fingerprint=fingerprint)
                current_new = advance(new_iterator)
            else:
                # old comes before:
                self._make_absent(current_old)
                current_old = advance(old_iterator)
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        self._id_index = None

    def _make_absent(self, current_old):
        """Mark current_old - an entry - as absent for tree 0.

        :return: True if this was the last details entry for the entry key:
            that is, if the underlying block has had the entry removed, thus
            shrinking in length.
        """
        # build up paths that this id will be left at after the change is made,
        # so we can update their cross references in tree 0
        all_remaining_keys = set()
        # Don't check the working tree, because it's going.
        for details in current_old[1][1:]:
            if details[0] not in ('a', 'r'): # absent, relocated
                all_remaining_keys.add(current_old[0])
            elif details[0] == 'r': # relocated
                # record the key for the real path.
                all_remaining_keys.add(tuple(osutils.split(details[1])) + (current_old[0][2],))
            # absent rows are not present at any path.
        last_reference = current_old[0] not in all_remaining_keys
        if last_reference:
            # the current row consists entirely of the current item (being
            # marked absent), and relocated or absent entries for the other
            # trees: remove it, it's meaningless.
            block = self._find_block(current_old[0])
            entry_index, present = self._find_entry_index(current_old[0], block[1])
            assert present, 'could not find entry for %s' % (current_old,)
            block[1].pop(entry_index)
            # if we have an id_index in use, remove this key from it for this id.
            if self._id_index is not None:
                self._id_index[current_old[0][2]].remove(current_old[0])
        # update all remaining keys for this id to record it as absent. The
        # existing details may either be the record we are marking as deleted
        # (if there were other trees with the id present at this path), or may
        # be relocations.
        for update_key in all_remaining_keys:
            update_block_index, present = \
                self._find_block_index_from_key(update_key)
            assert present, 'could not find block for %s' % (update_key,)
            update_entry_index, present = \
                self._find_entry_index(update_key, self._dirblocks[update_block_index][1])
            assert present, 'could not find entry for %s' % (update_key,)
            update_tree_details = self._dirblocks[update_block_index][1][update_entry_index][1]
            # it must not be absent at the moment
            assert update_tree_details[0][0] != 'a' # absent
            update_tree_details[0] = DirState.NULL_PARENT_DETAILS
        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        return last_reference

    def update_minimal(self, key, minikind, executable=False, fingerprint='',
                       packed_stat=None, size=0, path_utf8=None):
        """Update an entry to the state in tree 0.

        This will either create a new entry at 'key' or update an existing one.
        It also makes sure that any other records which might mention this are
        updated as well.

        :param key: (dir, name, file_id) for the new entry
        :param minikind: The type for the entry ('f' == 'file', 'd' ==
            'directory'), etc.
        :param executable: Should the executable bit be set?
        :param fingerprint: Simple fingerprint for new entry.
        :param packed_stat: packed stat value for new entry.
        :param size: Size information for new entry
        :param path_utf8: key[0] + '/' + key[1], just passed in to avoid doing
            the path join again.
        """
        block = self._find_block(key)[1]
        if packed_stat is None:
            packed_stat = DirState.NULLSTAT
        entry_index, present = self._find_entry_index(key, block)
        new_details = (minikind, fingerprint, size, executable, packed_stat)
        id_index = self._get_id_index()
        if not present:
            # new entry, synthesis cross reference here,
            existing_keys = id_index.setdefault(key[2], set())
            if not existing_keys:
                # not currently in the state, simplest case
                new_entry = key, [new_details] + self._empty_parent_info()
            else:
                # present at one or more existing other paths.
                # grab one of them and use it to generate parent
                # relocation/absent entries.
                new_entry = key, [new_details]
                for other_key in existing_keys:
                    # change the record at other to be a pointer to this new
                    # record. The loop looks similar to the change to
                    # relocations when updating an existing record but it's
                    # not: the test for existing kinds is different: this can
                    # be factored out to a helper though.
                    other_block_index, present = self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    other_entry_index, present = self._find_entry_index(other_key,
                        self._dirblocks[other_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    assert path_utf8 is not None
                    self._dirblocks[other_block_index][1][other_entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')

                num_present_parents = self._num_present_parents()
                for lookup_index in xrange(1, num_present_parents + 1):
                    # grab any one entry, use it to find the right path.
                    # TODO: optimise this to reduce memory use in highly
                    # fragmented situations by reusing the relocation
                    # records.
                    update_block_index, present = \
                        self._find_block_index_from_key(other_key)
                    assert present, 'could not find block for %s' % (other_key,)
                    update_entry_index, present = \
                        self._find_entry_index(other_key, self._dirblocks[update_block_index][1])
                    assert present, 'could not find entry for %s' % (other_key,)
                    update_details = self._dirblocks[update_block_index][1][update_entry_index][1][lookup_index]
                    if update_details[0] in ('r', 'a'): # relocated, absent
                        # it's a pointer or absent in lookup_index's tree, use
                        # it as is.
                        new_entry[1].append(update_details)
                    else:
                        # we have the right key, make a pointer to it.
                        pointer_path = osutils.pathjoin(*other_key[0:2])
                        new_entry[1].append(('r', pointer_path, 0, False, ''))
            block.insert(entry_index, new_entry)
            existing_keys.add(key)
        else:
            # Does the new state matter?
            block[entry_index][1][0] = new_details
            # parents cannot be affected by what we do.
            # other occurrences of this id can be found
            # from the id index.
            # tree index consistency: All other paths for this id in this tree
            # index must point to the correct path. We have to loop here because
            # we may have passed entries in the state with this file id already
            # that were absent - where parent entries are - and they need to be
            # converted to relocated.
            assert path_utf8 is not None
            for entry_key in id_index.setdefault(key[2], set()):
                # TODO:PROFILING: It might be faster to just update
                # rather than checking if we need to, and then overwrite
                # the one we are located at.
                if entry_key != key:
                    # this file id is at a different path in one of the
                    # other trees, so put absent pointers there
                    # This is the vertical axis in the matrix, all pointing
                    # to the real path.
                    block_index, present = self._find_block_index_from_key(entry_key)
                    assert present
                    entry_index, present = self._find_entry_index(entry_key, self._dirblocks[block_index][1])
                    assert present
                    self._dirblocks[block_index][1][entry_index][1][0] = \
                        ('r', path_utf8, 0, False, '')
        # add a containing dirblock if needed.
        if new_details[0] == 'd':
            subdir_key = (osutils.pathjoin(*key[0:2]), '', '')
            block_index, present = self._find_block_index_from_key(subdir_key)
            if not present:
                self._dirblocks.insert(block_index, (subdir_key[0], []))

        self._dirblock_state = DirState.IN_MEMORY_MODIFIED
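
    # Illustrative usage sketch, not from the original source: `state` and
    # `sha1` are hypothetical. Recording a new file at 'dir/name' in tree 0
    # builds new_details = ('f', sha1, 30, False, DirState.NULLSTAT) and
    # inserts it at the sorted position in its block:
    #
    #   state.update_minimal(('dir', 'name', 'file-id'), 'f',
    #       fingerprint=sha1, size=30, path_utf8='dir/name')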

    def _validate(self):
        """Check that invariants on the dirblock are correct.

        This can be useful in debugging; it shouldn't be necessary in
        normal code.

        This must be called with a lock held.
        """
        # NOTE: This must always raise AssertionError not just assert,
        # otherwise it may not behave properly under python -O
        #
        # TODO: All entries must have some content that's not 'a' or 'r',
        # otherwise it could just be removed.
        #
        # TODO: All relocations must point directly to a real entry.
        #
        # TODO: No repeated keys.
        from pprint import pformat
        self._read_dirblocks_if_needed()
        if len(self._dirblocks) > 0:
            if not self._dirblocks[0][0] == '':
                raise AssertionError(
                    "dirblocks don't start with root block:\n" + \
                    pformat(self._dirblocks))
        if len(self._dirblocks) > 1:
            if not self._dirblocks[1][0] == '':
                raise AssertionError(
                    "dirblocks missing root directory:\n" + \
                    pformat(self._dirblocks))
        # the dirblocks are sorted by their path components, name, and dir id
        dir_names = [d[0].split('/')
            for d in self._dirblocks[1:]]
        if dir_names != sorted(dir_names):
            raise AssertionError(
                "dir names are not in sorted order:\n" + \
                pformat(self._dirblocks) + \
                "\nkeys:\n" + \
                pformat(dir_names))
        for dirblock in self._dirblocks:
            # within each dirblock, the entries are sorted by filename and
            # then by id.
            for entry in dirblock[1]:
                if dirblock[0] != entry[0][0]:
                    raise AssertionError(
                        "entry key for %r "
                        "doesn't match directory name in\n%r" %
                        (entry, pformat(dirblock)))
            if dirblock[1] != sorted(dirblock[1]):
                raise AssertionError(
                    "dirblock for %r is not sorted:\n%s" % \
                    (dirblock[0], pformat(dirblock)))

        def check_valid_parent():
            """Check that the current entry has a valid parent.

            This makes sure that the parent has a record,
            and that the parent isn't marked as "absent" in the
            current tree. (It is invalid to have a non-absent file in an
            absent directory.)
            """
            if entry[0][0:2] == ('', ''):
                # There should be no parent for the root row
                return
            parent_entry = self._get_entry(tree_index, path_utf8=entry[0][0])
            if parent_entry == (None, None):
                raise AssertionError(
                    "no parent entry for: %s in tree %s"
                    % (this_path, tree_index))
            if parent_entry[1][tree_index][0] != 'd':
                raise AssertionError(
                    "Parent entry for %s is not marked as a valid"
                    " directory. %s" % (this_path, parent_entry,))

        # For each file id, for each tree: either
        # the file id is not present at all; all rows with that id in the
        # key have it marked as 'absent',
        # OR the file id is present under exactly one name; any other entries
        # that mention that id point to the correct name.
        #
        # We check this with a dict per tree pointing either to the present
        # name, or None if absent.
        tree_count = self._num_present_parents() + 1
        id_path_maps = [dict() for i in range(tree_count)]
        # Make sure that all renamed entries point to the correct location.
        for entry in self._iter_entries():
            file_id = entry[0][2]
            this_path = osutils.pathjoin(entry[0][0], entry[0][1])
            if len(entry[1]) != tree_count:
                raise AssertionError(
                    "wrong number of entry details for row\n%s" \
                    ",\nexpected %d" % \
                    (pformat(entry), tree_count))
            for tree_index, tree_state in enumerate(entry[1]):
                this_tree_map = id_path_maps[tree_index]
                minikind = tree_state[0]
                # have we seen this id before in this column?
                if file_id in this_tree_map:
                    previous_path = this_tree_map[file_id]
                    # any later mention of this file must be consistent with
                    # what was said before
                    if minikind == 'a':
                        if previous_path is not None:
                            raise AssertionError(
                                "file %s is absent in row %r but also present " \
                                "at %r" % \
                                (file_id, entry, previous_path))
                    elif minikind == 'r':
                        target_location = tree_state[1]
                        if previous_path != target_location:
                            raise AssertionError(
                                "file %s relocation in row %r but also at %r" \
                                % (file_id, entry, previous_path))
                    else:
                        # a file, directory, etc - may have been previously
                        # pointed to by a relocation, which must point here
                        if previous_path != this_path:
                            raise AssertionError(
                                "entry %r inconsistent with previous path %r" % \
                                (entry, previous_path))
                        check_valid_parent()
                else:
                    if minikind == 'a':
                        # absent; should not occur anywhere else
                        this_tree_map[file_id] = None
                    elif minikind == 'r':
                        # relocation, must occur at expected location
                        this_tree_map[file_id] = tree_state[1]
                    else:
                        this_tree_map[file_id] = this_path
                        check_valid_parent()

    def _wipe_state(self):
        """Forget all state information about the dirstate."""
        self._header_state = DirState.NOT_IN_MEMORY
        self._dirblock_state = DirState.NOT_IN_MEMORY
        self._parents = []
        self._ghosts = []
        self._dirblocks = []
        self._id_index = None
        self._end_of_header = None
        self._cutoff_time = None
        self._split_path_cache = {}

    def lock_read(self):
        """Acquire a read lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.ReadLock(self._filename)
        self._lock_state = 'r'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def lock_write(self):
        """Acquire a write lock on the dirstate."""
        if self._lock_token is not None:
            raise errors.LockContention(self._lock_token)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._lock_token = lock.WriteLock(self._filename)
        self._lock_state = 'w'
        self._state_file = self._lock_token.f
        self._wipe_state()

    def unlock(self):
        """Drop any locks held on the dirstate."""
        if self._lock_token is None:
            raise errors.LockNotHeld(self)
        # TODO: jam 20070301 Rather than wiping completely, if the blocks are
        #       already in memory, we could read just the header and check for
        #       any modification. If not modified, we can just leave things
        #       alone.
        self._state_file = None
        self._lock_state = None
        self._lock_token.unlock()
        self._lock_token = None
        self._split_path_cache = {}

    def _requires_lock(self):
        """Checks that a lock is currently held by someone on the dirstate."""
        if not self._lock_token:
            raise errors.ObjectNotLocked(self)


def bisect_dirblock(dirblocks, dirname, lo=0, hi=None, cache={}):
    """Return the index where to insert dirname into the dirblocks.

    The return value idx is such that all directory blocks in dirblock[:idx]
    have names < dirname, and all blocks in dirblock[idx:] have names >=
    dirname.

    Optional args lo (default 0) and hi (default len(dirblocks)) bound the
    slice of dirblocks to be searched.
    """
    if hi is None:
        hi = len(dirblocks)
    try:
        dirname_split = cache[dirname]
    except KeyError:
        dirname_split = dirname.split('/')
        cache[dirname] = dirname_split
    while lo < hi:
        mid = (lo + hi) // 2
        # Grab the dirname for the current dirblock
        cur = dirblocks[mid][0]
        try:
            cur_split = cache[cur]
        except KeyError:
            cur_split = cur.split('/')
            cache[cur] = cur_split
        if cur_split < dirname_split:
            lo = mid + 1
        else:
            hi = mid
    return lo
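
# Illustrative usage sketch, not from the original source: finding the
# insertion point for a dirname among already-sorted blocks, skipping the
# two root blocks as the dirstate code does:
#
#   dirblocks = [('', []), ('', []), ('a', []), ('a/b', []), ('b', [])]
#   idx = bisect_dirblock(dirblocks, 'a/b', lo=2)
#   # idx == 3: blocks before index 3 have names < 'a/b' by path components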