lines by NL. The field delimiters are omitted in the grammar, line delimiters
are not - this is done for clarity of reading. All string data is in utf8.
MINIKIND = "f" | "d" | "l" | "a" | "r" | "t";
NL = "\n";
NULL = "\0";
WHOLE_NUMBER = {digit}, digit;
BOOLEAN = "y" | "n";
REVISION_ID = a non-empty utf8 string;

dirstate format = header line, full checksum, row count, parent_details,
    ghost_details, entries;
header line = "#bazaar dirstate flat format 3", NL;
full checksum = "crc32: ", ["-"], WHOLE_NUMBER, NL;
row count = "num_entries: ", WHOLE_NUMBER, NL;
parent_details = WHOLE_NUMBER, {REVISION_ID}, NL;
ghost_details = WHOLE_NUMBER, {REVISION_ID}, NL;

entries = {entry};
entry = entry_key, current_entry_details, {parent_entry_details};
entry_key = dirname, basename, fileid;
current_entry_details = common_entry_details, working_entry_details;
parent_entry_details = common_entry_details, history_entry_details;
common_entry_details = MINIKIND, fingerprint, size, executable;
working_entry_details = packed_stat;
history_entry_details = REVISION_ID;
executable = BOOLEAN;
size = WHOLE_NUMBER;
fingerprint = a nonempty utf8 sequence with meaning defined by minikind;
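
As an example, the first few lines of a dirstate file with one parent and no
ghosts could look like this (an illustrative sketch - the checksum value and
revision id are invented, and the NUL field delimiters inside entries are
omitted just as they are in the grammar above)::

    #bazaar dirstate flat format 3
    crc32: 505395243
    num_entries: 2
    1 parent-revision-id
    0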
Given this definition, the following is useful to know::

    entry (aka row) - all the data for a given key.
    entry[0]: The key (dirname, basename, fileid)
    entry[0][0]: dirname
    entry[0][1]: basename
    entry[0][2]: fileid
    entry[1]: The tree(s) data for this path and id combination.
    entry[1][0]: The current tree
    entry[1][1]: The second tree

For an entry in a given tree (using tree 0, the current tree, to
demonstrate)::

    entry[1][0][0]: minikind
    entry[1][0][1]: fingerprint
    entry[1][0][2]: size
    entry[1][0][3]: executable
    entry[1][0][4]: packed_stat

OR (for non tree-0)::

    entry[1][1][4]: revision_id
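
Concretely, an in-memory entry for a file ``a/foo`` present in the working
tree and in one parent tree might look like this (an illustrative sketch - the
file id, sha1, size and packed stat values are all invented)::

    (('a', 'foo', 'foo-id-1234'),              # entry[0], the key
     [('f', 'aaf4c6...sha1...', 30, False,     # entry[1][0], tree 0 details
       'AAABBBCC-packed-stat'),
      ('f', 'aaf4c6...sha1...', 30, False,     # entry[1][1], tree 1 details
       'parent-revision-id')])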
There may be multiple rows at the root, one per id present in the root, so the
in memory root row is now::

    self._dirblocks[0] -> ('', [entry ...]),

and the entries in there are::

    entries[0][0]: ''
    entries[0][1]: ''
    entries[0][2]: file_id
    entries[1][0]: The tree data for the current tree for this fileid at /
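
For instance, a root holding the regular root id plus one extra id would be
represented roughly as follows (a sketch - the ids and the ``*_details``
placeholders are invented)::

    self._dirblocks[0] == ('', [
        (('', '', 'TREE_ROOT'), [tree0_details, tree1_details]),
        (('', '', 'other-root-id'), [tree0_details, tree1_details]),
        ])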
Each entry's minikind has the following meanings::

    'r' is a relocated entry: This path is not present in this tree with this
        id, but the id can be found at another location. The fingerprint is
        used to point to the target location.
    'a' is an absent entry: In that tree the id is not present at this path.
    'd' is a directory entry: This path in this tree is a directory with the
        current file id. There is no fingerprint for directories.
    'f' is a file entry: As for directory, but it's a file. The fingerprint is
        the sha1 value of the file's canonical form, i.e. after any read
        filters have been applied to the convenience form stored in the
        working tree.
    'l' is a symlink entry: As for directory, but a symlink. The fingerprint
        is the link target.
    't' is a reference to a nested subtree; the fingerprint is the referenced
        revision.
The entries on disk and in memory are ordered according to the following keys::
    directory, as a list of components
    filename
    file-id
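
A sketch of an equivalent sort key (illustrative only - the implementation
stores the key pre-split as (dirname, basename, file_id) tuples rather than
computing this on the fly)::

    def _entry_sort_key(entry):
        dirname, basename, file_id = entry[0]
        return (dirname.split('/'), basename, file_id)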
--- Format 1 had the following different definition: ---

rows = dirname, NULL, basename, NULL, MINIKIND, NULL, fileid_utf8, NULL,
    WHOLE_NUMBER (* size *), NULL, packed stat, NULL, sha1|symlink target,
    {PARENT ROW}
PARENT ROW = NULL, revision_utf8, NULL, MINIKIND, NULL, dirname, NULL,
    basename, NULL, WHOLE_NUMBER (* size *), NULL, "y" | "n", NULL,
    SHA1
PARENT ROWs are emitted for every parent that is not in the ghosts details
line. That is, if the parents are foo, bar, baz, and the ghosts are bar, then
each row will have a PARENT ROW for foo and baz, but not for bar.
ERROR_DIRECTORY = 267


if not getattr(struct, '_compile', None):
    # Cannot pre-compile the dirstate pack_stat
    def pack_stat(st, _encode=binascii.b2a_base64, _pack=struct.pack):
        """Convert stat values into a packed representation."""
        return _encode(_pack('>LLLLLL', st.st_size, int(st.st_mtime),
            int(st.st_ctime), st.st_dev, st.st_ino & 0xFFFFFFFF,
            st.st_mode))[:-1]
else:
    # compile the struct compiler we need, so as to only do it once
    from _struct import Struct
    _compiled_pack = Struct('>LLLLLL').pack
    def pack_stat(st, _encode=binascii.b2a_base64, _pack=_compiled_pack):
        """Convert stat values into a packed representation."""
        # jam 20060614 it isn't really worth removing more entries if we
        # are going to leave it in packed form.
        # With only st_mtime and st_mode filesize is 5.5M and read time is 275ms
        # With all entries, filesize is 5.9M and read time is maybe 280ms
        # well within the noise margin

        # base64 encoding always adds a final newline, so strip it off
        # The current version
        return _encode(_pack(st.st_size, int(st.st_mtime), int(st.st_ctime),
            st.st_dev, st.st_ino & 0xFFFFFFFF, st.st_mode))[:-1]
        # This is 0.060s / 1.520s faster by not encoding as much information
        # return _encode(_pack('>LL', int(st.st_mtime), st.st_mode))[:-1]
        # This is not strictly faster than _encode(_pack())[:-1]
        # return '%X.%X.%X.%X.%X.%X' % (
        #     st.st_size, int(st.st_mtime), int(st.st_ctime),
        #     st.st_dev, st.st_ino, st.st_mode)
        # Similar to the _encode(_pack('>LL'))
        # return '%X.%X' % (int(st.st_mtime), st.st_mode)


def _unpack_stat(packed_stat):
    """Turn a packed_stat back into the stat fields.

    This is meant as a debugging tool, should not be used in real code.
    """
    (st_size, st_mtime, st_ctime, st_dev, st_ino,
     st_mode) = struct.unpack('>LLLLLL', binascii.a2b_base64(packed_stat))
    return dict(st_size=st_size, st_mtime=st_mtime, st_ctime=st_ctime,
                st_dev=st_dev, st_ino=st_ino, st_mode=st_mode)
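
# Illustration (not part of the original module; assumes the module-level
# ``os`` import): a round trip through pack_stat and _unpack_stat. st_ino is
# truncated to 32 bits and the trailing base64 newline is stripped, so the
# packed form is a single short ASCII token suitable for one dirstate field.
def _demo_pack_stat_roundtrip(path='.'):
    """Sketch: pack a stat result, then unpack it for inspection."""
    st = os.stat(path)
    packed = pack_stat(st)
    # packed is e.g. 'AAAQAE3vWBhN71gYAAAD7gAJQpIAAEHt' (value invented)
    return packed, _unpack_stat(packed)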
class SHA1Provider(object):
    """An interface for getting sha1s of a file."""
        self._last_block_index = None
        self._last_entry_index = None
        # The set of known hash changes
        self._known_hash_changes = set()
        # How many hash changed entries can we have without saving
        self._worth_saving_limit = worth_saving_limit
        self._config_stack = config.LocationStack(urlutils.local_path_to_url(
            path))

    def __repr__(self):
        return "%s(%r)" % \
            (self.__class__.__name__, self._filename)
    def _mark_modified(self, hash_changed_entries=None, header_modified=False):
        """Mark this dirstate as modified.

        :param hash_changed_entries: if non-None, mark just these entries as
            having their hash modified.
        :param header_modified: mark the header modified as well, not just the
            dirblocks.
        """
        #trace.mutter_callsite(3, "modified hash entries: %s", hash_changed_entries)
        if hash_changed_entries:
            self._known_hash_changes.update([e[0] for e in hash_changed_entries])
            if self._dirblock_state in (DirState.NOT_IN_MEMORY,
                                        DirState.IN_MEMORY_UNMODIFIED):
                # If the dirstate is already marked as IN_MEMORY_MODIFIED, then
                # that takes precedence.
                self._dirblock_state = DirState.IN_MEMORY_HASH_MODIFIED
        else:
            # TODO: Since we now have a IN_MEMORY_HASH_MODIFIED state, we
            #       should fail noisily if someone tries to set
            #       IN_MEMORY_MODIFIED but we don't have a write-lock!
            # We don't know exactly what changed so disable smart saving
            self._dirblock_state = DirState.IN_MEMORY_MODIFIED
        if header_modified:
            self._header_state = DirState.IN_MEMORY_MODIFIED

    def _mark_unmodified(self):
        """Mark this dirstate as unmodified."""
        self._header_state = DirState.IN_MEMORY_UNMODIFIED
        self._dirblock_state = DirState.IN_MEMORY_UNMODIFIED
        self._known_hash_changes = set()
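
    # Informal summary of the transitions above (added commentary, not from
    # the original source):
    #   _mark_modified(hash_changed_entries=...) upgrades NOT_IN_MEMORY or
    #       IN_MEMORY_UNMODIFIED dirblocks to IN_MEMORY_HASH_MODIFIED, but
    #       never downgrades an existing IN_MEMORY_MODIFIED state.
    #   _mark_modified() with no entries always sets IN_MEMORY_MODIFIED.
    #   _mark_unmodified() resets both states and clears the known hash
    #       changes.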
    def add(self, path, file_id, kind, stat, fingerprint):
        """Add a path to be tracked.
                if basename_utf8:
                    parents.add((dirname_utf8, inv_entry.parent_id))
            if old_path is None:
                old_path_utf8 = None
            else:
                old_path_utf8 = encode(old_path)
            if old_path is None:
                adds.append((None, new_path_utf8, file_id,
                    inv_to_entry(inv_entry), True))
                new_ids.add(file_id)
            elif new_path is None:
                deletes.append((old_path_utf8, None, file_id, None, True))
            elif (old_path, new_path) == root_only:
                # change things in-place
                # Note: the case of a parent directory changing its file_id
                #       tends to break optimizations here, because officially
                #       the file has actually been moved, it just happens to
                #       end up at the same path. If we can figure out how to
                #       handle that case, we can avoid a lot of add+delete
                #       pairs for objects that stay put.
                # elif old_path == new_path:
                changes.append((old_path_utf8, new_path_utf8, file_id,
                                inv_to_entry(inv_entry)))
            else:
                # Because renames must preserve their children we must have
                # processed all relocations and removes before hand. The sort
                # order ensures we've examined the child paths.
                self._update_basis_apply_deletes(deletes)
                deletes = []
                # Split into an add/delete pair recursively.
                adds.append((old_path_utf8, new_path_utf8, file_id,
                             inv_to_entry(inv_entry), False))
                # Expunge deletes that we've seen so that deleted/renamed
                # children of a rename directory are handled correctly.
                new_deletes = reversed(list(
                    self._iter_child_entries(1, old_path_utf8)))
                # Remove the current contents of the tree at orig_path, and
                # reinsert at the correct new path.
                for entry in new_deletes:
                    child_dirname, child_basename, child_file_id = entry[0]
                    if child_dirname:
                        source_path = child_dirname + '/' + child_basename
                    else:
                        source_path = child_basename
                    if new_path_utf8:
                        target_path = new_path_utf8 + source_path[len(old_path):]
                    else:
                        if old_path == '':
                            raise AssertionError("cannot rename directory to"
                                                 " itself")
                        target_path = source_path[len(old_path) + 1:]
                    adds.append((None, target_path, entry[0][2], entry[1][1], False))
                    deletes.append(
                        (source_path, target_path, entry[0][2], None, False))
                deletes.append((old_path_utf8, new_path, file_id, None, False))
        self._check_delta_ids_absent(new_ids, delta, 1)

        # Finish expunging deletes/first half of renames.
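
        # Added illustration (hypothetical paths and ids): each tuple built
        # above has the shape (old_path_utf8, new_path_utf8, file_id,
        # entry_details, real_op). Renaming directory 'a' (containing 'a/c')
        # to 'b' classifies roughly as:
        #   adds    = [('a', 'b', 'dir-id', <dir details>, False),
        #              (None, 'b/c', 'file-id', <file details>, False)]
        #   deletes = [('a/c', 'b/c', 'file-id', None, False),
        #              ('a', 'b', 'dir-id', None, False)]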
    def _update_basis_apply_adds(self, adds):
        """Apply a sequence of adds to tree 1 during update_basis_by_delta."""
        # Adds are accumulated partly from renames, so can be in any input
        # order - sort it.
        # TODO: we may want to sort in dirblocks order. That way each entry
        #       will end up in the same directory, allowing the _get_entry
        #       fast-path for looking up 2 items in the same dir work.
        adds.sort(key=lambda x: x[1])
        # adds is now in lexicographic order, which places all parents before
        # their children, so we can process it linearly.
        st = static_tuple.StaticTuple
        for old_path, new_path, file_id, new_details, real_add in adds:
            dirname, basename = osutils.split(new_path)
            entry_key = st(dirname, basename, file_id)
            block_index, present = self._find_block_index_from_key(entry_key)
            if not present:
                self._raise_invalid(new_path, file_id,
                    "Unable to find block for this record."
                    " Was the parent added?")
            block = self._dirblocks[block_index][1]
            entry_index, present = self._find_entry_index(entry_key, block)
            if real_add:
                if old_path is not None:
                    self._raise_invalid(new_path, file_id,
                        'considered a real add but still had old_path at %s'
                        % (old_path,))
            if present:
                entry = block[entry_index]
                basis_kind = entry[1][1][0]
                if basis_kind == 'a':
                    entry[1][1] = new_details
                elif basis_kind == 'r':
                    raise NotImplementedError()
                else:
                    self._raise_invalid(new_path, file_id,
                        "An entry was marked as a new add"
                        " but the basis target already existed")
            else:
                # The exact key was not found in the block. However, we need to
                # check if there is a key next to us that would have matched.
                # We only need to check 2 locations, because there are only 2
                # entries that can match.
                for maybe_index in range(entry_index-1, entry_index+1):
                    if maybe_index < 0 or maybe_index >= len(block):
                        continue
                    maybe_entry = block[maybe_index]
                    if maybe_entry[0][:2] != (dirname, basename):
                        # Just a random neighbor
                        continue
                    if maybe_entry[0][2] == file_id:
                        raise AssertionError(
                            '_find_entry_index didnt find a key match'
                            ' but walking the data did, for %s'
                            % (entry_key,))
                    basis_kind = maybe_entry[1][1][0]
                    if basis_kind not in 'ar':
                        self._raise_invalid(new_path, file_id,
                            "we have an add record for path, but the path"
                            " is already present with another file_id %s"
                            % (maybe_entry[0][2],))
                entry = (entry_key, [DirState.NULL_PARENT_DETAILS,
                                     new_details])
                block.insert(entry_index, entry)

            active_kind = entry[1][0][0]
            if active_kind == 'a':
                # The active record shows up as absent, this could be genuine,
                # or it could be present at some other location. We need to
                # verify.
                id_index = self._get_id_index()
                # The id_index may not be perfectly accurate for tree1, because
                # we haven't been keeping it updated. However, it should be
                # fine for tree0, and that gives us enough info for what we
                # need.
                keys = id_index.get(file_id, ())
                for key in keys:
                    block_i, entry_i, d_present, f_present = \
                        self._get_block_entry_index(key[0], key[1], 0)
                    if not f_present:
                        continue
                    active_entry = self._dirblocks[block_i][1][entry_i]
                    if (active_entry[0][2] != file_id):
                        # Some other file is at this path, we don't need to
                        # link it.
                        continue
                    real_active_kind = active_entry[1][0][0]
                    if real_active_kind in 'ar':
                        # We found a record, which was not *this* record,
                        # which matches the file_id, but is not actually
                        # present. Something seems *really* wrong.
                        self._raise_invalid(new_path, file_id,
                            "We found a tree0 entry that doesn't make sense")
                    # Now, we've found a tree0 entry which matches the file_id
                    # but is at a different location. So update them to be
                    # rename records.
                    active_dir, active_name = active_entry[0][:2]
                    if active_dir:
                        active_path = active_dir + '/' + active_name
                    else:
                        active_path = active_name
                    active_entry[1][1] = st('r', new_path, 0, False, '')
                    entry[1][0] = st('r', active_path, 0, False, '')
            elif active_kind == 'r':
                raise NotImplementedError()

            new_kind = new_details[0]
            if new_kind == 'd':
                self._ensure_block(block_index, entry_index, new_path)
    def _update_basis_apply_changes(self, changes):
        """Apply a sequence of changes to tree 1 during update_basis_by_delta.
    def _update_basis_apply_deletes(self, deletes):
        """Apply a sequence of deletes to tree 1 during update_basis_by_delta."""
        null = DirState.NULL_PARENT_DETAILS
        for old_path, new_path, file_id, _, real_delete in deletes:
            if real_delete != (new_path is None):
                self._raise_invalid(old_path, file_id, "bad delete delta")
            # the entry for this file_id must be in tree 1.
            dirname, basename = osutils.split(old_path)
            block_index, entry_index, dir_present, file_present = \
                self._get_block_entry_index(dirname, basename, 1)
            if not file_present:
                self._raise_invalid(old_path, file_id,
                    'basis tree does not contain removed entry')
            entry = self._dirblocks[block_index][1][entry_index]
            # The state of the entry in the 'active' WT
            active_kind = entry[1][0][0]
            if entry[0][2] != file_id:
                self._raise_invalid(old_path, file_id,
                    'mismatched file_id in tree 1')
            old_kind = entry[1][1][0]
            if active_kind in 'ar':
                # The active tree doesn't have this file_id.
                # The basis tree is changing this record. If this is a
                # rename, then we don't want the record here at all
                # anymore. If it is just an in-place change, we want the
                # record here, but we'll add it if we need to. So we just
                # delete it
                if active_kind == 'r':
                    active_path = entry[1][0][1]
                    active_entry = self._get_entry(0, file_id, active_path)
                    if active_entry[1][1][0] != 'r':
                        self._raise_invalid(old_path, file_id,
                            "Dirstate did not have matching rename entries")
                    elif active_entry[1][0][0] in 'ar':
                        self._raise_invalid(old_path, file_id,
                            "Dirstate had a rename pointing at an inactive"
                            " tree0")
                    active_entry[1][1] = null
                del self._dirblocks[block_index][1][entry_index]
                if old_kind == 'd':
                    # This was a directory, and the active tree says it
                    # doesn't exist, and now the basis tree says it doesn't
                    # exist. Remove its dirblock if present
                    (dir_block_index,
                     present) = self._find_block_index_from_key(
                        (old_path, '', ''))
                    if present:
                        dir_block = self._dirblocks[dir_block_index][1]
                        if not dir_block:
                            # This entry is empty, go ahead and just remove it
                            del self._dirblocks[dir_block_index]
            else:
                # There is still an active record, so just mark this
                # removed.
                entry[1][1] = null
                block_i, entry_i, d_present, f_present = \
                    self._get_block_entry_index(old_path, '', 1)
                if d_present:
                    dir_block = self._dirblocks[block_i][1]
                    for child_entry in dir_block:
                        child_basis_kind = child_entry[1][1][0]
                        if child_basis_kind not in 'ar':
                            self._raise_invalid(old_path, file_id,
                                "The file id was deleted but its children were "
                                "not deleted.")
    def _after_delta_check_parents(self, parents, index):
        """Check that parents required by the delta are all intact.
            trace.mutter('Not saving DirState because '
                         '_changes_aborted is set.')
            return
        # TODO: Since we now distinguish IN_MEMORY_MODIFIED from
        #       IN_MEMORY_HASH_MODIFIED, we should only fail quietly if we fail
        #       to save an IN_MEMORY_HASH_MODIFIED, and fail *noisily* if we
        #       fail to save IN_MEMORY_MODIFIED
        if not self._worth_saving():
            return

        grabbed_write_lock = False
        if self._lock_state != 'w':
            grabbed_write_lock, new_lock = self._lock_token.temporary_write_lock()
            # Switch over to the new lock, as the old one may be closed.
            # TODO: jam 20070315 We should validate the disk file has
            #       not changed contents, since temporary_write_lock may
            #       not be an atomic operation.
            self._lock_token = new_lock
            self._state_file = new_lock.f
            if not grabbed_write_lock:
                # We couldn't grab a write lock, so we switch back to a read one
                return
        try:
            lines = self.get_lines()
            self._state_file.seek(0)
            self._state_file.writelines(lines)
            self._state_file.truncate()
            self._state_file.flush()
            self._maybe_fdatasync()
            self._mark_unmodified()
        finally:
            if grabbed_write_lock:
                self._lock_token = self._lock_token.restore_read_lock()
                self._state_file = self._lock_token.f
                # TODO: jam 20070315 We should validate the disk file has
                #       not changed contents. Since restore_read_lock may
                #       not be an atomic operation.
    def _maybe_fdatasync(self):
        """Flush to disk if possible and if not configured off."""
        if self._config_stack.get('dirstate.fdatasync'):
            osutils.fdatasync(self._state_file.fileno())

    def _worth_saving(self):
        """Is it worth saving the dirstate or not?"""
        if (self._header_state == DirState.IN_MEMORY_MODIFIED
            or self._dirblock_state == DirState.IN_MEMORY_MODIFIED):
            return True
        if self._dirblock_state == DirState.IN_MEMORY_HASH_MODIFIED:
            if self._worth_saving_limit == -1:
                # We never save hash changes when the limit is -1
                return False
            # If we're using smart saving and only a small number of
            # entries have changed their hash, don't bother saving. John has
            # suggested using a heuristic here based on the size of the
            # changed files and/or tree. For now, we go with a configurable
            # number of changes, keeping the calculation time
            # as low overhead as possible. (This also keeps all existing
            # tests passing as the default is 0, i.e. always save.)
            if len(self._known_hash_changes) >= self._worth_saving_limit:
                return True
        return False
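
    # Illustration (hypothetical numbers): with _worth_saving_limit == 10 and
    # only hash-cache updates in memory, a state of
    #   _dirblock_state == IN_MEMORY_HASH_MODIFIED
    #   len(_known_hash_changes) == 3
    # makes _worth_saving() return False (3 < 10), so save() skips the write.
    # Any structural change sets IN_MEMORY_MODIFIED and forces the write
    # regardless of the limit.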
    def _set_data(self, parent_ids, dirblocks):
        """Set the full dirstate data in memory.