Commit Performance Notes
========================

.. contents::

Commit: The Minimum Work Required
---------------------------------

This is Martin Pool's email to the mailing list on the minimum work that
commit needs to do. Be sure to check the mailing list archives
(https://lists.ubuntu.com/archives/bazaar/2007q2/025791.html)
for follow-up comments from Robert and others ...

Here is a description of the minimum work that commit must do. We
want to make sure that our design doesn't cost too much more than this
minimum. I am trying to do this without making too many assumptions
about the underlying storage, but am assuming that the ui and basic
architecture (wt, branch, repo) stays about the same.

The basic purpose of commit is to:

1. create and store a new revision based on the contents of the working tree
2. make this the new basis revision for the working tree

We can do a selected commit of only some files or subtrees.

The best performance we could hope for is:

- stat each versioned selected working file once

- read from the workingtree and write into the repository any new file
  texts

- in general, do work proportional to the size of the shape (eg
  inventory) of the old and new selected trees, and to the total size of
  the modified files

In more detail:

1.0 - Store new file texts: if a versioned file contains a new text
there is no avoiding storing it. To determine which ones have changed
we must go over the workingtree and at least stat each file. If the
file is modified since it was last hashed, it must be read in.
Ideally we would read it only once, and either notice that it has not
changed, or store it at that point.

On the other hand we want new code to be able to handle files that are
larger than will fit in memory. We may then need to read each file up
to two times: once to determine if there is a new text and calculate
its hash, and again to store it.
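
The stat-validate-or-read loop described above can be sketched as
follows; the ``hash_cache`` mapping and the function name are
hypothetical illustrations, not bzrlib APIs:

```python
import hashlib
import os

def detect_changes(paths, hash_cache):
    """Yield (path, sha1) for files whose content changed, reading each
    file at most once.  hash_cache maps path -> (mtime, size, sha1)
    recorded at the previous commit (a hypothetical cache shape)."""
    for path in paths:
        st = os.stat(path)                  # one stat per versioned file
        cached = hash_cache.get(path)
        if cached and cached[0] == st.st_mtime and cached[1] == st.st_size:
            continue                        # stat data validates the cached hash
        with open(path, 'rb') as f:         # modified or uncached: read once
            sha1 = hashlib.sha1(f.read()).hexdigest()
        if cached is None or cached[2] != sha1:
            yield path, sha1                # a genuinely new text to store
        hash_cache[path] = (st.st_mtime, st.st_size, sha1)
```

Note that a file touched but unchanged is read once, noticed to be
identical, and only its cache entry is refreshed.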

1.1 - Store a tree-shape description (ie inventory or similar.) This
describes the non-file objects, and provides a reference from the
Revision to the texts within it.

1.2 - Generate and store a new revision object.

1.3 - Do delta-compression on the stored objects. (git notably does
not do this at commit time, deferring this entirely until later.)
This requires finding the appropriate basis for each modified file: in
the current scheme we get the file id and last-revision from the
dirstate, look into the knit for that text, extract that text in
total, generate a delta, then store that into the knit. Most delta
operations are O(n^2) to O(n^3) in the size of the modified files.
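
A sketch of that extract-and-delta step, with a hypothetical ``knit``
object standing in for the real storage layer and ``difflib`` doing
the (roughly quadratic) matching pass:

```python
import difflib

def store_delta(knit, file_id, new_lines, basis_revision):
    """Knit-style delta storage, sketched against a hypothetical API:
    extract the full basis text, compute a line delta against it, and
    store only the delta.  Both the full extraction and the matching
    pass contribute the superlinear cost noted above."""
    basis_lines = knit.get_text(file_id, basis_revision)   # full extraction
    matcher = difflib.SequenceMatcher(None, basis_lines, new_lines)
    delta = [(op, i1, i2, new_lines[j1:j2])
             for op, i1, i2, j1, j2 in matcher.get_opcodes()
             if op != 'equal']                             # keep only edits
    knit.add_delta(file_id, basis_revision, delta)
    return delta
```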

1.4 - Cache annotation information for the changes: at the moment this
is done as part of the delta storage. There are some flaws in that
approach, such as that it is not updated when ghosts are filled, and
the annotation can't be re-run with new diff parameters.

2.1 - Make the new revision the basis for the tree, and clear the list
of parents. Strictly this is all that's logically necessary, unless
the working tree format requires more work.

The dirstate format does require more work, because it caches the
parent tree data for each file within the working tree data. In
practice this means that every commit rewrites the entire dirstate
file - we could try to avoid rewriting the whole file but this may be
difficult because variable-length data (the last-changed revision id)
is inserted into many rows.

The current dirstate design then seems to mean that any commit of a
single file imposes a cost proportional to the size of the current
workingtree. Maybe there are other benefits that outweigh this.
Alternatively if it was fast enough for operations to always look at
the original storage of the parent trees we could do without the
cache.

2.2 - Record the observed file hashes into the workingtree control
files. For the files that we just committed, we have the information
to store a valid hash cache entry: we know their stat information and
the sha1 of the file contents. This is not strictly necessary to the
speed of commit, but it will be useful later in avoiding reading those
files, and the only cost of doing it now is writing it out.

In fact there are some user interface niceties that complicate this:

3 - Before starting the commit proper, we prompt for a commit message
and in that commit message editor we show a list of the files that
will be committed: basically the output of bzr status. This is
basically the same as the list of changes we detect while storing the
commit, but because the user will sometimes change the tree after
opening the commit editor and expect the final state to be committed I
think we do have to look for changes twice. Since it takes the user a
while to enter a message this is not a big problem as long as both the
status summary and the commit are individually fast.

4 - As the commit proceeds (or after?) we show another status-like
summary. Just printing the names of modified files as they're stored
would be easy. Recording deleted and renamed files or directories is
more work: this can only be done by reference to the primary parent
tree and requires it be read in. Worse, reporting renames requires
searching by id across the entire parent tree. Possibly full
reporting should be a default-off verbose option because it does
require more work beyond the commit itself.

5 - Bazaar currently allows for missing files to be automatically
marked as removed at the time of commit. Leaving aside the ui
consequences, this means that we have to update the working inventory
to mark these files as removed. Since as discussed above we always
have to rewrite the dirstate on commit this is not substantial, though
we should make sure we do this in one pass, not two. I have
previously proposed to make this behaviour a non-default option.

We may need to run hooks or generate signatures during commit, but
they don't seem to have substantial performance consequences.

If one wanted to optimize solely for the speed of commit I think
hash-addressed file-per-text storage like in git (or bzr 0.1) is very
good. Remarkably, it does not need to read the inventory for the
previous revision. For each versioned file, we just need to get its
hash, either by reading the file or validating its stat data. If that
hash is not already in the repository, the file is just copied in and
compressed. As directories are traversed, they're turned into texts
and stored as well, and then finally the revision is too. This does
depend on later doing some delta compression of these texts.
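
A minimal sketch of such a hash-addressed store; the two-level
directory layout here is illustrative only, not git's or bzr 0.1's
actual on-disk format:

```python
import hashlib
import os
import zlib

def store_text(repo_dir, path):
    """Store the file at `path` under its content hash.  Returns
    (sha1, stored): stored is False when the text was already present,
    in which case nothing at all needs to be written."""
    with open(path, 'rb') as f:
        data = f.read()
    sha1 = hashlib.sha1(data).hexdigest()
    target = os.path.join(repo_dir, sha1[:2], sha1[2:])
    if os.path.exists(target):          # text already in the repository
        return sha1, False
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, 'wb') as f:       # copy in, compressed
        f.write(zlib.compress(data))
    return sha1, True
```

Note that no inventory of the previous revision is consulted: presence
of the hash alone decides whether any writing happens.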

Variations on this are possible. Rather than writing a single file
into the repository for each text, we could fold them into a single
collation or pack file. That would create a smaller number of files
in the repository, but looking up a single text would require looking
into their indexes rather than just asking the filesystem.

Rather than using hashes we can use file-id/rev-id pairs as at
present, which has several consequences pro and con.

Commit vs Status
----------------

At first glance, commit simply stores the changes status reports. In fact,
this isn't technically correct: commit considers some files modified that
status does not. The notes below were put together by John Arbash Meinel
and Aaron Bentley in May 2007 to explain the finer details of commit to
Ian Clatworthy. They are recorded here as they are likely to be useful to
others new to Bazaar ...

1) **Unknown files have a different effect.** With --no-strict (the default)
   they have no effect and can be completely ignored. With --strict they
   should cause the commit to abort (so you don't forget to add the two new
   test files that you just created).

2) **Multiple parents.** 'status' always compares 2 trees, typically the
   last-committed tree and the current working tree. 'commit' will compare
   more trees if there has been a merge.

   a) The "last modified" property for files.

      A file may be marked as changed since the last commit, but that
      change may have come in from the merge, and the change could have
      happened several commits back. There are several edge cases to be
      handled here, like if both branches modified the same file, or if
      just one branch modified it.

   b) The trickier case is when a file appears unmodified since last
      commit, but it was modified versus one of the merged branches. I
      believe there are a few ways this can happen, like if a merged
      branch changes a file and then reverts it back (you still update
      the 'last modified' field).

      In general, if both sides disagree on the 'last-modified' flag,
      then you need to generate a new entry pointing 'last-modified' at
      this revision (because you are resolving the differences between
      the 2 parents).
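
The resolution rule in (b) can be sketched as a small decision
function; the names here are hypothetical:

```python
def resolve_last_modified(parent_revisions, text_changed, this_revision):
    """Decide the 'last modified' value for one file entry at commit.
    parent_revisions holds each parent's last-modified value for the
    file.  If the text changed in this commit, or the parents disagree,
    the entry must point at this revision (resolving the difference);
    otherwise the inherited value is kept."""
    distinct = set(parent_revisions)
    if text_changed or len(distinct) > 1:
        return this_revision
    return distinct.pop()
```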

3) **Automatic deletion of 'missing' files.** This is a point that we go
   back and forth on. I think the basic idea is that 'bzr commit' by
   default should abort if it finds a 'missing' file (in case that file was
   renamed rather than deleted), but 'bzr commit --auto' can add unknown
   files and remove missing files automatically.

4) **sha1 for newly added files.** status doesn't really need this: it should
   only care that the file is not present in base, but is present now. In
   some ways commit doesn't care either, since it needs to read and sha the
   file itself anyway.

5) **Nested trees.** status doesn't recurse into nested trees, but commit does.
   This is just because not all of the nested-trees work has been merged yet.

   A tree-reference is considered modified if the subtree has been
   committed since the last containing-tree commit. But commit needs to
   recurse into every subtree, to ensure that a commit is done if the
   subtree has changed since its last commit. _iter_changes only reports
   on tree-references that are modified, so it can't be used for doing
   subtree commits.

Avoiding Work: Smarter Change Detection
---------------------------------------

Commit currently walks through every file building an inventory. Here is
Aaron's brain dump on a better way ...

_iter_changes won't tell us about tree references that haven't changed,
even if those subtrees have changed. (Unless we ask for unchanged
files, which we don't want to do, of course.)

There is an iter_references method, but using it looks just as expensive
as calling kind().

I did some work on updating commit to use iter_changes, but found for
multi-parent trees, I had to fall back to the slow inventory comparison
approach.

Really, I think we need a call akin to iter_changes that handles
multiple parents, and knows to emit entries when InventoryEntry.revision
is all that's changed.

Avoiding Work: Better Layering
------------------------------

For each file, commit is currently doing more work than it should. Here is
John's take on a better way ...

Note that "_iter_changes" *does* have to touch every path on disk, but
it can just do it in a more efficient manner. (It doesn't have to create
an InventoryEntry for all the ones that haven't changed.)

I agree with Aaron that we need something a little different than
_iter_changes. Both because of handling multiple parents, as well as we
don't want it to actually read the files if we have a stat-cache miss.

Specifically, the commit code *has* to read the files because it is
going to add the text to the repository, and we want it to compute the
sha1 at *that* time, so we are guaranteed to have the valid sha (rather
than just whatever the last cached one was). So we want the code to
return 'None' if it doesn't have an up-to-date sha1, rather than reading
the file and computing it, just before it returns it to the parent.
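
That contract can be sketched as a small helper; the ``hash_cache``
shape (path -> (mtime, size, sha1)) is a hypothetical stand-in for the
real stat cache:

```python
import os

def cached_sha1(path, hash_cache):
    """Return the cached sha1 for `path` only if its stat fingerprint
    still matches; return None on any miss, so that the caller (commit)
    reads the file and computes the hash at storage time instead."""
    entry = hash_cache.get(path)
    if entry is None:
        return None
    mtime, size, sha1 = entry
    st = os.stat(path)
    if st.st_mtime == mtime and st.st_size == size:
        return sha1
    return None                     # stale: deliberately do not read here
```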

The commit code (0.16) should really be restructured. Its layering is
pretty wrong.

Specifically, calling "kind()" requires a stat of the file. But we have
to do a stat to get the size/whether the record is up-to-date, etc. So
we really need to have a "create_an_up_to_date_inventory()" function.
But because we are accessing every object on disk, we want to be working
in tuples rather than Inventory objects. And because DirState already
has the parent records next to the current working inventory, it can do
all the work to do really fast comparison and throw-away of unimportant
records.

The way I made "bzr status" fast is by moving the 'ignore this record'
ability as deep into the stack as I could get. Status has the property
that you don't care about most of the records, just like commit. So the
sooner you can stop evaluating the 99% that you don't care about, the
less work you do.
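
The early-pruning idea can be sketched as a generator over flat record
tuples (the tuple shapes here are hypothetical, not DirState's real
layout), dropping the unchanged majority before any richer object is
built for them:

```python
def iter_changed_records(records):
    """Yield only the records that differ between the working tree and
    the parent.  records is an iterable of (path, working_details,
    parent_details) tuples; the common unchanged case is discarded with
    a cheap tuple comparison, so no InventoryEntry-like object is ever
    constructed for it."""
    for path, working, parent in records:
        if working == parent:
            continue            # the 99% case: skip as early as possible
        yield path, working, parent
```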