=======================
Guide to Testing Bazaar
=======================

.. contents::

Testing Bazaar
##############

The Importance of Testing
=========================

Reliability is a critical success factor for any Version Control System.
We want Bazaar to be highly reliable across multiple platforms while
evolving over time to meet the needs of its community.

In a nutshell, this is what we expect and encourage:

* New functionality should have test cases. Preferably write the
  test before writing the code.

  In general, you can test at either the command-line level or the
  internal API level. See `Writing tests`_ below for more detail.

* Try to practice Test-Driven Development: before fixing a bug, write a
  test case so that it does not regress. Similarly for adding a new
  feature: write a test case for a small version of the new feature before
  starting on the code itself. Check the test fails on the old code, then
  add the feature or fix and check it passes.

By doing these things, the Bazaar team gets increased confidence that
changes do what they claim to do, whether provided by the core team or
by community members. Equally importantly, we can be surer that changes
down the track do not break new features or bug fixes that you are
contributing today.

As of May 2008, Bazaar ships with a test suite containing over 12000 tests
and growing. We are proud of it and want to remain so. As community
members, we all benefit from it. Would you trust version control on
your project to a product *without* a test suite like Bazaar has?

Running the Test Suite
======================

Currently, ``bzr selftest`` is used to invoke tests.
You can provide a pattern argument to run a subset. For example,
to run just the blackbox tests, run::

    ./bzr selftest -v blackbox

To skip a particular test (or set of tests), use the ``--exclude`` option
(shorthand ``-x``) like so::

    ./bzr selftest -v -x blackbox

To ensure that all tests are being run and succeeding, you can use the
``--strict`` option, which will fail if there are any missing features or
known failures, like so::

    ./bzr selftest --strict

To list tests without running them, use the ``--list-only`` option like so::

    ./bzr selftest --list-only

This option can be combined with other selftest options (like ``-x``) and
filter patterns to understand their effect.

Once you understand how to create a list of tests, you can use the
``--load-list`` option to run only a restricted set of tests that you kept
in a file, one test id per line. Keep in mind that this will never be
sufficient to validate your modifications; you still need to run the full
test suite for that. But using it can help in some cases (like running
only the failed tests for some time)::

    ./bzr selftest --load-list my_failing_tests

This option can also be combined with other selftest options, including
patterns. It has some drawbacks though: the list can become out of date
pretty quickly when doing Test-Driven Development.

To address this concern, there is another way to run a restricted set of
tests: the ``--starting-with`` option will run only the tests whose name
starts with the specified string. It will also avoid loading the other
tests and as a consequence starts running your tests quicker::

    ./bzr selftest --starting-with bzrlib.blackbox

This option can be combined with all the other selftest options, including
``--load-list``. The latter is rarely used, but allows you to run, for
example, a subset of a list of failing tests.


Test suite debug flags
----------------------

Similar to the global ``-Dfoo`` debug options, bzr selftest accepts
``-E=foo`` debug flags. These flags are:

:allow_debug: do *not* clear the global debug flags when running a test.
    This can provide useful logging to help debug test failures when used
    with e.g. ``bzr -Dhpss selftest -E=allow_debug``

Writing Tests
=============

Where should I put a new test?
------------------------------

Bzrlib's tests are organised by the type of test. Most of the tests in
bzr's test suite belong to one of these categories:

- Unit tests
- Blackbox (UI) tests
- Per-implementation tests
- Doctests

A quick description of these test types and where they belong in bzrlib's
source follows. Not all tests fall neatly into one of these categories;
in those cases use your judgement.


Unit tests
~~~~~~~~~~

Unit tests make up the bulk of our test suite. These are tests that are
focused on exercising a single, specific unit of the code as directly
as possible. Each unit test is generally fairly short and runs very
quickly.

They are found in ``bzrlib/tests/test_*.py``. So in general tests should
be placed in a file named test_FOO.py where FOO is the logical thing under
test.

For example, tests for merge3 in bzrlib belong in
bzrlib/tests/test_merge3.py. See bzrlib/tests/test_sampler.py for a
template test script.

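A new unit test module in this layout might start like the sketch below.
Plain ``unittest`` assertions stand in for the richer
``bzrlib.tests.TestCase`` base class, and ``intersect`` is a made-up
function defined inline so that the example is self-contained; a real test
file would import the code under test instead:

```python
import unittest

# A real test file would import the unit under test, e.g.
# ``from bzrlib import merge3``; a trivial stand-in is defined here.
def intersect(ra, rb):
    """Return the overlap of two (start, end) ranges, or None."""
    start, end = max(ra[0], rb[0]), min(ra[1], rb[1])
    return (start, end) if start < end else None

class TestIntersect(unittest.TestCase):

    def test_overlap(self):
        self.assertEqual(intersect((0, 10), (5, 15)), (5, 10))

    def test_disjoint(self):
        self.assertEqual(intersect((0, 3), (5, 8)), None)

# Run the tests programmatically so the example is easy to experiment with.
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TestIntersect).run(result)
```
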
Blackbox (UI) tests
~~~~~~~~~~~~~~~~~~~

Tests can be written for the UI or for individual areas of the library.
Choose whichever is appropriate: if adding a new command, or a new command
option, then you should be writing a UI test. If you are both adding UI
functionality and library functionality, you will want to write tests for
both the UI and the core behaviours. We call UI tests 'blackbox' tests
and they belong in ``bzrlib/tests/blackbox/*.py``.

When writing blackbox tests please honour the following conventions:

1. Place the tests for the command 'name' in
   bzrlib/tests/blackbox/test_name.py. This makes it easy for developers
   to locate the test script for a faulty command.

2. Use the 'self.run_bzr("name")' utility function to invoke the command
   rather than running bzr in a subprocess or invoking the
   cmd_object.run() method directly. This is a lot faster than
   subprocesses and generates the same logging output as running it in a
   subprocess (which invoking the method directly does not).

3. Only test the one command in a single test script. Use the bzrlib
   library when setting up tests and when evaluating the side-effects of
   the command. We do this so that the library api has continual pressure
   on it to be as functional as the command line in a simple manner, and
   to isolate knock-on effects throughout the blackbox test suite when a
   command changes its name or signature. Ideally only the tests for a
   given command are affected when a given command is changed.

4. If you have a test which does actually require running bzr in a
   subprocess you can use ``run_bzr_subprocess``. By default the spawned
   process will not load plugins unless ``--allow-plugins`` is supplied.

Per-implementation tests
~~~~~~~~~~~~~~~~~~~~~~~~

Per-implementation tests are tests that are defined once and then run
against multiple implementations of an interface. For example,
``test_transport_implementations.py`` defines tests that all Transport
implementations (local filesystem, HTTP, and so on) must pass.

They are found in ``bzrlib/tests/*_implementations/test_*.py``,
``bzrlib/tests/per_*/*.py``, and
``bzrlib/tests/test_*_implementations.py``.

These are really a sub-category of unit tests, but an important one.

Doctests
~~~~~~~~

We make selective use of doctests__. In general they should provide
*examples* within the API documentation which can incidentally be tested. We
don't try to test every important case using doctests |--| regular Python
tests are generally a better solution. That is, we just use doctests to
make our documentation testable, rather than as a way to make tests.

Most of these are in ``bzrlib/doc/api``. More additions are welcome.

__ http://docs.python.org/lib/module-doctest.html

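A testable API example in this style might look like the following
self-contained sketch; ``split_lines`` is invented here purely for
illustration (real examples live in ``bzrlib/doc/api``):

```python
import doctest

def split_lines(text):
    r"""Split text into lines, keeping the line endings.

    >>> split_lines('one\ntwo\n')
    ['one\n', 'two\n']
    >>> split_lines('')
    []
    """
    return text.splitlines(True)

# Find and check the examples embedded in the docstring above.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(split_lines):
    runner.run(test)
```
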
.. Effort tests
.. ~~~~~~~~~~~~


Skipping tests
--------------

In our enhancements to unittest we allow for some additional results beyond
just success or failure.

If a test can't be run, it can say that it's skipped by raising a special
exception. This is typically used in parameterized tests |--| for example,
if a transport doesn't support setting permissions, we'll skip the tests
that relate to that::

    try:
        return self.branch_format.initialize(repo.bzrdir)
    except errors.UninitializableFormat:
        raise tests.TestSkipped('Uninitializable branch format')

Raising TestSkipped is a good idea when you want to make it clear that the
test was not run, rather than just returning, which makes it look as if it
was run and passed.

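The standard library's ``unittest`` expresses the same idea with
``SkipTest``, as in this self-contained sketch (``SkipTest`` plays the
role of bzrlib's ``TestSkipped`` here, and the capability flag is faked
for illustration):

```python
import unittest

class TestPermissions(unittest.TestCase):

    def test_set_mode(self):
        supports_chmod = False  # stands in for a real capability probe
        if not supports_chmod:
            raise unittest.SkipTest('transport does not support chmod')
        # ... the real permission assertions would go here ...

# A skipped test is recorded as skipped, not as a silent pass.
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TestPermissions).run(result)
```
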
Several different cases are distinguished:

TestSkipped
    Generic skip; the only type that was present up to bzr 0.18.

TestNotApplicable
    The test doesn't apply to the parameters with which it was run.
    This is typically used when the test is being applied to all
    implementations of an interface, but some aspects of the interface
    are optional and not present in particular concrete
    implementations. (Some tests that should raise this currently
    either silently return or raise TestSkipped.) Another option is
    to use more precise parameterization to avoid generating the test
    at all.

UnavailableFeature
    The test can't be run because a dependency (typically a Python
    library) is not available in the test environment. These
    are in general things that the person running the test could fix
    by installing the library. It's OK if some of these occur when
    an end user runs the tests or if we're specifically testing in a
    limited environment, but a full test should never see them.

    See `Test feature dependencies`_ below.

KnownFailure
    The test exists but is known to fail, for example this might be
    appropriate to raise if you've committed a test for a bug but not
    the fix for it, or if something works on Unix but not on Windows.

    Raising this allows you to distinguish these failures from the
    ones that are not expected to fail. If the test would fail
    because of something we don't expect or intend to fix,
    KnownFailure is not appropriate, and TestNotApplicable might be
    better.

    KnownFailure should be used with care as we don't want a
    proliferation of quietly broken tests.

We plan to support three modes for running the test suite to control the
interpretation of these results. Strict mode is for use in situations
like merges to the mainline and releases where we want to make sure that
everything that can be tested has been tested. Lax mode is for use by
developers who want to temporarily tolerate some known failures. The
default behaviour is obtained by ``bzr selftest`` with no options, and
also (if possible) by running under another unittest harness.

======================= ======= ======= ========
result                  strict  default lax
======================= ======= ======= ========
TestSkipped             pass    pass    pass
TestNotApplicable       pass    pass    pass
UnavailableFeature      fail    pass    pass
KnownFailure            fail    pass    pass
======================= ======= ======= ========

Test feature dependencies
-------------------------

Writing tests that require a feature
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Rather than manually checking the environment in each test, a test class
can declare its dependence on some test features. The feature objects are
checked only once for each run of the whole test suite.

(For historical reasons, as of May 2007 many cases that should depend on
features currently raise TestSkipped.)

For example::

    class TestStrace(TestCaseWithTransport):

        _test_needs_features = [StraceFeature]

This means all tests in this class need the feature. If the feature is
not available the test will be skipped using UnavailableFeature.

Individual tests can also require a feature using the ``requireFeature``
method::

    self.requireFeature(StraceFeature)

Features already defined in bzrlib.tests include:

- SymlinkFeature,
- HardlinkFeature,
- OsFifoFeature,
- UnicodeFilenameFeature,
- FTPServerFeature, and
- CaseInsensitiveFilesystemFeature.


Defining a new feature that tests can require
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

New features for use with ``_test_needs_features`` or ``requireFeature``
are defined by subclassing ``bzrlib.tests.Feature`` and overriding the
``_probe`` and ``feature_name`` methods. For example::

    class _SymlinkFeature(Feature):

        def _probe(self):
            return osutils.has_symlinks()

        def feature_name(self):
            return 'symlinks'

    SymlinkFeature = _SymlinkFeature()

Testing exceptions and errors
-----------------------------

It's important to test handling of errors and exceptions. Because this
code is often not hit in ad-hoc testing it can often have hidden bugs --
it's particularly common to get NameError because the exception code
references a variable that has since been renamed.

.. TODO: Something about how to provoke errors in the right way?

In general we want to test errors at two levels:

1. A test in ``test_errors.py`` checking that when the exception object is
   constructed with known parameters it produces an expected string form.
   This guards against mistakes in writing the format string, or in the
   ``str`` representations of its parameters. There should be one for
   each exception class.

2. Tests that when an api is called in a particular situation, it raises
   an error of the expected class. You should typically use
   ``assertRaises``, which in the Bazaar test suite returns the exception
   object to allow you to examine its parameters.

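Both levels can be sketched with the standard library's ``unittest``. The
``NotBranchError`` class and ``open_branch`` function below are invented
stand-ins, and the stdlib's ``assertRaises`` context manager is used where
bzrlib's ``assertRaises`` would simply return the exception object:

```python
import unittest

class NotBranchError(Exception):
    """An invented bzrlib-style error with a format string."""

    _fmt = 'Not a branch: %(path)s'

    def __init__(self, path):
        self.path = path

    def __str__(self):
        return self._fmt % {'path': self.path}

def open_branch(path):
    # An invented api call that always fails, for illustration.
    raise NotBranchError(path)

class TestErrors(unittest.TestCase):

    def test_not_branch_error_str(self):
        # Level 1: known parameters produce the expected string form.
        self.assertEqual(str(NotBranchError('/tmp/x')),
                         'Not a branch: /tmp/x')

    def test_open_branch_raises(self):
        # Level 2: the api raises the expected class, and the exception
        # object can then be examined.
        with self.assertRaises(NotBranchError) as cm:
            open_branch('/tmp/x')
        self.assertEqual(cm.exception.path, '/tmp/x')

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TestErrors).run(result)
```
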
In some cases blackbox tests will also want to check error reporting. But
it can be difficult to provoke every error through the command-line
interface, so those tests are only done as needed |--| e.g. in response to
a particular bug, or if the error is reported in an unusual way. Blackbox
tests should mostly be testing how the command-line interface works, so
should only test errors if there is something particular to the cli in how
they're displayed or handled.

Testing warnings
----------------

The Python ``warnings`` module is used to indicate a non-fatal code
problem. Code that's expected to raise a warning can be tested through
``callCatchWarnings``.

The test suite can be run with ``-Werror`` to check no unexpected errors
occur.

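With the standard library alone, the same kind of check can be written
using ``warnings.catch_warnings``, which records warnings instead of
printing them; ``deprecated_api`` below is an invented example function,
and ``callCatchWarnings`` fills the same role inside bzrlib's test suite:

```python
import warnings

def deprecated_api():
    # An invented function that warns and still returns a result.
    warnings.warn('deprecated_api is deprecated', DeprecationWarning)
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')  # ensure the warning is not suppressed
    value = deprecated_api()
```
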
However, warnings should be used with discretion. It's not an appropriate
way to give messages to the user, because the warning is normally shown
only once per source line that causes the problem. You should also think
about whether the warning is serious enough that it should be visible to
users who may not be able to fix it.


Interface implementation testing and test scenarios
---------------------------------------------------

There are several cases in Bazaar of multiple implementations of a common
conceptual interface. ("Conceptual" because it's not necessary for all
the implementations to share a base class, though they often do.)
Examples include transports and the working tree, branch and repository
classes.

In these cases we want to make sure that every implementation correctly
fulfils the interface requirements. For example, every Transport should
support the ``has()`` and ``get()`` and ``clone()`` methods. We have a
sub-suite of tests in ``test_transport_implementations``. (Most
per-implementation tests are in submodules of ``bzrlib.tests``, but not
the transport tests at the moment.)

These tests are repeated for each registered Transport, by generating a
new TestCase instance for the cross product of test methods and transport
implementations. As each test runs, it has ``transport_class`` and
``transport_server`` set to the class it should test. Most tests don't
access these directly, but rather use ``self.get_transport`` which returns
a transport of the appropriate type.

The goal is to run per-implementation only the tests that relate to that
particular interface. Sometimes we discover a bug elsewhere that happens
with only one particular transport. Once it's isolated, we can consider
whether a test should be added for that particular implementation,
or for all implementations of the interface.

The multiplication of tests for different implementations is normally
accomplished by overriding the ``load_tests`` function used to load tests
from a module. This function typically loads all the tests, then applies
a TestProviderAdapter to them, which generates a longer suite containing
all the test variations.

See also `Per-implementation tests`_ (above).

Test scenarios
--------------

Some utilities are provided for generating variations of tests. This can
be used for per-implementation tests, or other cases where the same test
code needs to run several times on different scenarios.

The general approach is to define a class that provides test methods,
which depend on attributes of the test object being pre-set with the
values to which the test should be applied. The test suite should then
also provide a list of scenarios in which to run the tests.

Typically ``multiply_tests_from_modules`` should be called from the test
module's ``load_tests`` function.

Test support
------------

We have a rich collection of tools to support writing tests. Please use
them in preference to ad-hoc solutions as they provide portability and
performance benefits.


TestCase and its subclasses
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``bzrlib.tests`` module defines many TestCase classes to help you
write your tests.

TestCase
    A base TestCase that extends the Python standard library's
    TestCase in several ways. It adds more assertion methods (e.g.
    ``assertContainsRe``), ``addCleanup``, and other features (see its API
    docs for details). It also has a ``setUp`` that makes sure that
    global state like registered hooks and loggers won't interfere with
    your test. All tests should use this base class (whether directly or
    via a subclass).

TestCaseWithMemoryTransport
    Extends TestCase and adds methods like ``get_transport``,
    ``make_branch`` and ``make_branch_builder``. The files created are
    stored in a MemoryTransport that is discarded at the end of the test.
    This class is good for tests that need to make branches or use
    transports, but that don't require storing things on disk. All tests
    that create bzrdirs should use this base class (either directly or via
    a subclass) as it ensures that the test won't accidentally operate on
    real branches in your filesystem.

TestCaseInTempDir
    Extends TestCaseWithMemoryTransport. For tests that really do need
    files to be stored on disk, e.g. because a subprocess uses a file, or
    for testing functionality that accesses the filesystem directly rather
    than via the Transport layer (such as dirstate).

TestCaseWithTransport
    Extends TestCaseInTempDir. Provides ``get_url`` and
    ``get_readonly_url`` facilities. Subclasses can control the
    transports used by setting ``vfs_transport_factory``,
    ``transport_server`` and/or ``transport_readonly_server``.

See the API docs for more details.

BranchBuilder
~~~~~~~~~~~~~

When writing a test for a feature, it is often necessary to set up a
branch with a certain history. The ``BranchBuilder`` interface allows the
creation of test branches in a quick and easy manner. Here's a sample
session::

    builder = self.make_branch_builder('relpath')
    builder.build_commit()
    builder.build_commit()
    builder.build_commit()
    branch = builder.get_branch()

``make_branch_builder`` is a method of ``TestCaseWithMemoryTransport``.

Note that many current tests create test branches by inheriting from
``TestCaseWithTransport`` and using the ``make_branch_and_tree`` helper to
give them a ``WorkingTree`` that they can commit to. However, using the
newer ``make_branch_builder`` helper is preferred, because it can build
the changes in memory, rather than on disk. Tests that are explicitly
testing how we work with disk objects should, of course, use a real
``WorkingTree``.

Please see bzrlib.branchbuilder for more details.

TreeBuilder
~~~~~~~~~~~

The ``TreeBuilder`` interface allows the construction of arbitrary trees
with a declarative interface. A sample session might look like::

    tree = self.make_branch_and_tree('path')
    builder = TreeBuilder()
    builder.start_tree(tree)
    builder.build(['foo', 'bar/', 'bar/file'])
    tree.commit('commit the tree')
    builder.finish_tree()

Usually a test will create a tree using ``make_branch_and_memory_tree`` (a
method of ``TestCaseWithMemoryTransport``) or ``make_branch_and_tree`` (a
method of ``TestCaseWithTransport``).

Please see bzrlib.treebuilder for more details.

.. |--| unicode:: U+2014

..
   vim: ft=rst tw=74 ai