   cmd_object.run() method directly. This is a lot faster than
   subprocesses and generates the same logging output as running it in a
   subprocess (which invoking the method directly does not).

3. Only test the one command in a single test script. Use the bzrlib
   library when setting up tests and when evaluating the side-effects of
   the command. We do this so that the library api has continual pressure
   on it to be as functional as the command line for programmers to use
   (see the sketch below).
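
For instance, a blackbox test written along these lines might look like the
following (a minimal sketch: the command, file names and test class are
illustrative, not taken from the real test suite)::

    from bzrlib.tests import TestCaseWithTransport

    class TestAdd(TestCaseWithTransport):

        def test_add_reports_no_errors(self):
            # Set up the fixture through the library API, not by running
            # other commands.
            tree = self.make_branch_and_tree('.')
            self.build_tree(['hello.txt'])
            # Invoke the command under test via run_bzr() rather than
            # cmd_object.run().
            out, err = self.run_bzr('add hello.txt')
            self.assertEqual('', err)
            # Evaluate the side-effects through the library API.
            self.assertNotEqual(None, tree.path2id('hello.txt'))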

Per-implementation tests are tests that are defined once and then run
against multiple implementations of an interface. For example,
``per_transport.py`` defines tests that all Transport implementations
(local filesystem, HTTP, and so on) must pass. They are found in
``bzrlib/tests/per_*/*.py``, and ``bzrlib/tests/per_*.py``.

These are really a sub-category of unit tests, but an important one.
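
The pattern is easiest to see in a self-contained sketch (the store classes
and the test below are invented for illustration, and the use of
``bzrlib.tests.multiply_tests`` to apply the scenarios is one way to do it,
not necessarily how the real per-implementation modules are wired up)::

    from bzrlib import tests

    class ListStore(object):
        """Toy implementation used only for this sketch."""

        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

    class SetStore(object):
        """A second toy implementation of the same interface."""

        def __init__(self):
            self.items = set()

        def add(self, item):
            self.items.add(item)

    class TestStoreContract(tests.TestCase):

        # Filled in for each scenario by load_tests() below.
        store_class = None

        def test_add_stores_the_item(self):
            store = self.store_class()
            store.add('x')
            self.assertTrue('x' in store.items)

    def load_tests(standard_tests, module, loader):
        # Run every test defined above once per implementation.
        scenarios = [
            ('list', {'store_class': ListStore}),
            ('set', {'store_class': SetStore}),
        ]
        result = loader.suiteClass()
        tests.multiply_tests(standard_tests, scenarios, result)
        return result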

Along the same lines are tests for extension modules. We generally have
both a pure-python and a compiled implementation for each module. As such,
we want to run the same tests against both implementations. These can
generally be found in ``bzrlib/tests/*__*.py`` since extension modules are
usually prefixed with an underscore. Since there are only two
implementations, we have a helper function
``bzrlib.tests.permute_for_extension``, which can simplify the
``load_tests`` implementation.
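
A ``load_tests`` for such a module might, roughly, do the following by hand
(the ``_example_py``/``_example_pyx`` module names are hypothetical, and real
test files would normally go through the ``permute_for_extension`` helper
rather than spelling the scenarios out)::

    from bzrlib import tests

    def load_tests(standard_tests, module, loader):
        # The pure-python implementation is always present.
        from bzrlib import _example_py               # hypothetical module
        scenarios = [('python', {'module': _example_py})]
        try:
            # The compiled implementation may not have been built.
            from bzrlib import _example_pyx          # hypothetical extension
            scenarios.append(('C', {'module': _example_pyx}))
        except ImportError:
            pass
        result = loader.suiteClass()
        tests.multiply_tests(standard_tests, scenarios, result)
        return result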

The execution stops as soon as an expected output or an expected error is not
produced.

When no output is specified, any output from the command is accepted
and execution continues.

If an error occurs and no expected error is specified, the execution stops.
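
For instance, in the following fragment (the commands are arbitrary) the
first command's output is checked line by line, while the second names no
expected output, so whatever it prints is accepted::

    $ echo hello
    hello
    $ bzr status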

The test exists but is known to fail, for example this might be
appropriate to raise if you've committed a test for a bug but not
the fix for it, or if something works on Unix but not on Windows.

Raising this allows you to distinguish these failures from the
ones that are not expected to fail. If the test would fail
because of something we don't expect or intend to fix,
TestSkipped is more appropriate.

KnownFailure should be used with care as we don't want a
proliferation of quietly broken tests.
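
For instance, a test for a bug that has not been fixed yet might convert the
expected failure like this (a minimal sketch; ``roundtrip`` is a hypothetical
function standing in for the broken code path)::

    from bzrlib.tests import KnownFailure

    def test_unicode_roundtrip(self):
        try:
            self.assertEqual(u'caf\xe9', roundtrip(u'caf\xe9'))
        except AssertionError:
            raise KnownFailure('unicode round-tripping is not fixed yet')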

ModuleAvailableFeature
    A helper for handling running tests based on whether a python
    module is available. This can handle 3rd-party dependencies (is
    ``paramiko`` available?) as well as stdlib (``termios``) or
    extension modules (``bzrlib._groupcompress_pyx``). You create a
    new feature instance with::

        MyModuleFeature = ModuleAvailableFeature('bzrlib.something')

        def test_something(self):
            self.requireFeature(MyModuleFeature)
            something = MyModuleFeature.module

We plan to support three modes for running the test suite to control the
interpretation of these results. Strict mode is for use in situations
like merges to the mainline and releases where we want to make sure that
everything that can be tested passes.

UnavailableFeature      fail    pass    pass
KnownFailure            fail    pass    pass
======================= ======= ======= ========

Test feature dependencies
-------------------------