test was not run, rather than just returning which makes it look as if it
was run and passed.

Several different cases are distinguished:

TestSkipped
    Generic skip; the only type that was present up to bzr 0.18.

TestNotApplicable
    The test doesn't apply to the parameters with which it was run.
    This is typically used when the test is being applied to all
    implementations of an interface, but some aspects of the interface
    are optional and not present in particular concrete
    implementations. (Some tests that should raise this currently
    either silently return or raise TestSkipped.) Another option is
    to use more precise parameterization to avoid generating the test
    at all.

TestPlatformLimit
    **(Not implemented yet)**
    The test can't be run because of an inherent limitation of the
    environment, such as not having symlinks or not supporting
    Unicode.

TestDependencyMissing
    The test can't be run because a dependency (typically a Python
    library) is not available in the test environment. These
    are in general things that the person running the test could fix
    by installing the library. It's OK if some of these occur when
    an end user runs the tests or if we're specifically testing in a
    limited environment, but a full test should never see them.

KnownFailure
    The test exists but is known to fail, for example because the
    code to fix it hasn't been written yet. Raising this allows
    you to distinguish these failures from the ones that are not
    expected to fail. This could be conditionally raised if something
    is broken on some platforms but not on others.
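
For example, a test that needs symlinks might raise the generic skip
outcome today. This is a minimal sketch; the test case below is
illustrative, not an existing member of the suite::

    import os

    from bzrlib import osutils, tests

    class TestSymlinkSupport(tests.TestCaseInTempDir):

        def test_make_symlink(self):
            # Raise TestSkipped rather than silently returning, so the
            # runner records that the test body was not exercised.
            # TestPlatformLimit would be the more precise outcome here
            # once it is implemented.
            if not osutils.has_symlinks():
                raise tests.TestSkipped('platform does not support symlinks')
            os.symlink('target', 'link')
            self.assertEqual('target', os.readlink('link'))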

We plan to support three modes for running the test suite to control the
interpretation of these results. Strict mode is for use in situations
like merges to the mainline and releases where we want to make sure that
everything that can be tested has been tested. Lax mode is for use by
developers who want to temporarily tolerate some known failures. The
default behaviour is obtained by ``bzr selftest`` with no options, and
also (if possible) by running under another unittest harness.

======================= ======= ======= ========
result                  strict  default lax
======================= ======= ======= ========
TestSkipped             pass    pass    pass
TestNotApplicable       pass    pass    pass
TestPlatformLimit       pass    pass    pass
TestDependencyMissing   fail    pass    pass
KnownFailure            fail    pass    pass
======================= ======= ======= ========


Test feature dependencies
-------------------------

Rather than manually checking the environment in each test, a test class
can declare its dependence on some test features. The feature objects are
checked only once for each run of the whole test suite.

(For historical reasons, as of May 2007 many cases that should depend on
features currently raise TestSkipped.)

For example::

    class TestStrace(TestCaseWithTransport):

        _test_needs_features = [StraceFeature]

This means all tests in this class need the feature. The feature itself
should provide a ``_probe`` method which is called once to determine if
it's available.

These should generally be equivalent to either TestDependencyMissing or
sometimes TestPlatformLimit.
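
A feature itself can be written roughly as follows. This is a minimal
sketch: it relies on the ``Feature`` base class and its ``_probe`` and
``feature_name`` hooks in ``bzrlib.tests``, and uses an import check for
the paramiko library purely as an illustration::

    from bzrlib import tests

    class _ParamikoFeature(tests.Feature):
        """Is the paramiko SSH library available for import?"""

        def _probe(self):
            # _probe runs at most once per test suite run; the result
            # is cached by Feature.available().
            try:
                import paramiko
                return True
            except ImportError:
                return False

        def feature_name(self):
            return 'paramiko'

    ParamikoFeature = _ParamikoFeature()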

Interface implementation testing and test scenarios
---------------------------------------------------

There are several cases in Bazaar of multiple implementations of a common
conceptual interface. ("Conceptual" because
it's not necessary for all the implementations to share a base class,
though they often do.) Examples include transports and the working tree,
branch and repository classes.

In these cases we want to make sure that every implementation correctly
fulfils the interface requirements. For example, every Transport should
support the ``has()`` and ``get()`` and ``clone()`` methods. We have a
sub-suite of tests in ``test_transport_implementations``. (Most
per-implementation tests are in submodules of ``bzrlib.tests``, but not
the transport tests at the moment.)

These tests are repeated for each registered Transport, by generating a
new TestCase instance for the cross product of test methods and transport
implementations. As each test runs, it has ``transport_class`` and
``transport_server`` set to the class it should test. Most tests don't
access these directly, but rather use ``self.get_transport`` which returns
a transport of the appropriate type.
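
A test in that sub-suite might look something like the following sketch;
the base class name and the method body are illustrative rather than
copied from the real suite::

    class TransportTests(TestTransportImplementation):

        def test_has(self):
            # get_transport() returns an instance of whichever transport
            # implementation this variation of the test was generated for.
            t = self.get_transport()
            self.build_tree(['a'], transport=t)
            self.assertTrue(t.has('a'))
            self.assertFalse(t.has('nonexistent'))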

The goal is to run, for each implementation, only the tests that relate
to that particular interface. Sometimes we discover a bug elsewhere that
happens with only one particular transport. Once it's isolated, we can
consider whether a test should be added for that particular
implementation, or for all implementations of the interface.

The multiplication of tests for different implementations is normally
accomplished by overriding the ``test_suite`` function used to load
tests from a module. This function typically loads all the tests,
then applies a TestProviderAdapter to them, which generates a longer
suite containing all the test variations.
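
As a rough sketch, such a ``test_suite`` function might look like the
following. The adapter class name and helper functions here are
illustrative, not the exact bzrlib API::

    import unittest

    from bzrlib import tests

    def test_suite():
        # Load the tests from the module in the ordinary way.
        loader = unittest.TestLoader()
        standard_tests = loader.loadTestsFromName(
            'bzrlib.tests.test_transport_implementations')
        # The adapter's adapt() turns one test into a suite holding one
        # variation per registered transport implementation.
        adapter = TransportTestProviderAdapter()
        suite = unittest.TestSuite()
        for test in tests.iter_suite_tests(standard_tests):
            suite.addTests(adapter.adapt(test))
        return suite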

Some utilities are provided for generating variations of tests. This can
be used for per-implementation tests, or other cases where the same test
code needs to run several times on different scenarios.

The general approach is to define a class that provides test methods,
which depend on attributes of the test object being pre-set with the
values to which the test should be applied. The test suite should then
also provide a list of scenarios in which to run the tests.
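
For instance, a test class written in this style might read as follows
(a minimal sketch; the class and attribute names are illustrative)::

    from bzrlib import tests

    class TestDoubling(tests.TestCase):
        # 'value' and 'expected' are pre-set on each generated test by
        # the scenario framework before the test method runs.

        def test_double(self):
            self.assertEqual(self.expected, self.value * 2)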

Typically ``multiply_tests_from_modules`` should be called from the test
module's ``test_suite`` function.
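
Continuing the sketch above, and assuming ``multiply_tests_from_modules``
takes a list of module names plus an iterable of scenarios (the module
name and scenario contents here are hypothetical)::

    from bzrlib import tests

    def test_suite():
        # Each scenario is a (name, parameters) pair: the name is added
        # to the test id, and the parameters are set as attributes on
        # each generated test case.
        scenarios = [
            ('small', {'value': 1, 'expected': 2}),
            ('large', {'value': 100, 'expected': 200}),
            ]
        return tests.multiply_tests_from_modules(
            ['bzrlib.tests.test_doubling'], scenarios)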

Essential Domain Classes
########################