We make selective use of doctests__. In general they should provide
*examples* within the API documentation which can incidentally be tested. We
don't try to test every important case using doctests |--| regular Python
tests are generally a better solution. That is, we just use doctests to make
our documentation testable, rather than as a way to make tests. Be aware that
doctests are not as well isolated as the unit tests; if you need more
isolation, you likely want to write unit tests anyway, if only to get better
control of the test environment.

Most of these are in ``bzrlib/doc/api``. More additions are welcome.

__ http://docs.python.org/lib/module-doctest.html
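
For instance, a docstring or a file under ``bzrlib/doc/api`` might embed a
small example like the following (purely illustrative)::

    >>> from bzrlib.osutils import split_lines
    >>> split_lines('one\ntwo\n')
    ['one\n', 'two\n']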

There is an `assertDoctestExampleMatches` method in
`bzrlib.tests.TestCase` that allows you to match against doctest-style
string templates (including ``...`` to skip sections) from regular Python
tests.
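
A sketch of how this might be used, assuming the doctest-style template is
passed first and the actual text second::

    def test_version_output(self):
        out, err = self.run_bzr('version')
        self.assertDoctestExampleMatches('Bazaar (bzr) ...\n...', out)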

Shell-like tests
----------------

``bzrlib/tests/script.py`` allows users to write tests in a syntax very close
to a shell session, using a restricted and limited set of commands that
should be enough to mimic most of the behaviours.

A script is a set of commands, each command is composed of:

* one mandatory command line,
* one optional set of input lines to feed the command,
* one optional set of output expected lines,
* one optional set of error expected lines.

The execution stops as soon as an expected output or an expected error is not
found.

If output occurs and no output is expected, the execution stops and the
test fails. If unexpected output occurs on the standard error, then
execution stops and the test fails.

If an error occurs and no expected error is specified, the execution stops.
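
For instance, a small script might look like this (a sketch assuming the
runner's built-in ``echo``, ``mkdir``, ``cd`` and ``cat`` commands; lines
starting with ``$`` are commands, and unprefixed lines are the expected
output)::

    $ echo hello
    hello
    $ mkdir work
    $ cd work
    $ echo content > file
    $ cat file
    content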

You can run files containing shell-like scripts with::

    $ bzr test-script <script>

where ``<script>`` is the path to the file containing the shell-like script.

The actual use of ScriptRunner within a TestCase looks something like
this::

    from bzrlib.tests import script

    def test_unshelve_keep(self):
        # some setup here
        script.run_script(self, '''
            $ bzr shelve -q --all -m Foo
            $ bzr shelve --list
            1: Foo
            $ bzr unshelve -q --keep
            $ bzr shelve --list
            1: Foo
            ''')

To avoid having to specify "-q" for all commands whose output is
irrelevant, the run_script() method may be passed the keyword argument
``null_output_matches_anything=True``. For example::

    def test_ignoring_null_output(self):
        script.run_script(self, """
            $ bzr init
            $ bzr ci -m 'first revision' --unchanged
            """, null_output_matches_anything=True)

Import tariff tests
-------------------

`bzrlib.tests.test_import_tariff` has some tests that measure how many
Python modules are loaded to run some representative commands.

This is done by running the command in a subprocess with
``PYTHON_VERBOSE=1``. Starting a whole Python interpreter is pretty slow,
so we don't want exhaustive testing here, but just enough to guard against
distinct fixed problems.

Assertions about precisely what is loaded tend to be brittle so we instead
make assertions that particular things aren't loaded.
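
The following is a rough sketch of the idea rather than the actual
``bzrlib.tests.test_import_tariff`` code: it runs a command in a subprocess
with Python's verbose import tracing enabled, collects the modules reported
on stderr, and asserts that a particular module stayed out of the startup
path (the ``bzr rocks`` invocation and the module name are only
illustrative)::

    import os
    import subprocess

    def get_loaded_modules(argv):
        # Any non-empty value enables Python's own import tracing.
        env = dict(os.environ, PYTHONVERBOSE='1')
        proc = subprocess.Popen(argv, env=env,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        _, err = proc.communicate()
        # Each import is reported on stderr as 'import <name> ...'.
        return set(line.split()[1].strip("'")
                   for line in err.decode('utf-8', 'replace').splitlines()
                   if line.startswith('import '))

    def test_rocks_does_not_load_merge(self):
        self.assertFalse('bzrlib.merge' in get_loaded_modules(['bzr', 'rocks']))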

Consider whether a test should be added for that particular implementation,
or for all implementations of the interface.

The multiplication of tests for different implementations is normally
accomplished by overriding the ``load_tests`` function used to load tests
from a module. This function typically loads all the tests, then applies a
TestProviderAdapter to them, which generates a longer suite containing all
the test variations (see the sketch below). Typically
``multiply_tests_from_modules`` should be called from the test module's
``load_tests`` function.

See also `Per-implementation tests`_ (above).
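
The shape of such a hook, expressed with the plain ``unittest``
``load_tests`` protocol rather than the real ``TestProviderAdapter``, is
roughly the following (the variation names and parameters are invented)::

    import copy
    import unittest

    def _iter_tests(suite):
        # Flatten nested TestSuites into individual test cases.
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                for test in _iter_tests(item):
                    yield test
            else:
                yield item

    def load_tests(loader, standard_tests, pattern):
        variations = [('fmt-a', dict(format_name='a')),
                      ('fmt-b', dict(format_name='b'))]
        suite = unittest.TestSuite()
        for test in _iter_tests(standard_tests):
            for name, params in variations:
                # A real adapter also adjusts the test id (using ``name``)
                # so the variants can be told apart in the test output.
                variant = copy.copy(test)
                variant.__dict__.update(params)
                suite.addTest(variant)
        return suite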

Test scenarios and variations
-----------------------------

Some utilities are provided for generating variations of tests. This can
be used for per-implementation tests, or other cases where the same test
needs to run in several different scenarios. The test class defines
placeholders for the values to which the test should be applied. The test
suite should then also provide a list of scenarios in which to run the
tests.

A single *scenario* is defined by a `(name, parameter_dict)` tuple. The
short string name is combined with the name of the test method to form the
test instance name. The parameter dict is merged into the instance's
``__dict__``.

For example::

    load_tests = load_tests_apply_scenarios

    class TestCheckout(TestCase):

        scenarios = multiply_scenarios(
            VaryByRepositoryFormat(),
            )

The `load_tests` declaration or definition should be near the top of the
file so its effect can be seen.
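
As a concrete illustration of the `(name, parameter_dict)` form, a test
class can also list its scenarios by hand (the class, scenario names and
parameter below are made up)::

    class TestChunking(TestCase):

        scenarios = [
            ('small', dict(chunk_size=1)),
            ('large', dict(chunk_size=4096)),
            ]

        def test_chunk_size_is_positive(self):
            # Runs once per scenario, e.g. as
            # test_chunk_size_is_positive(small), with self.chunk_size set
            # from the matching parameter dict.
            self.assertTrue(self.chunk_size > 0)

This relies on the module-level ``load_tests = load_tests_apply_scenarios``
declaration shown above.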

Please see bzrlib.treebuilder for more details.

PreviewTrees
~~~~~~~~~~~~

PreviewTrees are based on TreeTransforms. This means they can represent
virtually any state that a WorkingTree can have, including unversioned files.
They can be used to test the output of anything that produces TreeTransforms,
such as merge algorithms and revert. They can also be used to test anything
that takes arbitrary Trees as its input.

For example::

    # Get an empty tree to base the transform on.
    b = self.make_branch('.')
    empty_tree = b.repository.revision_tree(_mod_revision.NULL_REVISION)
    tt = TransformPreview(empty_tree)
    self.addCleanup(tt.finalize)
    # Empty trees don't have a root, so add it first.
    root = tt.new_directory('', ROOT_PARENT, 'tree-root')
    # Set the contents of a file.
    tt.new_file('new-file', root, 'contents', 'file-id')
    preview = tt.get_preview_tree()
    # Test the contents.
    self.assertEqual('contents', preview.get_file_text('file-id'))

PreviewTrees can stack, with each tree falling back to the previous::

    tt2 = TransformPreview(preview)
    self.addCleanup(tt2.finalize)
    tt2.new_file('new-file2', tt2.root, 'contents2', 'file-id2')
    preview2 = tt2.get_preview_tree()
    self.assertEqual('contents', preview2.get_file_text('file-id'))
    self.assertEqual('contents2', preview2.get_file_text('file-id2'))

Temporarily changing state
~~~~~~~~~~~~~~~~~~~~~~~~~~

If your test needs to temporarily mutate some global state, and you need it
restored at the end, you can use::

    self.overrideAttr(osutils, '_cached_user_encoding', 'latin-1')

Temporarily changing environment variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If your test needs to temporarily change some environment variable value
(which generally means you want it restored at the end), you can use::

    self.overrideEnv('BZR_ENV_VAR', 'new_value')

If you want to remove a variable from the environment, you should use the
special ``None`` value::

    self.overrideEnv('PATH', None)

If you add a new feature which depends on a new environment variable, make
sure it behaves properly when this variable is not defined (if applicable).
If you need to enforce a specific default value, check
``TestCase._cleanEnvironment`` in ``bzrlib.tests.__init__.py``, which defines
a proper set of values for all tests.
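
For instance, a test for such a feature might look like this
(``BZR_FROBNICATE`` and ``get_frobnicate_level`` are placeholders, not real
bzrlib names)::

    def test_default_when_unset(self):
        # Make sure the variable is absent, then check the documented default.
        self.overrideEnv('BZR_FROBNICATE', None)
        self.assertEqual('default', get_frobnicate_level())

    def test_variable_respected(self):
        self.overrideEnv('BZR_FROBNICATE', 'high')
        self.assertEqual('high', get_frobnicate_level())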