<http://babune.ladeuil.net:24842/>. (Babune = Bazaar Buildbot Network.)
Running tests in parallel
-------------------------

Bazaar can use subunit to spawn multiple test processes. There is
slightly more chance you will hit ordering or timing-dependent bugs but
it's much faster::

    $ ./bzr selftest --parallel=fork

Note that you will need the Subunit library
<https://launchpad.net/subunit/> to use this, which is packaged as
``python-subunit`` on Ubuntu.
Running tests from a ramdisk
----------------------------

The tests create and delete a lot of temporary files. In some cases you
can make the test suite run much faster by running it on a ramdisk. For
example::

    $ sudo mount -t tmpfs none /ram
    $ TMPDIR=/ram ./bzr selftest ...

You could also change ``/tmp`` in ``/etc/fstab`` to have type ``tmpfs``,
if you don't mind possibly losing other files in there when the machine
restarts. Add this line (if there is none for ``/tmp`` already)::

    none           /tmp            tmpfs   defaults        0       0

With a 6-core machine and ``--parallel=fork``, using a tmpfs doubles the
test execution speed.
``bzrlib/tests/script.py`` allows users to write tests in a syntax very
close to a shell session, using a restricted set of commands that should
be enough to mimic most of the behaviours.
A script is a set of commands, each command is composed of:

* one mandatory command line,
* one optional set of input lines to feed the command,
* one optional set of output expected lines,
* one optional set of error expected lines.

The execution stops as soon as an expected output or an expected error
is not fulfilled.

If output occurs and no output is expected, the execution stops and the
test fails. If unexpected output occurs on the standard error, then
execution stops and the test fails.

If an error occurs and no expected error is specified, the execution
stops.
You can run files containing shell-like scripts with::

    $ bzr test-script <script>

where ``<script>`` is the path to the file containing the shell-like
script.
The actual use of ScriptRunner within a TestCase looks something like
this::

    def test_unshelve_keep(self):
        # some setup here
        script.run_script(self, '''
            $ bzr shelve -q --all -m Foo
            $ bzr shelve --list
            $ bzr unshelve -q --keep
            $ bzr shelve --list
            ''')
You can also test commands that read user interaction::

    def test_confirm_action(self):
        """You can write tests that demonstrate user confirmation"""
        commands.builtin_command_registry.register(cmd_test_confirm)
        self.addCleanup(commands.builtin_command_registry.remove, 'test-confirm')
        self.run_script("""
            $ bzr test-confirm
            2>Really do it? [y/n]:
            <y
            """)
To avoid having to specify ``-q`` for all commands whose output is
irrelevant, the ``run_script()`` method may be passed the keyword argument
``null_output_matches_anything=True``. For example::

    def test_ignoring_null_output(self):
        self.run_script("""
            $ bzr ci -m 'first revision' --unchanged
            """, null_output_matches_anything=True)
Import tariff tests
-------------------
_test_needs_features = [features.apport]
Testing deprecated code
-----------------------

When code is deprecated, it is still supported for some length of time,
usually until the next major version. The ``applyDeprecated`` helper
wraps calls to deprecated code to verify that it is correctly issuing the
deprecation warning, and also prevents the warnings from being printed
during the test run.

Typically patches that apply the ``@deprecated_function`` decorator should
update the accompanying tests to use the ``applyDeprecated`` wrapper.

``applyDeprecated`` is defined in ``bzrlib.tests.TestCase``. See the API
docs for more details.
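The check that ``applyDeprecated`` performs can be sketched with only the
standard ``warnings`` module. In this sketch, ``old_api`` and
``apply_deprecated`` are hypothetical stand-ins for a deprecated callable
and for bzrlib's real helper, not bzrlib code:

```python
import warnings

def old_api(x):
    # Hypothetical deprecated function, standing in for real deprecated code.
    warnings.warn("old_api is deprecated; use new_api instead",
                  DeprecationWarning, stacklevel=2)
    return x * 2

def apply_deprecated(func, *args, **kwargs):
    # Call func, check that it issues a DeprecationWarning, and swallow
    # the warning so nothing is printed during the run -- the same idea
    # as bzrlib's applyDeprecated.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = func(*args, **kwargs)
    if not any(issubclass(w.category, DeprecationWarning) for w in caught):
        raise AssertionError("%r did not issue a DeprecationWarning" % func)
    return result

print(apply_deprecated(old_api, 21))  # the wrapped call still returns 42
```

If the wrapped callable fails to warn, the helper raises, so a silent
removal of the deprecation warning is caught by the test.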
Testing exceptions and errors
-----------------------------
whether a test should be added for that particular implementation,
or for all implementations of the interface.

The multiplication of tests for different implementations is normally
accomplished by overriding the ``load_tests`` function used to load tests
from a module. This function typically loads all the tests, then applies
a ``TestProviderAdapter`` to them, which generates a longer suite
containing all the test variations.

See also `Per-implementation tests`_ (above).
Test scenarios and variations
-----------------------------

Some utilities are provided for generating variations of tests. This can
be used for per-implementation tests, or other cases where the same test
should be applied to different values. The test suite should then also
provide a list of scenarios in which to run the tests.

Typically ``multiply_tests_from_modules`` should be called from the test
module's ``load_tests`` function.
A single *scenario* is defined by a `(name, parameter_dict)` tuple. The
short string name is combined with the name of the test method to form the
test instance name. The parameter dict is merged into the instance's
attributes. For example::

    load_tests = load_tests_apply_scenarios

    class TestCheckout(TestCase):

        scenarios = multiply_scenarios(
            VaryByRepositoryFormat(),
            )

The `load_tests` declaration or definition should be near the top of the
file so its effect can be seen.
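How scenario names and parameter dicts combine can be sketched in plain
Python. This ``multiply_scenarios`` and the format names below are
illustrative, modelled on the bzrlib helper of the same name rather than
copied from it:

```python
import itertools

def multiply_scenarios(*scenario_lists):
    # Cross product of several scenario lists: names are joined with ","
    # and parameter dicts are merged, so each resulting scenario fully
    # parameterises one test variation.
    result = []
    for combo in itertools.product(*scenario_lists):
        name = ",".join(n for n, _ in combo)
        params = {}
        for _, d in combo:
            params.update(d)
        result.append((name, params))
    return result

repo = [("knit", {"repo_format": "knit"}), ("2a", {"repo_format": "2a"})]
tree = [("wt4", {"tree_format": "wt4"})]
for name, params in multiply_scenarios(repo, tree):
    print(name, params)
```

The combined name (e.g. ``knit,wt4``) is what gets appended to the test
method name to distinguish the variations in test output.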
Please see bzrlib.treebuilder for more details.
Temporarily changing state
~~~~~~~~~~~~~~~~~~~~~~~~~~

If your test needs to temporarily mutate some global state, and you need
it restored at the end, you can say for example::

    self.overrideAttr(osutils, '_cached_user_encoding', 'latin-1')
Our base ``TestCase`` class provides an ``addCleanup`` method, which
should be used instead of ``tearDown``. All the cleanups are run when the
test finishes, regardless of whether it passes or fails. If one cleanup
fails, later cleanups are still run.

(The same facility is available outside of tests through
``bzrlib.cleanup``.)
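The behaviour is the same as the standard ``unittest.TestCase.addCleanup``,
so it can be sketched without bzrlib; the test below is purely
illustrative:

```python
import os
import tempfile
import unittest

class TestWithCleanup(unittest.TestCase):

    def test_writes_temp_file(self):
        fd, path = tempfile.mkstemp()
        # Cleanups run last-in-first-out after the test, whether it
        # passes or fails: close the descriptor first, then remove the file.
        self.addCleanup(os.remove, path)
        self.addCleanup(os.close, fd)
        os.write(fd, b"data")
        self.assertTrue(os.path.exists(path))
        TestWithCleanup.path = path  # stash so we can check removal afterwards

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TestWithCleanup).run(result)
print(result.wasSuccessful(), os.path.exists(TestWithCleanup.path))  # True False
```

Because cleanups are registered next to the code that creates the state,
there is no risk of a ``tearDown`` running against setup that never
happened.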
Generally we prefer automated testing but sometimes a manual test is the
right thing, especially for performance tests that want to measure elapsed
time rather than effort.
Simulating slow networks
------------------------

To get realistically slow network performance for manually measuring
performance, we can simulate 500ms latency (thus 1000ms round trips)::

    $ sudo tc qdisc add dev lo root netem delay 500ms

Normal system behaviour is restored with ::

    $ sudo tc qdisc del dev lo root
A more precise version that only filters traffic to port 4155 is::

    tc qdisc add dev lo root handle 1: prio
    tc qdisc add dev lo parent 1:3 handle 30: netem delay 500ms
    tc qdisc add dev lo parent 30:1 handle 40: prio
    tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 4155 0xffff flowid 1:3 handle 800::800
    tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4155 0xffff flowid 1:3 handle 800::801

and to remove this::

    tc filter del dev lo protocol ip parent 1: pref 3 u32
    tc qdisc del dev lo root handle 1:

You can use similar code to add additional delay to a real network
interface, perhaps only when talking to a particular server or pointing at
a VM. For more information see <http://lartc.org/>.
.. |--| unicode:: U+2014