--load-list.  The latter is rarely used, but allows running, for example,
a subset of a list of failing tests.

To test only the bzr core, ignoring any plugins you may have installed,
use::

  ./bzr --no-plugins selftest
Disabling crash reporting
-------------------------

By default Bazaar uses apport_ to report program crashes.  In developing
Bazaar it's normal and expected to have it crash from time to time, at
least because a test failed if for no other reason.

Therefore you should probably add ``debug_flags = no_apport`` to your
``bazaar.conf`` file (in ``~/.bazaar/`` on Unix), so that failures just
print a traceback rather than writing a crash file.

.. _apport: https://launchpad.net/apport/
Test suite debug flags
----------------------

The ``-E=allow_debug`` selftest option keeps the global debug flags set
for the duration of the test run.  This can provide useful logging to
help debug test failures when used with e.g. ``bzr -Dhpss selftest
-E=allow_debug``.

Note that this will probably cause some tests to fail, because they
don't expect to run with any debug flags on.
Using subunit
-------------

Bazaar can optionally produce output in the machine-readable subunit_
format, so that test output can be post-processed by various tools.  To
generate a subunit test stream::

 $ ./bzr selftest --subunit

Processing such a stream can be done using a variety of tools including:

* The builtin ``subunit2pyunit``, ``subunit-filter``, ``subunit-ls``,
  ``subunit2junitxml`` from the subunit project.

* tribunal_, a GUI for showing test results.

* testrepository_, a tool for gathering and managing test runs.

.. _subunit: https://launchpad.net/subunit/
.. _tribunal: https://launchpad.net/tribunal/
Bazaar ships with a config file for testrepository_.  This can be very
useful for keeping track of failing tests and doing general workflow
support.  To run tests using testrepository::

 $ testr run

To run only failing tests::

 $ testr run --failing

To run only some tests, without plugins::

 $ testr run test_selftest -- --no-plugins

See the testrepository documentation for more details.

.. _testrepository: https://launchpad.net/testrepository
Babune continuous integration
-----------------------------

We have a Hudson continuous-integration system that automatically runs
tests across various platforms.  In the future we plan to add more
combinations including testing plugins.  See
<http://babune.ladeuil.net:24842/>.  (Babune = Bazaar Buildbot Network.)
Running tests in parallel
-------------------------

Bazaar can use subunit to spawn multiple test processes.  There is
slightly more chance you will hit ordering or timing-dependent bugs, but
the test run completes much faster::

 $ ./bzr selftest --parallel=fork

Note that you will need the Subunit library
<https://launchpad.net/subunit/> to use this; it is packaged as
``python-subunit`` on Ubuntu.
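
The mechanism can be illustrated outside of Bazaar.  Below is a minimal
sketch, not Bazaar's actual implementation, of the idea behind
``--parallel=fork``: partition the tests, fork one child process per
partition, and collect each child's pass/fail counts through a pipe.
All names here are illustrative.

```python
import os
import pickle

def run_partition(tests):
    """Run a list of zero-argument callables; return (passed, failed)."""
    passed = failed = 0
    for t in tests:
        try:
            t()
            passed += 1
        except AssertionError:
            failed += 1
    return passed, failed

def run_forked(partitions):
    """Fork one child per partition, then collect each child's results."""
    children = []
    for tests in partitions:
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:  # child: run the tests, report via the pipe, exit
            os.close(r)
            with os.fdopen(w, 'wb') as f:
                pickle.dump(run_partition(tests), f)
            os._exit(0)
        os.close(w)
        children.append((pid, r))
    results = []
    for pid, r in children:   # parent: read each child's result
        with os.fdopen(r, 'rb') as f:
            results.append(pickle.load(f))
        os.waitpid(pid, 0)
    return results
```

Because every child is forked before any result is read, the partitions
genuinely run concurrently; the real implementation additionally streams
subunit output rather than a single pickled summary.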
Running tests from a ramdisk
----------------------------

The tests create and delete a lot of temporary files.  In some cases you
can make the test suite run much faster by running it on a ramdisk.  For
example::

 $ sudo mkdir /ram
 $ sudo mount -t tmpfs none /ram
 $ TMPDIR=/ram ./bzr selftest ...

You could also change ``/tmp`` in ``/etc/fstab`` to have type ``tmpfs``,
if you don't mind possibly losing other files in there when the machine
restarts.  Add this line (if there is none for ``/tmp`` already)::

 none           /tmp            tmpfs   defaults        0       0

With a 6-core machine and ``--parallel=fork``, using a tmpfs doubles the
test execution speed.
Normally you should add or update a test for all bug fixes or new
features.

Where should I put a new test?
------------------------------
Shell-like tests
----------------

You can run files containing shell-like scripts with::

 $ bzr test-script <script>

where ``<script>`` is the path to the file containing the shell-like script.
The actual use of ScriptRunner within a TestCase looks something like
this::

    from bzrlib.tests import script

    def test_unshelve_keep(self):
        script.run_script(self, '''
            $ bzr shelve -q --all -m Foo
            $ bzr unshelve -q --keep
            ''')
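
For readers curious how such a runner works, here is a toy sketch of the
parsing loop.  This is not ``bzrlib.tests.script`` itself, and it
supports neither input redirection nor stderr matching: lines starting
with ``$ `` are commands, and the lines that follow each one are its
expected stdout.

```python
import shlex
import subprocess

def run_script(script):
    """Run each "$ command" line; check it printed the expected lines."""
    lines = [l.strip() for l in script.strip().splitlines()]
    i = 0
    while i < len(lines):
        assert lines[i].startswith('$ '), 'expected a command line'
        cmd = shlex.split(lines[i][2:])
        i += 1
        expected = []
        while i < len(lines) and not lines[i].startswith('$ '):
            expected.append(lines[i])
            i += 1
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        assert out.splitlines() == expected, (out, expected)
```

A call such as ``run_script('$ echo hello\nhello')`` passes, while a
wrong expectation raises ``AssertionError`` carrying both outputs.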
You can also test commands that read user interaction::

    def test_confirm_action(self):
        """You can write tests that demonstrate user confirmation"""
        commands.builtin_command_registry.register(cmd_test_confirm)
        self.addCleanup(commands.builtin_command_registry.remove,
                        'test-confirm')
        self.run_script("""
            $ bzr test-confirm
            2>Really do it? [y/n]:
            <y
            """)
To avoid having to specify "-q" for all commands whose output is
irrelevant, the run_script() method may be passed the keyword argument
``null_output_matches_anything=True``.  For example::

    def test_ignoring_null_output(self):
        self.run_script("""
            $ bzr ci -m 'first revision' --unchanged
            """, null_output_matches_anything=True)
Import tariff tests
-------------------

`bzrlib.tests.test_import_tariff` has some tests that measure how many
Python modules are loaded to run some representative commands.

We want to avoid loading code unnecessarily, for reasons including:

* Python modules are interpreted when they're loaded, either to define
  classes or modules or perhaps to initialize some structures.

* With a cold cache we may incur blocking real disk IO for each module.

* Some modules depend on many others.

* Some optional modules such as `testtools` are meant to be soft
  dependencies and only needed for particular cases.  If they're loaded
  in other cases then bzr may break for people who don't have those
  modules.

`test_import_tariff` allows us to check that removal of imports doesn't
regress.

This is done by running the command in a subprocess with
``--profile-imports``.  Starting a whole Python interpreter is pretty
slow, so we don't want exhaustive testing here, but just enough to guard
against distinct fixed problems.

Assertions about precisely what is loaded tend to be brittle so we instead
make assertions that particular things aren't loaded.

Unless selftest is run with ``--no-plugins``, modules will be loaded in
the usual way and checks made on what they cause to be loaded.  This is
probably worth checking into, because many bzr users have at least some
plugins installed (and they're included in binary installers).

In theory, plugins might have a good reason to load almost anything:
someone might write a plugin that opens a network connection or pops up a
gui window every time you run 'bzr status'.  However, it's more likely
that the code to do these things is just being loaded accidentally.  We
might eventually need to have a way to make exceptions for particular
plugins.

Some things to check:

* non-GUI commands shouldn't load GUI libraries

* operations on bzr native formats shouldn't load foreign branch libraries

* network code shouldn't be loaded for purely local operations

* particularly expensive Python built-in modules shouldn't be loaded
  unless there is a good reason
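
The same kind of check can be sketched with just the standard library:
run a statement in a child interpreter, list ``sys.modules``, and assert
that particular modules were *not* dragged in.  The helper name and the
``json``/``xml`` example below are illustrative, not part of
``test_import_tariff``.

```python
import subprocess
import sys

def modules_loaded_by(statement):
    """Return the set of module names loaded after executing statement."""
    prog = statement + '\nimport sys\nprint("\\n".join(sys.modules))'
    out = subprocess.run([sys.executable, '-c', prog],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.split())

# e.g. importing ``json`` should not drag in the ``xml`` package:
loaded = modules_loaded_by('import json')
assert 'json' in loaded
assert 'xml.dom' not in loaded
```

As the document notes, asserting what is *absent* stays robust even as
the interpreter's own baseline imports change between Python versions.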
Testing locking behaviour
-------------------------

In order to test the locking behaviour of commands, it is possible to
install a hook that is called when a write lock is acquired, released or
broken.  (Read locks also exist, but they cannot be discovered in this
way.)

A hook can be installed by calling bzrlib.lock.Lock.hooks.install_named_hook.
The three valid hooks are: `lock_acquired`, `lock_released` and
`lock_broken`.  For example::

    locks_acquired = []
    locks_released = []

    lock.Lock.hooks.install_named_hook('lock_acquired',
        locks_acquired.append, None)
    lock.Lock.hooks.install_named_hook('lock_released',
        locks_released.append, None)

`locks_acquired` will now receive a LockResult instance for all locks
acquired since the time the hook is installed.

The last part of the `lock_url` allows you to identify the type of object
that is locked.

- BzrDir: `/branch-lock`
- Working tree: `/checkout/lock`
- Branch: `/branch/lock`
- Repository: `/repository/lock`

To test if a lock is a write lock on a working tree, one can do the
following::

    self.assertEndsWith(locks_acquired[0].lock_url, "/checkout/lock")

See bzrlib/tests/commands/test_revert.py for an example of how to use
this for testing.
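
The hook mechanism itself is easy to picture.  Here is a stripped-down
sketch of the named-hook pattern; the ``Hooks`` class and ``fire`` method
are illustrative stand-ins, not bzrlib's API, and the lock URL is a
made-up example.

```python
class Hooks(dict):
    """Callbacks registered under a name, called when the event fires."""

    def install_named_hook(self, name, callback, label):
        self.setdefault(name, []).append(callback)

    def fire(self, name, result):
        for callback in self.get(name, []):
            callback(result)

hooks = Hooks()
locks_acquired = []
hooks.install_named_hook('lock_acquired', locks_acquired.append, None)

# simulate acquiring a working tree lock
hooks.fire('lock_acquired', 'file:///tmp/wt/.bzr/checkout/lock')
```

Passing a bound method like ``locks_acquired.append`` as the callback is
exactly the trick the example above uses: the test simply accumulates
every event for later assertions.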
SymlinkFeature = _SymlinkFeature()

ModuleAvailableFeature
    A helper for handling running tests based on whether a python
    module is available.  This can handle 3rd-party dependencies (is
    ``paramiko`` available?) as well as stdlib (``termios``) or
    extension modules (``bzrlib._groupcompress_pyx``).  You create a
    new feature instance with::

        # in bzrlib/tests/features.py
        apport = tests.ModuleAvailableFeature('apport')

        # then in bzrlib/tests/test_apport.py
        class TestApportReporting(TestCaseInTempDir):

            _test_needs_features = [features.apport]
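
Stripped of bzrlib's Feature base class, the availability probe itself
can be sketched with ``importlib`` alone.  This is a simplified
stand-in, not the real implementation:

```python
import importlib

class ModuleAvailableFeature(object):
    """True if the named module can be imported; probed once, cached."""

    def __init__(self, module_name):
        self.module_name = module_name
        self._available = None

    def available(self):
        if self._available is None:
            try:
                importlib.import_module(self.module_name)
                self._available = True
            except ImportError:
                self._available = False
        return self._available
```

The real class additionally integrates with the test runner so that
tests listing the feature in ``_test_needs_features`` are skipped, not
errored, when the module is missing.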
Testing deprecated code
-----------------------

When code is deprecated, it is still supported for some length of time,
usually until the next major version.  The ``applyDeprecated`` helper
wraps calls to deprecated code to verify that it is correctly issuing the
deprecation warning, and also prevents the warnings from being printed
during the test run.

Typically patches that apply the ``@deprecated_function`` decorator should
update the accompanying tests to use the ``applyDeprecated`` wrapper.

``applyDeprecated`` is defined in ``bzrlib.tests.TestCase``.  See the API
docs for more details.
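
The mechanism behind such a wrapper can be sketched with the standard
``warnings`` module: call the deprecated function while recording
warnings, assert that the expected ``DeprecationWarning`` was issued, and
return the result.  ``apply_deprecated`` and ``old_api`` below are
illustrative, not bzrlib's ``applyDeprecated``.

```python
import warnings

def old_api(x):
    """A deprecated function that still works but warns."""
    warnings.warn('old_api is deprecated', DeprecationWarning)
    return x * 2

def apply_deprecated(func, *args, **kwargs):
    """Call func, assert it issued a DeprecationWarning, return result."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter('always')   # don't let filters hide it
        result = func(*args, **kwargs)
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)
    return result
```

Recording the warnings inside ``catch_warnings`` is also what keeps them
from being printed to the console during the test run.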
Testing exceptions and errors
-----------------------------
values to which the test should be applied.  The test suite should then
also provide a list of scenarios in which to run the tests.

Typically ``multiply_tests_from_modules`` should be called from the test
module's ``load_tests`` function.

A single *scenario* is defined by a `(name, parameter_dict)` tuple.  The
short string name is combined with the name of the test method to form the
test instance name.  The parameter dict is merged into the instance's
attributes.  For example::

    load_tests = load_tests_apply_scenarios

    class TestCheckout(TestCase):

        variations = multiply_scenarios(
            VaryByRepositoryFormat(),
            )

The `load_tests` declaration or definition should be near the top of the
file so its effect can be seen.
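
The multiplication machinery can be sketched briefly (illustrative
names, not the real ``bzrlib.tests`` helpers): ``multiply_scenarios``
takes a cross product of scenario lists, and applying a scenario clones
a test and merges the parameter dict into the clone's attributes.

```python
import copy

def multiply_scenarios(*scenario_lists):
    """Cross-product of scenario lists, joining names with a comma."""
    result = [('', {})]
    for scenarios in scenario_lists:
        result = [(('%s,%s' % (n1, n2)).strip(','), dict(p1, **p2))
                  for n1, p1 in result for n2, p2 in scenarios]
    return result

def apply_scenarios(test, scenarios):
    """Yield one clone of test per scenario, parameters merged in."""
    for name, params in scenarios:
        clone = copy.copy(test)
        clone.id = '%s(%s)' % (test.id, name)
        for key, value in params.items():
            setattr(clone, key, value)
        yield clone
```

So two repository formats crossed with one transport scenario yield two
test instances, each with a distinct name and its own parameter values.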
TestCase
    A base TestCase that extends the Python standard library's
    TestCase in several ways.  TestCase is built on
    ``testtools.TestCase``, which gives it support for more assertion
    methods (e.g. ``assertContainsRe``), ``addCleanup``, and other
    features (see its API docs for details).  It also has a ``setUp``
    that makes sure that global state like registered hooks and loggers
    won't interfere with your test.  All tests should use this base
    class (whether directly or via a subclass).  Note that we are trying
    not to add more assertions at this point, and instead to build up a
    library of ``bzrlib.tests.matchers``.

TestCaseWithMemoryTransport
    Extends TestCase and adds methods like ``get_transport``.
Please see bzrlib.treebuilder for more details.

Temporarily changing state
~~~~~~~~~~~~~~~~~~~~~~~~~~

If your test needs to temporarily mutate some global state, and you need
it restored at the end, you can say for example::

    self.overrideAttr(osutils, '_cached_user_encoding', 'latin-1')
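
What ``overrideAttr`` boils down to can be sketched with plain
``unittest``; the ``config`` class and the helper below are illustrative
stand-ins, not bzrlib code: save the old value, install a cleanup that
restores it, then set the new one.

```python
import io
import unittest

class config(object):
    """Stand-in for a module holding mutable global state."""
    user_encoding = 'utf-8'

class OverrideExample(unittest.TestCase):
    def override_attr(self, obj, name, new_value):
        # save the current value, arrange to restore it, then override
        old = getattr(obj, name)
        self.addCleanup(setattr, obj, name, old)
        setattr(obj, name, new_value)
        return old

    def test_encoding_is_overridden(self):
        self.override_attr(config, 'user_encoding', 'latin-1')
        self.assertEqual(config.user_encoding, 'latin-1')

result = unittest.TextTestRunner(stream=io.StringIO()).run(
    unittest.TestLoader().loadTestsFromTestCase(OverrideExample))
```

After the run, ``config.user_encoding`` is back to ``'utf-8'``: the
cleanup restored it regardless of how the test ended.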
Cleaning up
~~~~~~~~~~~

Our base ``TestCase`` class provides an ``addCleanup`` method, which
should be used instead of ``tearDown``.  All the cleanups are run when
the test finishes, regardless of whether it passes or fails.  If one
cleanup fails, later cleanups are still run.

(The same facility is available outside of tests through
``bzrlib.cleanup``.)
Generally we prefer automated testing but sometimes a manual test is the
right thing, especially for performance tests that want to measure elapsed
time rather than effort.

Simulating slow networks
------------------------

To get realistically slow network performance for manually measuring
performance, we can simulate 500ms latency (thus 1000ms round trips)::

 $ sudo tc qdisc add dev lo root netem delay 500ms

Normal system behaviour is restored with ::

 $ sudo tc qdisc del dev lo root

A more precise version that only filters traffic to port 4155 is::

 tc qdisc add dev lo root handle 1: prio
 tc qdisc add dev lo parent 1:3 handle 30: netem delay 500ms
 tc qdisc add dev lo parent 30:1 handle 40: prio
 tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 4155 0xffff flowid 1:3 handle 800::800
 tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4155 0xffff flowid 1:3 handle 800::801

and to remove this::

 tc filter del dev lo protocol ip parent 1: pref 3 u32
 tc qdisc del dev lo root handle 1:

You can use similar code to add additional delay to a real network
interface, perhaps only when talking to a particular server or pointing at
a VM.  For more information see <http://lartc.org/>.
.. |--| unicode:: U+2014

..
   vim: ft=rst tw=74 ai et sw=4