Bazaar can optionally produce output in the machine-readable subunit_
format, so that test output can be post-processed by various tools. To
generate a subunit test stream::

    $ ./bzr selftest --subunit

Processing such a stream can be done using a variety of tools including:

* The builtin ``subunit2pyunit``, ``subunit-filter``, ``subunit-ls``,
  ``subunit2junitxml`` from the subunit project.

* tribunal_, a GUI for showing test results.

* testrepository_, a tool for gathering and managing test runs.

.. _subunit: https://launchpad.net/subunit/
.. _tribunal: https://launchpad.net/tribunal/
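Because the (v1) subunit stream is a line-oriented text protocol, simple post-processing is possible even without those tools. Here is a minimal sketch of consuming such a stream; it is illustrative only, handling just bare ``test:``/``success:``/``failure:`` style lines, while real streams also carry details blocks, tags and timestamps:

```python
def summarize_subunit(stream_text):
    """Tally test outcomes in a simplified subunit v1 text stream."""
    counts = {'success': 0, 'failure': 0, 'error': 0, 'skip': 0}
    for line in stream_text.splitlines():
        for outcome in counts:
            if line.startswith(outcome + ':'):
                counts[outcome] += 1
    return counts

stream = """\
test: bzrlib.tests.test_ui.TestUI.test_progress
success: bzrlib.tests.test_ui.TestUI.test_progress
test: bzrlib.tests.test_osutils.TestRename.test_rename
failure: bzrlib.tests.test_osutils.TestRename.test_rename
"""
print(summarize_subunit(stream))
# {'success': 1, 'failure': 1, 'error': 0, 'skip': 0}
```

For anything beyond quick tallies, use the subunit library itself.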
Bazaar ships with a config file for testrepository_. This can be very
useful for keeping track of failing tests and doing general workflow
support. To run tests using testrepository::

    $ testr run

To run only failing tests::

    $ testr run --failing

To run only some tests, without plugins::

    $ testr run test_selftest -- --no-plugins

See the testrepository documentation for more details.

.. _testrepository: https://launchpad.net/testrepository
Babune continuous integration
-----------------------------

We have a Hudson continuous-integration system that automatically runs
tests across various platforms. In the future we plan to add more
combinations including testing plugins. See
<http://babune.ladeuil.net:24842/>. (Babune = Bazaar Buildbot Network.)
Running tests in parallel
-------------------------

Bazaar can use subunit to spawn multiple test processes. There is
slightly more chance you will hit ordering or timing-dependent bugs but
it's much faster::

    $ ./bzr selftest --parallel=fork

Note that you will need the subunit library
<https://launchpad.net/subunit/> to use this, which is in
``python-subunit`` on Ubuntu.
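Conceptually, ``--parallel=fork`` partitions the test ids across worker processes and merges the resulting streams; the changed per-process ordering is part of what exposes the ordering-dependent bugs mentioned above. A rough sketch of the partitioning step (the names here are illustrative, not bzrlib's actual internals):

```python
def partition_tests(test_ids, concurrency):
    """Deal test ids round-robin into `concurrency` buckets; each
    bucket would then run in its own forked child process."""
    buckets = [[] for _ in range(concurrency)]
    for i, test_id in enumerate(test_ids):
        buckets[i % concurrency].append(test_id)
    return buckets

print(partition_tests(['t1', 't2', 't3', 't4', 't5'], 2))
# [['t1', 't3', 't5'], ['t2', 't4']]
```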
Running tests from a ramdisk
----------------------------

The tests create and delete a lot of temporary files. In some cases you
can make the test suite run much faster by running it on a ramdisk. For
example::

    $ sudo mkdir /ram
    $ sudo mount -t tmpfs none /ram
    $ TMPDIR=/ram ./bzr selftest ...

You could also change ``/tmp`` in ``/etc/fstab`` to have type ``tmpfs``,
if you don't mind possibly losing other files in there when the machine
restarts. Add this line (if there is none for ``/tmp`` already)::

    none           /tmp            tmpfs   defaults        0       0

With a 6-core machine and ``--parallel=fork`` using a tmpfs doubles the
test execution speed.
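If you want to confirm that ``TMPDIR`` really points at a tmpfs, the mount table tells you the filesystem type. A small hypothetical helper (not part of bzrlib), shown here against sample ``/proc/mounts`` data rather than the live system:

```python
def fs_type(mounts_text, mount_point):
    """Return the filesystem type for mount_point, parsing
    /proc/mounts-style lines: 'device mountpoint fstype options dump pass'."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == mount_point:
            return fields[2]
    return None

sample = ("/dev/sda1 / ext4 rw,relatime 0 0\n"
          "none /ram tmpfs rw,relatime 0 0\n")
print(fs_type(sample, '/ram'))  # tmpfs
```

On a real system you would pass ``open('/proc/mounts').read()`` instead of the sample string.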
The actual use of ScriptRunner within a TestCase looks something like
this::

    from bzrlib.tests import script

    def test_unshelve_keep(self):
        script.run_script(self, '''
            $ bzr shelve --all -m Foo
            $ bzr unshelve --keep
            ''')
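The script strings accepted by ``run_script`` follow a simple convention: lines starting with ``$`` are commands, and any following non-command lines are the output expected from them. A toy parser for that convention (for illustration only; bzrlib's real implementation lives in ``bzrlib.tests.script`` and does much more):

```python
def parse_script(text):
    """Split a shell-like test script into (command, expected_output) pairs."""
    steps = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('$ '):
            steps.append((line[2:], []))
        elif steps:
            steps[-1][1].append(line)  # expected output of the last command
    return steps

print(parse_script('''
$ bzr shelve --all -m Foo
$ bzr shelve --list
1: Foo
'''))
# [('bzr shelve --all -m Foo', []), ('bzr shelve --list', ['1: Foo'])]
```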
`bzrlib.tests.test_import_tariff` has some tests that measure how many
Python modules are loaded to run some representative commands.

We want to avoid loading code unnecessarily, for reasons including:

* Python modules are interpreted when they're loaded, either to define
  classes or modules or perhaps to initialize some structures.

* With a cold cache we may incur blocking real disk IO for each module.

* Some modules depend on many others.

* Some optional modules such as `testtools` are meant to be soft
  dependencies and only needed for particular cases. If they're loaded in
  other cases then bzr may break for people who don't have those modules.

`test_import_tariff` allows us to check that removal of imports doesn't
regress.

This is done by running the command in a subprocess with
``--profile-imports``. Starting a whole Python interpreter is pretty
slow, so we don't want exhaustive testing here, but just enough to guard
against distinct fixed problems.

Assertions about precisely what is loaded tend to be brittle so we instead
make assertions that particular things aren't loaded.
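The same style of negative assertion can be made with nothing more than the standard library: run some code in a fresh child interpreter and inspect which modules ended up in ``sys.modules``. A simplified sketch of the approach (bzr's real tests drive ``bzr`` with ``--profile-imports`` instead):

```python
import subprocess
import sys

def modules_loaded_by(code):
    """Run `code` in a fresh interpreter; return the modules it imported."""
    dump = '\nimport sys\nprint("\\n".join(sorted(sys.modules)))'
    out = subprocess.check_output([sys.executable, '-c', code + dump])
    return set(out.decode().split())

loaded = modules_loaded_by('import os')
# Assert on what is *not* loaded; exhaustive lists are too brittle.
assert 'tkinter' not in loaded
assert 'os' in loaded
```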
Unless selftest is run with ``--no-plugins``, modules will be loaded in
the usual way and checks made on what they cause to be loaded. This is
probably worth checking into, because many bzr users have at least some
plugins installed (and they're included in binary installers).

In theory, plugins might have a good reason to load almost anything:
someone might write a plugin that opens a network connection or pops up a
gui window every time you run 'bzr status'. However, it's more likely
that the code to do these things is just being loaded accidentally. We
might eventually need to have a way to make exceptions for particular
plugins.

Some things to check:

* non-GUI commands shouldn't load GUI libraries

* operations on bzr native formats shouldn't load foreign branch libraries

* network code shouldn't be loaded for purely local operations

* particularly expensive Python built-in modules shouldn't be loaded
  unless there is a good reason
Testing locking behaviour
-------------------------

In order to test the locking behaviour of commands, it is possible to install
a hook that is called when a write lock is acquired, released or broken.
(Read locks also exist, but they cannot be discovered in this way.)

A hook can be installed by calling bzrlib.lock.Lock.hooks.install_named_hook.
The three valid hooks are: `lock_acquired`, `lock_released` and `lock_broken`.

Example::

    locks_acquired = []
    locks_released = []

    lock.Lock.hooks.install_named_hook('lock_acquired',
        locks_acquired.append, None)
    lock.Lock.hooks.install_named_hook('lock_released',
        locks_released.append, None)

`locks_acquired` will now receive a LockResult instance for all locks acquired
since the time the hook is installed.

The last part of the `lock_url` allows you to identify the type of object
that is locked:

- BzrDir: `/branch-lock`
- Working tree: `/checkout/lock`
- Branch: `/branch/lock`
- Repository: `/repository/lock`

To test if a lock is a write lock on a working tree, one can do the
following::

    self.assertEndsWith(locks_acquired[0].lock_url, "/checkout/lock")
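Under the hood this is a plain named-callback registry. A stripped-down model of the behaviour (purely illustrative; the real classes live in ``bzrlib.hooks`` and ``bzrlib.lock``):

```python
class LockResult(object):
    """Carries the URL of the lock the event happened on."""
    def __init__(self, lock_url):
        self.lock_url = lock_url

class LockHooks(object):
    """Callbacks keyed by event name, mirroring install_named_hook."""
    def __init__(self):
        self._hooks = {'lock_acquired': [], 'lock_released': [],
                       'lock_broken': []}

    def install_named_hook(self, name, callback, label):
        self._hooks[name].append(callback)

    def fire(self, name, result):
        for callback in self._hooks[name]:
            callback(result)

hooks = LockHooks()
locks_acquired = []
hooks.install_named_hook('lock_acquired', locks_acquired.append, None)
hooks.fire('lock_acquired', LockResult('file:///tmp/tree/.bzr/checkout/lock'))
assert locks_acquired[0].lock_url.endswith('/checkout/lock')
```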
See bzrlib/tests/commands/test_revert.py for an example of how to use this
for testing locks.
TestCase
    A base TestCase that extends the Python standard library's
    TestCase in several ways.  TestCase is built on
    ``testtools.TestCase``, which gives it support for more assertion
    methods (e.g. ``assertContainsRe``), ``addCleanup``, and other
    features (see its API docs for details).  It also has a ``setUp`` that
    makes sure that global state like registered hooks and loggers won't
    interfere with your test.  All tests should use this base class
    (whether directly or via a subclass).  Note that we are trying not to
    add more assertions at this point, and instead to build up a library
    of ``bzrlib.tests.matchers``.
TestCaseWithMemoryTransport
    Extends TestCase and adds methods like ``get_transport``,

Please see bzrlib.treebuilder for more details.
Temporarily changing state
~~~~~~~~~~~~~~~~~~~~~~~~~~

If your test needs to temporarily mutate some global state, and you need
it restored at the end, you can say for example::

    self.overrideAttr(osutils, '_cached_user_encoding', 'latin-1')
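``overrideAttr`` boils down to "remember the old value, set the new one, and register a cleanup that restores it". A rough standalone sketch (not the actual bzrlib implementation; ``cleanups`` stands in for the test case's cleanup list):

```python
_NOT_SET = object()  # distinguishes "attribute absent" from "attribute is None"

def override_attr(cleanups, obj, attr_name, new_value):
    """Set obj.attr_name to new_value; queue a cleanup restoring it."""
    old = getattr(obj, attr_name, _NOT_SET)
    if old is _NOT_SET:
        cleanups.append(lambda: delattr(obj, attr_name))
    else:
        cleanups.append(lambda: setattr(obj, attr_name, old))
    setattr(obj, attr_name, new_value)

class Config(object):
    encoding = 'utf-8'

cleanups = []
override_attr(cleanups, Config, 'encoding', 'latin-1')
assert Config.encoding == 'latin-1'
for cleanup in reversed(cleanups):  # cleanups run LIFO, as in TestCase
    cleanup()
assert Config.encoding == 'utf-8'
```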
Our base ``TestCase`` class provides an ``addCleanup`` method, which
should be used instead of ``tearDown``. All the cleanups are run when the
test finishes, regardless of whether it passes or fails. If one cleanup
fails, later cleanups are still run.

(The same facility is available outside of tests through
``bzrlib.cleanup``.)
Generally we prefer automated testing but sometimes a manual test is the
right thing, especially for performance tests that want to measure elapsed
time rather than effort.

Simulating slow networks
------------------------

To get realistically slow network performance for manually measuring
performance, we can simulate 500ms latency (thus 1000ms round trips)::

    $ sudo tc qdisc add dev lo root netem delay 500ms

Normal system behaviour is restored with ::

    $ sudo tc qdisc del dev lo root

A more precise version that only filters traffic to port 4155 is::

    tc qdisc add dev lo root handle 1: prio
    tc qdisc add dev lo parent 1:3 handle 30: netem delay 500ms
    tc qdisc add dev lo parent 30:1 handle 40: prio
    tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 4155 0xffff flowid 1:3 handle 800::800
    tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4155 0xffff flowid 1:3 handle 800::801

and to remove this::

    tc filter del dev lo protocol ip parent 1: pref 3 u32
    tc qdisc del dev lo root handle 1:

You can use similar code to add additional delay to a real network
interface, perhaps only when talking to a particular server or pointing at
a VM. For more information see <http://lartc.org/>.
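A quick way to check that the simulated latency actually took effect is to time a round trip over loopback yourself. A hypothetical helper (not part of bzr):

```python
import socket
import threading
import time

def measure_rtt(host='127.0.0.1'):
    """Time one request/response round trip over a loopback TCP socket."""
    server = socket.socket()
    server.bind((host, 0))
    server.listen(1)
    port = server.getsockname()[1]

    def echo_once():
        conn, _ = server.accept()
        conn.sendall(conn.recv(1))  # echo a single byte back
        conn.close()

    worker = threading.Thread(target=echo_once)
    worker.start()
    client = socket.create_connection((host, port))
    start = time.time()
    client.sendall(b'x')
    client.recv(1)
    rtt = time.time() - start
    client.close()
    worker.join()
    server.close()
    return rtt

print('round trip: %.1f ms' % (measure_rtt() * 1000))
```

With the ``netem delay 500ms`` rule in place you should see round trips around a second; without it, well under a millisecond.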
.. |--| unicode:: U+2014

..
   vim: ft=rst tw=74 ai et sw=4