This document describes the proposed programming interface for streaming
data from and into repositories. This programming interface should provide
a single interface for pulling data from and inserting data into a Bazaar
repository.

The goal is to eliminate the current requirement that extracting data from
a repository means either using a slow format, or knowing the format of
both the source repository and the target repository.

Here's a brief description of the use cases this interface is intended to
support.

We fetch data between repositories as part of push/pull/branch operations.
Fetching data is currently a very interactive process with lots of
requests. Having the data supplied as a single stream will improve the
performance of push and pull to remote servers, and for purely local
operations the streaming logic should help reduce memory pressure. In
fetch operations we always know the formats of both the source and the
target.

Smart server operations
~~~~~~~~~~~~~~~~~~~~~~~

With the smart server we support one streaming format, but this is only
usable when both the client and server have the same model of data, and
it requires non-optimal IO ordering for pack to pack operations. Ideally
we would provide both optimal IO ordering in the pack to pack case, and
correct ordering for pack to knit operations.

Bundles also create a stream of data for revisions from a repository.
Unlike fetch operations, we do not know the format of the target at the
time the stream is created. It would be good to be able to treat bundles
as frozen branches and repositories, so a serialised stream should be
suitable for this use.

At this point we are not trying to integrate data conversion into this
interface, though it is likely possible.

Some key aspects of the described interface are discussed in this section.

All users of this interface should be able to create an appropriate
stream with a single round trip.

There should be no need to seek in a stream when inserting data from it
into a repository. This places an ordering constraint on streams which
some repositories do not need.

At this point serialisation of a repository stream has not been specified.
Some considerations to bear in mind about serialisation are worth noting,
however.

While there shouldn't be too many users of weave repositories anymore,
avoiding pathological behaviour when a weave is being read is a good idea.
Having the weave itself embedded in the stream is very straightforward,
and does not require expensive on-the-fly extraction and re-diffing to
take place.

Being able to perform random reads from a repository stream which is a
bundle would allow stacking a bundle and a real repository together. This
will need the pack container format to be used in such a way that we can
avoid reading more data than needed within the pack container's readv
interface.

This describes the interface for requesting a stream, and the programming
interface a stream must provide. Streams that have been serialised should
expose the same interface.

To request a stream, three parameters are needed (a sketch of such a
request follows the list):

* A revision search to select the revisions to include.
* A data ordering flag. There are two values for this - 'unordered' and
  'topological'. 'unordered' streams are useful when inserting into
  repositories that have the ability to perform atomic insertions.
  'topological' streams are useful when converting data, or when
  inserting into repositories that cannot perform atomic insertions (such
  as knit or weave based repositories).
* A complete_inventory flag. When provided, this flag signals the stream
  generator to include all the data needed to construct the inventory of
  each revision included in the stream, rather than just deltas. This is
  useful when converting data from a repository with a different
  inventory serialisation, as pure deltas could not be reconstructed by
  the target.
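
For illustration, a request for a topologically ordered stream with
complete inventories might look like the sketch below. The
``TOPOLOGICAL`` constant name and the ``make_revision_search`` helper are
assumptions made for this example, not settled API::

    # Hypothetical helper: build a search covering the revisions wanted.
    search = make_revision_search(heads=['revid-tip'], stop=['revid-base'])
    stream = repository.get_repository_stream(
        search,        # which revisions to include
        TOPOLOGICAL,   # safe for non-atomic (knit/weave based) targets
        True)          # complete_inventory: full inventory data, no deltas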

Structure of a stream
---------------------

A stream is an object. It can be consistency checked via the ``check``
method (which consumes the stream). The ``iter_contents`` method can be
used to iterate the contents of the stream. The contents of the stream are
a series of top level records, each of which contains one or more
bytestrings (potentially as a delta against another item in the
repository) and some optional metadata.
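
As a minimal sketch, the surface of the stream object described above
might be written as follows (method bodies elided; this illustrates
shape, not a settled implementation)::

    class RepositoryStream(object):
        """Sketch of the stream object described above."""

        def check(self):
            """Consistency check the stream. This consumes the stream."""

        def iter_contents(self):
            """Yield the stream's top level records."""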

To consume a stream, obtain an iterator from the stream's
``iter_contents`` method. This iterator will yield the top level records.
Each record has two attributes. One is ``key_prefix``, a tuple key prefix
for the names of each of the bytestrings in the record. The other
attribute is ``entries``, an iterator of the individual items in the
record. Each item that the iterator yields is a factory which has metadata
about the entry and the ability to return the compressed bytes. This
factory can be decorated to allow obtaining different representations (for
example, from a compressed knit fulltext to a plain fulltext).

For example::

    stream = repository.get_repository_stream(search, UNORDERED, False)
    for record in stream.iter_contents():
        for factory in record.entries:
            compression = factory.storage_kind
            print "Object %s, compression type %s, %d bytes long." % (
                record.key_prefix + factory.key,
                compression, len(factory.get_bytes_as(compression)))

This structure should allow stream adapters to be written which can coerce
all records to the type of compression that a particular client needs. For
instance, inserting into weaves requires fulltexts, so a stream would be
adapted for weaves by an adapter that takes a stream, and the target
weave, and then uses the target weave to reconstruct full texts (which is
all that the weave inserter would ask for). In a similar approach, a
stream could internally delta compress many fulltexts and be able to
answer both fulltext and compressed record requests without extra IO.
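
As an illustration, such an adapter might look like the sketch below. The
class name and the ``reconstruct_fulltext`` helper (standing in for
whatever delta expansion the target weave can perform) are hypothetical::

    def reconstruct_fulltext(target_weave, factory):
        # Hypothetical helper: expand a delta using texts already
        # present in the target weave. Unimplemented in this sketch.
        raise NotImplementedError

    class FulltextStreamAdapter(object):
        """Sketch: present a stream whose entries are all fulltexts."""

        def __init__(self, stream, target_weave):
            self._stream = stream
            self._target = target_weave

        def iter_fulltexts(self):
            for record in self._stream.iter_contents():
                for factory in record.entries:
                    if factory.storage_kind == 'fulltext':
                        text = factory.get_bytes_as('fulltext')
                    else:
                        text = reconstruct_fulltext(self._target, factory)
                    yield record.key_prefix + factory.key, text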

Valid attributes on the factory are listed below (a sketch of such a
factory follows the list):

* sha1: Optional ascii representation of the sha1 of the bytestring (after
  delta reconstruction).
* storage_kind: Required kind of storage compression that has been used
  on the bytestring. One of ``mpdiff``, ``knit-annotated-ft``,
  ``knit-annotated-delta``, ``knit-ft``, ``knit-delta``, ``fulltext``.
* parents: Required graph parents to associate with this bytestring.
* compressor_data: Required opaque data relevant to the storage_kind.
  (This is set to None when the compressor has no special state needed.)
* key: The key for this bytestring. Like each parent, this is a tuple that
  should have the key_prefix prepended to it to give the unified key.
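
To make the factory's shape concrete, here is a sketch carrying the
attributes above; the class name, constructor, and the single-kind
``get_bytes_as`` behaviour are assumptions made for illustration::

    class RecordEntryFactory(object):
        """Sketch of an entry factory with the attributes listed above."""

        def __init__(self, key, parents, storage_kind, raw_bytes,
                     sha1=None, compressor_data=None):
            self.key = key                      # tuple, without key_prefix
            self.parents = parents              # required graph parents
            self.storage_kind = storage_kind    # e.g. 'fulltext'
            self.sha1 = sha1                    # optional ascii sha1
            self.compressor_data = compressor_data  # None if no state
            self._raw_bytes = raw_bytes

        def get_bytes_as(self, storage_kind):
            # Only the native kind is handled here; a real factory (or
            # a decorator on it) would convert between storage kinds.
            if storage_kind != self.storage_kind:
                raise ValueError('unsupported kind %r' % (storage_kind,))
            return self._raw_bytes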