:mod:`zlib` --- Compression compatible with :program:`gzip`
===========================================================

.. module:: zlib
   :synopsis: Low-level interface to compression and decompression routines
              compatible with gzip.


For applications that require data compression, the functions in this module
allow compression and decompression, using the zlib library.  The zlib library
has its own home page at http://www.zlib.net.  There are known
incompatibilities between the Python module and versions of the zlib library
earlier than 1.1.3; 1.1.3 has a security vulnerability, so we recommend using
1.1.4 or later.

zlib's functions have many options and often need to be used in a particular
order.  This documentation doesn't attempt to cover all of the permutations;
consult the zlib manual at http://www.zlib.net/manual.html for authoritative
information.

For reading and writing ``.gz`` files see the :mod:`gzip` module.

The available exception and functions in this module are:


.. exception:: error

   Exception raised on compression and decompression errors.


.. function:: adler32(data[, value])

   Computes an Adler-32 checksum of *data*.  (An Adler-32 checksum is almost as
   reliable as a CRC32 but can be computed much more quickly.)  If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used.  This allows computing a running checksum over the
   concatenation of several inputs.  The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures.  Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.

.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``adler32(data) & 0xffffffff``.  If you are only using
   the checksum in packed binary format this is not necessary as the
   return value is the correct 32-bit binary representation
   regardless of sign.

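For example, a running Adler-32 checksum over several blocks can be built up
incrementally (a minimal sketch; the block contents are arbitrary)::

   import zlib

   blocks = [b"first block", b"second block"]

   # Feed each block in turn, passing the previous result as the start value.
   checksum = zlib.adler32(blocks[0])
   checksum = zlib.adler32(blocks[1], checksum)

   # Equivalent to checksumming the concatenation in a single call.
   assert checksum == zlib.adler32(b"".join(blocks))

   # Mask as recommended above for a consistent value across Python versions.
   checksum &= 0xffffffff
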

.. function:: compress(data[, level])

   Compresses the bytes in *data*, returning a bytes object containing compressed data.
   *level* is an integer from ``1`` to ``9`` controlling the level of compression;
   ``1`` is fastest and produces the least compression, ``9`` is slowest and
   produces the most.  The default value is ``6``.  Raises the :exc:`error`
   exception if any error occurs.

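A minimal sketch of one-shot compression (the payload is deliberately
repetitive so the effect is visible)::

   import zlib

   data = b"witch which has which witches wrist watch " * 50

   compressed = zlib.compress(data, 9)   # level 9: best compression, slowest
   print(len(data), len(compressed))     # the output is much smaller
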

.. function:: compressobj([level])

   Returns a compression object, to be used for compressing data streams that won't
   fit into memory at once.  *level* is an integer from ``1`` to ``9`` controlling
   the level of compression; ``1`` is fastest and produces the least compression,
   ``9`` is slowest and produces the most.  The default value is ``6``.


.. function:: crc32(data[, value])

   .. index::
      single: Cyclic Redundancy Check
      single: checksum; Cyclic Redundancy Check

   Computes a CRC (Cyclic Redundancy Check) checksum of *data*.  If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used.  This allows computing a running checksum over the
   concatenation of several inputs.  The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures.  Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.

.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``crc32(data) & 0xffffffff``.  If you are only using
   the checksum in packed binary format this is not necessary as the
   return value is the correct 32-bit binary representation
   regardless of sign.

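For example, a CRC-32 can be updated incrementally in the same way as
:func:`adler32` (a minimal sketch with arbitrary data)::

   import zlib

   crc = zlib.crc32(b"hello ")
   crc = zlib.crc32(b"world", crc)

   # Same result as checksumming the whole message at once; mask both
   # values as recommended above for a consistent unsigned result.
   assert crc & 0xffffffff == zlib.crc32(b"hello world") & 0xffffffff
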

.. function:: decompress(data[, wbits[, bufsize]])

   Decompresses the bytes in *data*, returning a bytes object containing the
   uncompressed data.  The *wbits* parameter controls the size of the window
   buffer, and is discussed further below.
   If *bufsize* is given, it is used as the initial size of the output
   buffer.  Raises the :exc:`error` exception if any error occurs.

   The absolute value of *wbits* is the base two logarithm of the size of the
   history buffer (the "window size") used when compressing data.  Its absolute
   value should be between 8 and 15 for the most recent versions of the zlib
   library, larger values resulting in better compression at the expense of greater
   memory usage.  When decompressing a stream, *wbits* must not be smaller
   than the size originally used to compress the stream; using a too-small
   value will result in an exception.  The default value is therefore the
   highest value, 15.  When *wbits* is negative, the standard zlib header is
   suppressed and *data* is treated as a raw stream with no header or trailer.

   *bufsize* is the initial size of the buffer used to hold decompressed data.  If
   more space is required, the buffer size will be increased as needed, so you
   don't have to get this value exactly right; tuning it will only save a few calls
   to :c:func:`malloc`.  The default size is 16384.

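As an illustrative round trip through the one-shot functions (the example data
is arbitrary, and *wbits* and *bufsize* are left at their defaults)::

   import zlib

   original = b"zlib round trip " * 100
   compressed = zlib.compress(original)

   assert zlib.decompress(compressed) == original
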

.. function:: decompressobj([wbits])

   Returns a decompression object, to be used for decompressing data streams that
   won't fit into memory at once.  The *wbits* parameter controls the size of the
   window buffer.


Compression objects support the following methods:


.. method:: Compress.compress(data)

   Compress *data*, returning a bytes object containing compressed data for at least
   part of the data in *data*.  This data should be concatenated to the output
   produced by any preceding calls to the :meth:`compress` method.  Some input may
   be kept in internal buffers for later processing.


.. method:: Compress.flush([mode])

   All pending input is processed, and a bytes object containing the remaining compressed
   output is returned.  *mode* can be selected from the constants
   :const:`Z_SYNC_FLUSH`, :const:`Z_FULL_FLUSH`, or :const:`Z_FINISH`,
   defaulting to :const:`Z_FINISH`.  :const:`Z_SYNC_FLUSH` and
   :const:`Z_FULL_FLUSH` allow compressing further bytestrings of data, while
   :const:`Z_FINISH` finishes the compressed stream and prevents compressing any
   more data.  After calling :meth:`flush` with *mode* set to :const:`Z_FINISH`,
   the :meth:`compress` method cannot be called again; the only realistic action is
   to delete the object.

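As a sketch of incremental compression, a large input can be fed to a
compression object in pieces (here a plain list of chunks stands in for data
arriving from a file or socket)::

   import zlib

   compressor = zlib.compressobj(9)
   chunks = [b"first chunk ", b"second chunk ", b"third chunk"]

   output = []
   for chunk in chunks:
       # compress() may buffer input internally and return b"" for now.
       output.append(compressor.compress(chunk))

   # flush() with the default Z_FINISH emits the rest and ends the stream.
   output.append(compressor.flush())

   compressed = b"".join(output)
   assert zlib.decompress(compressed) == b"".join(chunks)
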

.. method:: Compress.copy()

   Returns a copy of the compression object.  This can be used to efficiently
   compress a set of data that share a common initial prefix.


Decompression objects support the following methods and attributes:


.. attribute:: Decompress.unused_data

   A bytes object which contains any bytes past the end of the compressed data.  That is,
   this remains ``b""`` until the last byte that contains compression data is
   available.  If the whole bytestring turned out to contain compressed data, this is
   ``b""``, an empty bytes object.


.. attribute:: Decompress.unconsumed_tail

   A bytes object that contains any data that was not consumed by the last
   :meth:`decompress` call because it exceeded the limit for the uncompressed data
   buffer.  This data has not yet been seen by the zlib machinery, so you must feed
   it (possibly with further data concatenated to it) back to a subsequent
   :meth:`decompress` method call in order to get correct output.


.. attribute:: Decompress.eof

   A boolean indicating whether the end of the compressed data stream has been
   reached.

   This makes it possible to distinguish between a properly-formed compressed
   stream, and an incomplete or truncated one.

   .. versionadded:: 3.3

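For example, :attr:`eof` can be used to detect a truncated stream (a minimal
sketch; requires Python 3.3 or later)::

   import zlib

   payload = zlib.compress(b"some data to compress")

   d = zlib.decompressobj()
   d.decompress(payload[:-4])    # withhold the last few bytes on purpose
   print(d.eof)                  # False: the stream is incomplete

   d.decompress(payload[-4:])    # feed the remainder
   print(d.eof)                  # True: the stream terminated properly
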

.. method:: Decompress.decompress(data[, max_length])

   Decompress *data*, returning a bytes object containing the uncompressed data
   corresponding to at least part of the data in *data*.  This data should be
   concatenated to the output produced by any preceding calls to the
   :meth:`decompress` method.  Some of the input data may be preserved in internal
   buffers for later processing.

   If the optional parameter *max_length* is supplied then the return value will be
   no longer than *max_length*.  This may mean that not all of the compressed input
   can be processed; unconsumed data will be stored in the attribute
   :attr:`unconsumed_tail`.  This bytestring must be passed to a subsequent call to
   :meth:`decompress` if decompression is to continue.  If *max_length* is not
   supplied then the whole input is decompressed, and :attr:`unconsumed_tail` is
   empty.

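A sketch of bounded decompression, looping until all compressed input has been
consumed (the 1024-byte limit is an arbitrary choice)::

   import zlib

   compressed = zlib.compress(b"a fairly repetitive payload " * 200)

   d = zlib.decompressobj()
   pieces = []
   buf = compressed
   while buf:
       # Return at most 1024 bytes per call; leftover compressed input
       # is kept in unconsumed_tail and fed back on the next iteration.
       pieces.append(d.decompress(buf, 1024))
       buf = d.unconsumed_tail

   # flush() returns any uncompressed output still buffered internally.
   pieces.append(d.flush())
   data = b"".join(pieces)
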

.. method:: Decompress.flush([length])

   All pending input is processed, and a bytes object containing the remaining
   uncompressed output is returned.  After calling :meth:`flush`, the
   :meth:`decompress` method cannot be called again; the only realistic action is
   to delete the object.

   The optional parameter *length* sets the initial size of the output buffer.


.. method:: Decompress.copy()

   Returns a copy of the decompression object.  This can be used to save the state
   of the decompressor midway through the data stream in order to speed up random
   seeks into the stream at a future point.


Information about the version of the zlib library in use is available through
the following constants:


.. data:: ZLIB_VERSION

   The version string of the zlib library that was used for building the module.
   This may be different from the zlib library actually used at runtime, which
   is available as :const:`ZLIB_RUNTIME_VERSION`.


.. data:: ZLIB_RUNTIME_VERSION

   The version string of the zlib library actually loaded by the interpreter.

   .. versionadded:: 3.3

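For example, to check whether the zlib used at build time matches the one
loaded at runtime (a small sketch; :const:`ZLIB_RUNTIME_VERSION` requires
Python 3.3 or later)::

   import zlib

   print(zlib.ZLIB_VERSION)          # zlib version used to build the module
   print(zlib.ZLIB_RUNTIME_VERSION)  # zlib version actually loaded

   if zlib.ZLIB_VERSION != zlib.ZLIB_RUNTIME_VERSION:
       print("built against a different zlib than the one in use")
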

.. seealso::

   Module :mod:`gzip`
      Reading and writing :program:`gzip`\ -format files.

   http://www.zlib.net
      The zlib library home page.

   http://www.zlib.net/manual.html
      The zlib manual explains the semantics and usage of the library's many
      functions.