:mod:`zlib` --- Compression compatible with :program:`gzip`
===========================================================

.. module:: zlib
   :synopsis: Low-level interface to compression and decompression routines
              compatible with gzip.


For applications that require data compression, the functions in this module
allow compression and decompression, using the zlib library.  The zlib library
has its own home page at http://www.zlib.net.  There are known
incompatibilities between the Python module and versions of the zlib library
earlier than 1.1.3; 1.1.3 has a security vulnerability, so we recommend using
1.1.4 or later.

zlib's functions have many options and often need to be used in a particular
order.  This documentation doesn't attempt to cover all of the permutations;
consult the zlib manual at http://www.zlib.net/manual.html for authoritative
information.

For reading and writing ``.gz`` files see the :mod:`gzip` module.

The available exception and functions in this module are:


.. exception:: error

   Exception raised on compression and decompression errors.


.. function:: adler32(data[, value])

   Computes an Adler-32 checksum of *data*.  (An Adler-32 checksum is almost
   as reliable as a CRC32 but can be computed much more quickly.)  If *value*
   is present, it is used as the starting value of the checksum; otherwise, a
   fixed default value is used.  This allows computing a running checksum over
   the concatenation of several inputs.  The algorithm is not
   cryptographically strong, and should not be used for authentication or
   digital signatures.  Since the algorithm is designed for use as a checksum
   algorithm, it is not suitable for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.

.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``adler32(data) & 0xffffffff``.  If you are only using
   the checksum in packed binary format this is not necessary, as the
   return value is the correct 32-bit binary representation
   regardless of sign.

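The running-checksum behaviour described above can be sketched as follows (an
illustrative example; the sample bytes are arbitrary):

```python
import zlib

# Checksum of the whole input in one call.
whole = zlib.adler32(b"hello, world")

# The same checksum computed incrementally: the result of the first
# call is passed as the starting *value* of the second.
running = zlib.adler32(b"hello, ")
running = zlib.adler32(b"world", running)

assert running == whole
```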

.. function:: compress(data[, level])

   Compresses the bytes in *data*, returning a bytes object containing
   compressed data.  *level* is an integer from ``0`` to ``9`` controlling the
   level of compression; ``1`` is fastest and produces the least compression,
   ``9`` is slowest and produces the most.  ``0`` is no compression.  The
   default value is ``6``.  Raises the :exc:`error` exception if any error
   occurs.
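A minimal sketch of one-shot compression and decompression (the sample data is
chosen to be highly compressible):

```python
import zlib

text = b"witch which has which witches wrist watch " * 100

fast = zlib.compress(text, 1)    # fastest, least compression
small = zlib.compress(text, 9)   # slowest, most compression

# Either result decompresses back to the original bytes.
assert zlib.decompress(fast) == text
assert zlib.decompress(small) == text
assert len(small) <= len(fast) < len(text)
```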


.. function:: compressobj([level])

   Returns a compression object, to be used for compressing data streams that
   won't fit into memory at once.  *level* is an integer from ``0`` to ``9``
   controlling the level of compression; ``1`` is fastest and produces the
   least compression, ``9`` is slowest and produces the most.  ``0`` is no
   compression.  The default value is ``6``.
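Incremental compression with a compression object can be sketched as follows
(the chunking here is artificial; real code would read pieces from a file or
socket):

```python
import zlib

compressor = zlib.compressobj(9)

chunks = []
for piece in (b"spam " * 1000, b"eggs " * 1000):
    chunks.append(compressor.compress(piece))
chunks.append(compressor.flush())   # emit whatever is still buffered

result = b"".join(chunks)
assert zlib.decompress(result) == b"spam " * 1000 + b"eggs " * 1000
```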


.. function:: crc32(data[, value])

   .. index::
      single: Cyclic Redundancy Check
      single: checksum; Cyclic Redundancy Check

   Computes a CRC (Cyclic Redundancy Check) checksum of *data*.  If *value* is
   present, it is used as the starting value of the checksum; otherwise, a
   fixed default value is used.  This allows computing a running checksum over
   the concatenation of several inputs.  The algorithm is not
   cryptographically strong, and should not be used for authentication or
   digital signatures.  Since the algorithm is designed for use as a checksum
   algorithm, it is not suitable for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.

.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``crc32(data) & 0xffffffff``.  If you are only using
   the checksum in packed binary format this is not necessary, as the
   return value is the correct 32-bit binary representation
   regardless of sign.

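The masking advice from the note can be sketched as follows (sample bytes
arbitrary):

```python
import zlib

# Masking guarantees the same non-negative value on every Python
# version and platform, as recommended in the note above.
checksum = zlib.crc32(b"message body") & 0xffffffff
assert 0 <= checksum <= 0xffffffff

# A running checksum over the concatenation of several inputs.
running = zlib.crc32(b"message ")
running = zlib.crc32(b"body", running)
assert running & 0xffffffff == checksum
```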


.. function:: decompress(data[, wbits[, bufsize]])

   Decompresses the bytes in *data*, returning a bytes object containing the
   uncompressed data.  The *wbits* parameter controls the size of the window
   buffer, and is discussed further below.
   If *bufsize* is given, it is used as the initial size of the output
   buffer.  Raises the :exc:`error` exception if any error occurs.

   The absolute value of *wbits* is the base two logarithm of the size of the
   history buffer (the "window size") used when compressing data.  Its
   absolute value should be between 8 and 15 for the most recent versions of
   the zlib library, larger values resulting in better compression at the
   expense of greater memory usage.  When decompressing a stream, *wbits* must
   not be smaller than the size originally used to compress the stream; using
   a too-small value will result in an exception.  The default value is
   therefore the highest value, 15.  When *wbits* is negative, the standard
   zlib header and trailing checksum are suppressed, and *data* is treated as
   a raw deflate stream.

   *bufsize* is the initial size of the buffer used to hold decompressed data.
   If more space is required, the buffer size will be increased as needed, so
   you don't have to get this value exactly right; tuning it will only save a
   few calls to :c:func:`malloc`.  The default size is 16384.
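To illustrate the parameters described above (an illustrative sketch):
*bufsize* only sets a starting size, and malformed input raises :exc:`error`:

```python
import zlib

original = b"x" * 10000
data = zlib.compress(original)

# A deliberately tiny bufsize still works; it merely costs a few
# internal buffer enlargements along the way.
assert zlib.decompress(data, 15, 1) == original

# Input that is not a zlib stream raises zlib.error.
caught = False
try:
    zlib.decompress(b"this is not compressed data")
except zlib.error:
    caught = True
assert caught
```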


.. function:: decompressobj([wbits])

   Returns a decompression object, to be used for decompressing data streams
   that won't fit into memory at once.  The *wbits* parameter controls the
   size of the window buffer.
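Stream-style decompression can be sketched as follows (the 256-byte chunking
simulates data arriving in pieces):

```python
import zlib

original = b"streamed data " * 500
compressed = zlib.compress(original)

decompressor = zlib.decompressobj()
pieces = []
for i in range(0, len(compressed), 256):    # feed arriving chunks
    pieces.append(decompressor.decompress(compressed[i:i + 256]))
pieces.append(decompressor.flush())

assert b"".join(pieces) == original
```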

Compression objects support the following methods:


.. method:: Compress.compress(data)

   Compress *data*, returning a bytes object containing compressed data for
   at least part of the data in *data*.  This data should be concatenated to
   the output produced by any preceding calls to the :meth:`compress` method.
   Some input may be kept in internal buffers for later processing.


.. method:: Compress.flush([mode])

   All pending input is processed, and a bytes object containing the remaining
   compressed output is returned.  *mode* can be selected from the constants
   :const:`Z_SYNC_FLUSH`, :const:`Z_FULL_FLUSH`, or :const:`Z_FINISH`,
   defaulting to :const:`Z_FINISH`.  :const:`Z_SYNC_FLUSH` and
   :const:`Z_FULL_FLUSH` allow compressing further bytestrings of data, while
   :const:`Z_FINISH` finishes the compressed stream and prevents compressing
   any more data.  After calling :meth:`flush` with *mode* set to
   :const:`Z_FINISH`, the :meth:`compress` method cannot be called again; the
   only realistic action is to delete the object.
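The difference between the flush modes can be sketched as follows: after
:const:`Z_SYNC_FLUSH` the object keeps accepting data, and the emitted pieces
still form one valid stream:

```python
import zlib

compressor = zlib.compressobj()
part1 = compressor.compress(b"first chunk ") + compressor.flush(zlib.Z_SYNC_FLUSH)
# Z_SYNC_FLUSH left the stream open, so compression can continue.
part2 = compressor.compress(b"second chunk") + compressor.flush(zlib.Z_FINISH)

assert zlib.decompress(part1 + part2) == b"first chunk second chunk"
```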


.. method:: Compress.copy()

   Returns a copy of the compression object.  This can be used to efficiently
   compress a set of data that share a common initial prefix.


Decompression objects support the following methods and two attributes:


.. attribute:: Decompress.unused_data

   A bytes object which contains any bytes past the end of the compressed
   data.  That is, this remains ``b""`` until the last byte that contains
   compression data is available.  If the whole bytestring turned out to
   contain compressed data, this is ``b""``, an empty bytes object.

   The only way to determine where a bytestring of compressed data ends is by
   actually decompressing it.  This means that when compressed data is
   contained in part of a larger file, you can only find the end of it by
   reading data and feeding it followed by some non-empty bytestring into a
   decompression object's :meth:`decompress` method until the
   :attr:`unused_data` attribute is no longer empty.

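The behaviour of :attr:`unused_data` can be sketched as follows (the trailing
bytes are arbitrary):

```python
import zlib

compressed = zlib.compress(b"payload")
trailer = b"EXTRA BYTES AFTER THE STREAM"

decompressor = zlib.decompressobj()
out = decompressor.decompress(compressed + trailer)

assert out == b"payload"
# Everything past the end of the compressed stream lands here.
assert decompressor.unused_data == trailer
```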

.. attribute:: Decompress.unconsumed_tail

   A bytes object that contains any data that was not consumed by the last
   :meth:`decompress` call because it exceeded the limit for the uncompressed
   data buffer.  This data has not yet been seen by the zlib machinery, so you
   must feed it (possibly with further data concatenated to it) back to a
   subsequent :meth:`decompress` method call in order to get correct output.


.. method:: Decompress.decompress(data[, max_length])

   Decompress *data*, returning a bytes object containing the uncompressed
   data corresponding to at least part of the data in *data*.  This data
   should be concatenated to the output produced by any preceding calls to the
   :meth:`decompress` method.  Some of the input data may be preserved in
   internal buffers for later processing.

   If the optional parameter *max_length* is supplied then the return value
   will be no longer than *max_length*.  This may mean that not all of the
   compressed input can be processed; unconsumed data will be stored in the
   attribute :attr:`unconsumed_tail`.  This bytestring must be passed to a
   subsequent call to :meth:`decompress` if decompression is to continue.  If
   *max_length* is not supplied then the whole input is decompressed, and
   :attr:`unconsumed_tail` is empty.
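The *max_length* mechanism can be sketched as follows (the sizes are
arbitrary):

```python
import zlib

data = zlib.compress(b"a" * 10000)

decompressor = zlib.decompressobj()
first = decompressor.decompress(data, 100)    # cap the output length
assert len(first) <= 100

# The input that could not be processed is waiting in unconsumed_tail;
# feed it back to continue decompression.
rest = decompressor.decompress(decompressor.unconsumed_tail)
assert first + rest == b"a" * 10000
```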


.. method:: Decompress.flush([length])

   All pending input is processed, and a bytes object containing the remaining
   uncompressed output is returned.  After calling :meth:`flush`, the
   :meth:`decompress` method cannot be called again; the only realistic action
   is to delete the object.

   The optional parameter *length* sets the initial size of the output buffer.


.. method:: Decompress.copy()

   Returns a copy of the decompression object.  This can be used to save the
   state of the decompressor midway through the data stream in order to speed
   up random seeks into the stream at a future point.


.. seealso::

   Module :mod:`gzip`
      Reading and writing :program:`gzip`\ -format files.

   http://www.zlib.net
      The zlib library home page.

   http://www.zlib.net/manual.html
      The zlib manual explains the semantics and usage of the library's many
      functions.