:mod:`zlib` --- Compression compatible with :program:`gzip`
===========================================================

.. module:: zlib
   :synopsis: Low-level interface to compression and decompression routines
              compatible with gzip.


For applications that require data compression, the functions in this module
allow compression and decompression, using the zlib library.  The zlib library
has its own home page at http://www.zlib.net.  There are known
incompatibilities between the Python module and versions of the zlib library
earlier than 1.1.3; 1.1.3 has a security vulnerability, so we recommend using
1.1.4 or later.

zlib's functions have many options and often need to be used in a particular
order.  This documentation doesn't attempt to cover all of the permutations;
consult the zlib manual at http://www.zlib.net/manual.html for authoritative
information.

For reading and writing ``.gz`` files see the :mod:`gzip` module.  For
other archive formats, see the :mod:`bz2`, :mod:`zipfile`, and
:mod:`tarfile` modules.

The available exception and functions in this module are:


.. exception:: error

   Exception raised on compression and decompression errors.


.. function:: adler32(data[, value])

   Computes an Adler-32 checksum of *data*.  (An Adler-32 checksum is almost as
   reliable as a CRC32 but can be computed much more quickly.)  If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used.  This allows computing a running checksum over the
   concatenation of several inputs.  The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures.  Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   This function always returns an integer object.

.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``adler32(data) & 0xffffffff``.  If you are only using
   the checksum in packed binary format, this is not necessary, as the
   return value is the correct 32-bit binary representation
   regardless of sign.

.. versionchanged:: 2.6
   The return value is in the range [-2**31, 2**31-1]
   regardless of platform.  In older versions the value would be
   signed on some platforms and unsigned on others.

.. versionchanged:: 3.0
   The return value is unsigned and in the range [0, 2**32-1]
   regardless of platform.


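For example, a running checksum over two pieces of data can be carried from one
call to the next (a minimal sketch; the sample strings and the ``& 0xffffffff``
masking simply follow the note above)::

   >>> import zlib
   >>> checksum = zlib.adler32("spam and ")
   >>> checksum = zlib.adler32("eggs", checksum)       # continue from the previous value
   >>> checksum == zlib.adler32("spam and eggs")       # same as checksumming it all at once
   True
   >>> portable = checksum & 0xffffffff                # identical on all platforms and versions
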
.. function:: compress(string[, level])

   Compresses the data in *string*, returning a string containing compressed data.
   *level* is an integer from ``1`` to ``9`` controlling the level of compression;
   ``1`` is fastest and produces the least compression, ``9`` is slowest and
   produces the most.  The default value is ``6``.  Raises the :exc:`error`
   exception if any error occurs.


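For instance, a simple round trip at the highest compression level (a sketch;
the sample string and the explicit level ``9`` are only illustrative)::

   >>> import zlib
   >>> original = "witch which has which witches wrist watch" * 10
   >>> packed = zlib.compress(original, 9)
   >>> len(packed) < len(original)
   True
   >>> zlib.decompress(packed) == original
   True
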
.. function:: compressobj([level])

   Returns a compression object, to be used for compressing data streams that won't
   fit into memory at once.  *level* is an integer from ``1`` to ``9`` controlling
   the level of compression; ``1`` is fastest and produces the least compression,
   ``9`` is slowest and produces the most.  The default value is ``6``.


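A minimal streaming sketch (the chunk list and variable names are illustrative;
the :meth:`Compress.compress` and :meth:`Compress.flush` methods are described
below)::

   >>> import zlib
   >>> compressor = zlib.compressobj(9)
   >>> chunks = ["first chunk ", "second chunk ", "third chunk"]
   >>> packed = "".join([compressor.compress(c) for c in chunks])
   >>> packed += compressor.flush()                    # finish the stream
   >>> zlib.decompress(packed)
   'first chunk second chunk third chunk'
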
.. function:: crc32(data[, value])

   .. index::
      single: Cyclic Redundancy Check
      single: checksum; Cyclic Redundancy Check

   Computes a CRC (Cyclic Redundancy Check) checksum of *data*.  If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used.  This allows computing a running checksum over the
   concatenation of several inputs.  The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures.  Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   This function always returns an integer object.

.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``crc32(data) & 0xffffffff``.  If you are only using
   the checksum in packed binary format, this is not necessary, as the
   return value is the correct 32-bit binary representation
   regardless of sign.

.. versionchanged:: 2.6
   The return value is in the range [-2**31, 2**31-1]
   regardless of platform.  In older versions the value would be
   signed on some platforms and unsigned on others.

.. versionchanged:: 3.0
   The return value is unsigned and in the range [0, 2**32-1]
   regardless of platform.


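A hedged sketch of checksumming a file read in pieces and packing the result in
a fixed binary form (the file name, chunk size, and use of :mod:`struct` are
assumptions made for the example)::

   import struct
   import zlib

   crc = 0
   with open("archive.bin", "rb") as f:                # hypothetical input file
       for chunk in iter(lambda: f.read(16384), ""):
           crc = zlib.crc32(chunk, crc)                # carry the running value forward
   packed = struct.pack(">I", crc & 0xffffffff)        # portable unsigned 32-bit form
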
.. function:: decompress(string[, wbits[, bufsize]])

   Decompresses the data in *string*, returning a string containing the
   uncompressed data.  The *wbits* parameter controls the size of the window
   buffer, and is discussed further below.
   If *bufsize* is given, it is used as the initial size of the output
   buffer.  Raises the :exc:`error` exception if any error occurs.

   The absolute value of *wbits* is the base two logarithm of the size of the
   history buffer (the "window size") used when compressing data.  Its absolute
   value should be between 8 and 15 for the most recent versions of the zlib
   library, larger values resulting in better compression at the expense of greater
   memory usage.  When decompressing a stream, *wbits* must not be smaller
   than the size originally used to compress the stream; using a too-small
   value will result in an exception.  The default value is therefore the
   highest value, 15.  When *wbits* is negative, the standard
   zlib header is suppressed; this is an undocumented feature of the
   zlib library, used for compatibility with :program:`unzip`'s compression file
   format.

   *bufsize* is the initial size of the buffer used to hold decompressed data.  If
   more space is required, the buffer size will be increased as needed, so you
   don't have to get this value exactly right; tuning it will only save a few calls
   to :cfunc:`malloc`.  The default size is 16384.


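For example (a sketch; the sample data and the explicit *wbits* and *bufsize*
arguments simply restate the defaults)::

   >>> import zlib
   >>> packed = zlib.compress("a" * 1000)
   >>> zlib.decompress(packed) == "a" * 1000
   True
   >>> zlib.decompress(packed, 15, 64) == "a" * 1000   # a small initial buffer just grows as needed
   True
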
.. function:: decompressobj([wbits])

   Returns a decompression object, to be used for decompressing data streams that
   won't fit into memory at once.  The *wbits* parameter controls the size of the
   window buffer.

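A minimal sketch of decompressing a stream piece by piece (the 64-byte slices
and variable names are illustrative; the object's methods are described
below)::

   >>> import zlib
   >>> packed = zlib.compress("stream of data " * 50)
   >>> decompressor = zlib.decompressobj()
   >>> pieces = [decompressor.decompress(packed[i:i + 64])
   ...           for i in range(0, len(packed), 64)]
   >>> "".join(pieces) + decompressor.flush() == "stream of data " * 50
   True
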
Compression objects support the following methods:


.. method:: Compress.compress(string)

   Compress *string*, returning a string containing compressed data for at least
   part of the data in *string*.  This data should be concatenated to the output
   produced by any preceding calls to the :meth:`compress` method.  Some input may
   be kept in internal buffers for later processing.


.. method:: Compress.flush([mode])

   All pending input is processed, and a string containing the remaining compressed
   output is returned.  *mode* can be selected from the constants
   :const:`Z_SYNC_FLUSH`, :const:`Z_FULL_FLUSH`, or :const:`Z_FINISH`,
   defaulting to :const:`Z_FINISH`.  :const:`Z_SYNC_FLUSH` and
   :const:`Z_FULL_FLUSH` allow compressing further strings of data, while
   :const:`Z_FINISH` finishes the compressed stream and prevents compressing any
   more data.  After calling :meth:`flush` with *mode* set to :const:`Z_FINISH`,
   the :meth:`compress` method cannot be called again; the only realistic action is
   to delete the object.


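A sketch of :const:`Z_SYNC_FLUSH`, which makes everything compressed so far
decodable while leaving the stream open for more data (the message strings are
illustrative)::

   >>> import zlib
   >>> compressor = zlib.compressobj()
   >>> sent = compressor.compress("first message ")
   >>> sent += compressor.flush(zlib.Z_SYNC_FLUSH)     # stream remains open
   >>> zlib.decompressobj().decompress(sent)           # already decodable up to this point
   'first message '
   >>> sent += compressor.compress("second message")   # further compression is still allowed
   >>> sent += compressor.flush()                      # Z_FINISH ends the stream
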
.. method:: Compress.copy()

   Returns a copy of the compression object.  This can be used to efficiently
   compress a set of data that share a common initial prefix.

   .. versionadded:: 2.5

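A sketch of the shared-prefix case (the header and payload strings are only
illustrative)::

   >>> import zlib
   >>> shared = zlib.compressobj()
   >>> prefix = shared.compress("common header " * 20)
   >>> branch_a, branch_b = shared.copy(), shared.copy()
   >>> record_a = prefix + branch_a.compress("payload A") + branch_a.flush()
   >>> record_b = prefix + branch_b.compress("payload B") + branch_b.flush()
   >>> zlib.decompress(record_a).endswith("payload A")
   True
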
Decompression objects support the following methods, and two attributes:


.. attribute:: Decompress.unused_data

   A string which contains any bytes past the end of the compressed data.  That is,
   this remains ``""`` until the last byte that contains compression data is
   available.  If the whole string turned out to contain compressed data, this is
   ``""``, the empty string.

   The only way to determine where a string of compressed data ends is by actually
   decompressing it.  This means that when compressed data is contained in part of
   a larger file, you can only find the end of it by reading data and feeding it
   followed by some non-empty string into a decompression object's
   :meth:`decompress` method until the :attr:`unused_data` attribute is no longer
   the empty string.

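For example, when a compressed block is followed by unrelated bytes, the
leftover ends up in :attr:`unused_data` (the trailing text is an arbitrary
placeholder)::

   >>> import zlib
   >>> blob = zlib.compress("compressed part") + "trailing bytes"
   >>> decompressor = zlib.decompressobj()
   >>> decompressor.decompress(blob)
   'compressed part'
   >>> decompressor.unused_data
   'trailing bytes'
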
.. attribute:: Decompress.unconsumed_tail

   A string that contains any data that was not consumed by the last
   :meth:`decompress` call because it exceeded the limit for the uncompressed data
   buffer.  This data has not yet been seen by the zlib machinery, so you must feed
   it (possibly with further data concatenated to it) back to a subsequent
   :meth:`decompress` method call in order to get correct output.


.. method:: Decompress.decompress(string[, max_length])

   Decompress *string*, returning a string containing the uncompressed data
   corresponding to at least part of the data in *string*.  This data should be
   concatenated to the output produced by any preceding calls to the
   :meth:`decompress` method.  Some of the input data may be preserved in internal
   buffers for later processing.

   If the optional parameter *max_length* is supplied then the return value will be
   no longer than *max_length*.  This may mean that not all of the compressed input
   can be processed, and unconsumed data will be stored in the attribute
   :attr:`unconsumed_tail`.  This string must be passed to a subsequent call to
   :meth:`decompress` if decompression is to continue.  If *max_length* is not
   supplied then the whole input is decompressed, and :attr:`unconsumed_tail` is an
   empty string.


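A sketch of a bounded decompression loop that re-feeds :attr:`unconsumed_tail`
until all input has been processed (the 1024-byte limit and sample data are
illustrative)::

   >>> import zlib
   >>> packed = zlib.compress("x" * 10000)
   >>> decompressor = zlib.decompressobj()
   >>> pieces = []
   >>> data = packed
   >>> while data:
   ...     pieces.append(decompressor.decompress(data, 1024))   # at most 1024 bytes per call
   ...     data = decompressor.unconsumed_tail
   ...
   >>> pieces.append(decompressor.flush())             # any remaining buffered output
   >>> "".join(pieces) == "x" * 10000
   True
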
.. method:: Decompress.flush([length])

   All pending input is processed, and a string containing the remaining
   uncompressed output is returned.  After calling :meth:`flush`, the
   :meth:`decompress` method cannot be called again; the only realistic action is
   to delete the object.

   The optional parameter *length* sets the initial size of the output buffer.


.. method:: Decompress.copy()

   Returns a copy of the decompression object.  This can be used to save the state
   of the decompressor midway through the data stream in order to speed up random
   seeks into the stream at a future point.

   .. versionadded:: 2.5


.. seealso::

   Module :mod:`gzip`
      Reading and writing :program:`gzip`\ -format files.

   http://www.zlib.net
      The zlib library home page.

   http://www.zlib.net/manual.html
      The zlib manual explains the semantics and usage of the library's many
      functions.