:mod:`zlib` --- Compression compatible with :program:`gzip`
===========================================================

.. module:: zlib
   :synopsis: Low-level interface to compression and decompression routines
              compatible with gzip.


For applications that require data compression, the functions in this module
allow compression and decompression, using the zlib library. The zlib library
has its own home page at http://www.zlib.net. There are known
incompatibilities between the Python module and versions of the zlib library
earlier than 1.1.3; 1.1.3 has a security vulnerability, so we recommend using
1.1.4 or later.

zlib's functions have many options and often need to be used in a particular
order. This documentation doesn't attempt to cover all of the permutations;
consult the zlib manual at http://www.zlib.net/manual.html for authoritative
information.

For reading and writing ``.gz`` files see the :mod:`gzip` module. For
other archive formats, see the :mod:`bz2`, :mod:`zipfile`, and
:mod:`tarfile` modules.

The available exception and functions in this module are:


.. exception:: error

   Exception raised on compression and decompression errors.


.. function:: adler32(data[, value])

   Computes an Adler-32 checksum of *data*. (An Adler-32 checksum is almost as
   reliable as a CRC32 but can be computed much more quickly.) If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used. This allows computing a running checksum over the
   concatenation of several inputs. The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures. Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.


.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``adler32(data) & 0xffffffff``. If you are only using
   the checksum in packed binary format, this is not necessary as the
   return value is the correct 32-bit binary representation
   regardless of sign.


.. versionchanged:: 3.0
   The return value is unsigned and in the range [0, 2**32-1]
   regardless of platform.

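
A brief, informal sketch of :func:`adler32` (the sample bytes are arbitrary),
showing a running checksum over two chunks and the masking idiom from the note
above::

   >>> import zlib
   >>> chunk1, chunk2 = b"hello, ", b"world"
   >>> partial = zlib.adler32(chunk1)           # checksum of the first chunk
   >>> running = zlib.adler32(chunk2, partial)  # continue from the previous value
   >>> running == zlib.adler32(chunk1 + chunk2)
   True
   >>> running & 0xffffffff == running          # already unsigned on this version
   True
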

.. function:: compress(string[, level])

   Compresses the data in *string*, returning a string containing the
   compressed data. *level* is an integer from ``1`` to ``9`` controlling the
   level of compression; ``1`` is fastest and produces the least compression,
   ``9`` is slowest and produces the most. The default value is ``6``. Raises
   the :exc:`error` exception if any error occurs.

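
A short, informal round trip through :func:`compress` (the sample data is
arbitrary and deliberately repetitive, so it compresses well)::

   >>> import zlib
   >>> data = b"witch which has which witches wrist watch " * 100
   >>> packed = zlib.compress(data, 9)       # slowest, best compression
   >>> len(packed) < len(data)
   True
   >>> zlib.decompress(packed) == data       # see decompress() below
   True
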

.. function:: compressobj([level])

   Returns a compression object, to be used for compressing data streams that
   won't fit into memory at once. *level* is an integer from ``1`` to ``9``
   controlling the level of compression; ``1`` is fastest and produces the
   least compression, ``9`` is slowest and produces the most. The default
   value is ``6``.

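
An informal sketch of incremental compression with a compression object (the
chunks here are tiny literals; in practice they would be successive reads from
a file or socket)::

   >>> import zlib
   >>> co = zlib.compressobj(9)
   >>> stream = co.compress(b"first chunk, ")
   >>> stream += co.compress(b"second chunk")
   >>> stream += co.flush()                  # finish the stream
   >>> zlib.decompress(stream)
   b'first chunk, second chunk'
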

.. function:: crc32(data[, value])

   .. index::
      single: Cyclic Redundancy Check
      single: checksum; Cyclic Redundancy Check

   Computes a CRC (Cyclic Redundancy Check) checksum of *data*. If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used. This allows computing a running checksum over the
   concatenation of several inputs. The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures. Since
   the algorithm is designed for use as a checksum algorithm, it is not suitable
   for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.


.. note::
   To generate the same numeric value across all Python versions and
   platforms, use ``crc32(data) & 0xffffffff``. If you are only using
   the checksum in packed binary format, this is not necessary as the
   return value is the correct 32-bit binary representation
   regardless of sign.


.. versionchanged:: 3.0
   The return value is unsigned and in the range [0, 2**32-1]
   regardless of platform.

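
An informal sketch of the "packed binary format" case mentioned in the note:
the unsigned checksum can be stored as four big-endian bytes with :mod:`struct`
(the sample input is arbitrary)::

   >>> import struct, zlib
   >>> crc = zlib.crc32(b"hello world")
   >>> packed = struct.pack(">I", crc & 0xffffffff)   # 4 bytes, big-endian
   >>> len(packed)
   4
   >>> struct.unpack(">I", packed)[0] == crc
   True
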

.. function:: decompress(string[, wbits[, bufsize]])

   Decompresses the data in *string*, returning a string containing the
   uncompressed data. The *wbits* parameter controls the size of the window
   buffer. If *bufsize* is given, it is used as the initial size of the output
   buffer. Raises the :exc:`error` exception if any error occurs.

   The absolute value of *wbits* is the base two logarithm of the size of the
   history buffer (the "window size") used when compressing data. Its absolute
   value should be between 8 and 15 for the most recent versions of the zlib
   library, with larger values resulting in better compression at the expense
   of greater memory usage. The default value is 15. When *wbits* is negative,
   the standard zlib header is suppressed; this is an undocumented feature of
   the zlib library, used for compatibility with :program:`unzip`'s compression
   file format.

   *bufsize* is the initial size of the buffer used to hold decompressed data.
   If more space is required, the buffer size will be increased as needed, so
   you don't have to get this value exactly right; tuning it will only save a
   few calls to :cfunc:`malloc`. The default size is 16384.

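
An informal sketch of :func:`decompress` (sizes are arbitrary), passing the
default *wbits* explicitly and a *bufsize* hint matching the expected output
size::

   >>> import zlib
   >>> blob = zlib.compress(b"b" * 100000)
   >>> plain = zlib.decompress(blob, 15, 100000)   # wbits=15, bufsize hint
   >>> plain == b"b" * 100000
   True
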

.. function:: decompressobj([wbits])

   Returns a decompression object, to be used for decompressing data streams
   that won't fit into memory at once. The *wbits* parameter controls the size
   of the window buffer.

Compression objects support the following methods:


.. method:: Compress.compress(string)

   Compress *string*, returning a string containing compressed data for at least
   part of the data in *string*. This data should be concatenated to the output
   produced by any preceding calls to the :meth:`compress` method. Some input may
   be kept in internal buffers for later processing.


.. method:: Compress.flush([mode])

   All pending input is processed, and a string containing the remaining
   compressed output is returned. *mode* can be selected from the constants
   :const:`Z_SYNC_FLUSH`, :const:`Z_FULL_FLUSH`, or :const:`Z_FINISH`,
   defaulting to :const:`Z_FINISH`. :const:`Z_SYNC_FLUSH` and
   :const:`Z_FULL_FLUSH` allow compressing further strings of data, while
   :const:`Z_FINISH` finishes the compressed stream and prevents compressing
   any more data. After calling :meth:`flush` with *mode* set to
   :const:`Z_FINISH`, the :meth:`compress` method cannot be called again; the
   only realistic action is to delete the object.

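
An informal sketch of :const:`Z_SYNC_FLUSH` (the record contents are
arbitrary): everything written before the flush can be decompressed
immediately, and the compression object can keep going afterwards::

   >>> import zlib
   >>> co = zlib.compressobj()
   >>> sent = co.compress(b"first record\n") + co.flush(zlib.Z_SYNC_FLUSH)
   >>> do = zlib.decompressobj()
   >>> do.decompress(sent)                   # complete so far; stream still open
   b'first record\n'
   >>> rest = co.compress(b"second record\n") + co.flush()   # Z_FINISH
   >>> do.decompress(rest)
   b'second record\n'
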

.. method:: Compress.copy()

   Returns a copy of the compression object. This can be used to efficiently
   compress a set of data that share a common initial prefix.

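
An informal sketch of the shared-prefix use case (the payloads are arbitrary):
compress the common part once, then branch with :meth:`copy`::

   >>> import zlib
   >>> co = zlib.compressobj()
   >>> prefix = co.compress(b"common header")  # may be empty; output is buffered
   >>> branch = co.copy()                      # snapshot of the state so far
   >>> doc_a = prefix + co.compress(b" plus body A") + co.flush()
   >>> doc_b = prefix + branch.compress(b" plus body B") + branch.flush()
   >>> zlib.decompress(doc_a)
   b'common header plus body A'
   >>> zlib.decompress(doc_b)
   b'common header plus body B'
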

Decompression objects support the following methods, and two attributes:


.. attribute:: Decompress.unused_data

   A string which contains any bytes past the end of the compressed data. That
   is, this remains ``""`` until the last byte that contains compression data
   is available. If the whole string turned out to contain compressed data,
   this is ``""``, the empty string.

   The only way to determine where a string of compressed data ends is by
   actually decompressing it. This means that when compressed data is contained
   in part of a larger file, you can only find the end of it by reading data
   and feeding it, followed by some non-empty string, into a decompression
   object's :meth:`decompress` method until the :attr:`unused_data` attribute
   is no longer the empty string.

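
An informal sketch (the trailer bytes are arbitrary) of data that appears after
the end of a compressed stream ending up in :attr:`unused_data`::

   >>> import zlib
   >>> do = zlib.decompressobj()
   >>> blob = zlib.compress(b"payload") + b"TRAILER"
   >>> do.decompress(blob)
   b'payload'
   >>> do.unused_data
   b'TRAILER'
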

.. attribute:: Decompress.unconsumed_tail

   A string that contains any data that was not consumed by the last
   :meth:`decompress` call because it exceeded the limit for the uncompressed
   data buffer. This data has not yet been seen by the zlib machinery, so you
   must feed it (possibly with further data concatenated to it) back to a
   subsequent :meth:`decompress` method call in order to get correct output.


.. method:: Decompress.decompress(string[, max_length])

   Decompress *string*, returning a string containing the uncompressed data
   corresponding to at least part of the data in *string*. This data should be
   concatenated to the output produced by any preceding calls to the
   :meth:`decompress` method. Some of the input data may be preserved in
   internal buffers for later processing.

   If the optional parameter *max_length* is supplied, then the return value
   will be no longer than *max_length*. This may mean that not all of the
   compressed input can be processed, and unconsumed data will be stored in the
   attribute :attr:`unconsumed_tail`. This string must be passed to a
   subsequent call to :meth:`decompress` if decompression is to continue. If
   *max_length* is not supplied, then the whole input is decompressed, and
   :attr:`unconsumed_tail` is an empty string.

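
An informal sketch (sizes chosen arbitrarily) of limiting the amount of output
per call and resuming from :attr:`unconsumed_tail`::

   >>> import zlib
   >>> do = zlib.decompressobj()
   >>> blob = zlib.compress(b"a" * 4096)
   >>> first = do.decompress(blob, 1024)        # no more than 1024 bytes back
   >>> len(first)
   1024
   >>> rest = do.decompress(do.unconsumed_tail) + do.flush()
   >>> first + rest == b"a" * 4096
   True
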

.. method:: Decompress.flush([length])

   All pending input is processed, and a string containing the remaining
   uncompressed output is returned. After calling :meth:`flush`, the
   :meth:`decompress` method cannot be called again; the only realistic action
   is to delete the object.

   The optional parameter *length* sets the initial size of the output buffer.


.. method:: Decompress.copy()

   Returns a copy of the decompression object. This can be used to save the
   state of the decompressor midway through the data stream in order to speed
   up random seeks into the stream at a future point.


.. seealso::

   Module :mod:`gzip`
      Reading and writing :program:`gzip`\ -format files.

   http://www.zlib.net
      The zlib library home page.

   http://www.zlib.net/manual.html
      The zlib manual explains the semantics and usage of the library's many
      functions.