:mod:`zlib` --- Compression compatible with :program:`gzip`
===========================================================

.. module:: zlib
   :synopsis: Low-level interface to compression and decompression routines
              compatible with gzip.

For applications that require data compression, the functions in this module
allow compression and decompression, using the zlib library.  The zlib library
has its own home page at http://www.zlib.net.  There are known
incompatibilities between the Python module and versions of the zlib library
earlier than 1.1.3; 1.1.3 has a security vulnerability, so we recommend using
1.1.4 or later.

zlib's functions have many options and often need to be used in a particular
order.  This documentation doesn't attempt to cover all of the permutations;
consult the zlib manual at http://www.zlib.net/manual.html for authoritative
information.

For reading and writing ``.gz`` files see the :mod:`gzip` module.  For other
archive formats, see the :mod:`bz2`, :mod:`zipfile`, and :mod:`tarfile`
modules.

The available exception and functions in this module are:

.. exception:: error

   Exception raised on compression and decompression errors.

.. function:: adler32(data[, value])

   Computes an Adler-32 checksum of *data*.  (An Adler-32 checksum is almost as
   reliable as a CRC32 but can be computed much more quickly.)  If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used.  This allows computing a running checksum over the
   concatenation of several inputs.  The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures.
   Since the algorithm is designed for use as a checksum algorithm, it is not
   suitable for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.

.. note::
   To generate the same numeric value across all Python versions and platforms,
   use ``adler32(data) & 0xffffffff``.  If you are only using the checksum in
   packed binary format this is not necessary, as the return value is the
   correct 32-bit binary representation regardless of sign.

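A minimal sketch of a running checksum over several inputs (the chunk contents
here are purely illustrative)::

   import zlib

   chunks = [b"first chunk, ", b"second chunk"]

   # Feed each chunk in turn, passing the previous result as the starting value.
   checksum = zlib.adler32(chunks[0])
   checksum = zlib.adler32(chunks[1], checksum)

   # The running checksum equals the checksum of the concatenated data.
   assert checksum == zlib.adler32(b"".join(chunks))

   # Mask for a consistent non-negative value across Python versions.
   checksum &= 0xffffffff
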
.. function:: compress(string[, level])

   Compresses the data in *string*, returning a string containing the compressed
   data.  *level* is an integer from ``1`` to ``9`` controlling the level of
   compression; ``1`` is fastest and produces the least compression, ``9`` is
   slowest and produces the most.  The default value is ``6``.  Raises the
   :exc:`error` exception if any error occurs.

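A minimal sketch of one-shot compression at the highest compression level (the
sample payload is purely illustrative)::

   import zlib

   payload = b"witch which has which witches wrist watch" * 10

   # Trade speed for size by requesting level 9.
   compressed = zlib.compress(payload, 9)

   # The round trip restores the original bytes.
   assert zlib.decompress(compressed) == payload
   print(len(payload), "->", len(compressed))
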
.. function:: compressobj([level])

   Returns a compression object, to be used for compressing data streams that
   won't fit into memory at once.  *level* is an integer from ``1`` to ``9``
   controlling the level of compression; ``1`` is fastest and produces the least
   compression, ``9`` is slowest and produces the most.  The default value is
   ``6``.

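A sketch of incremental compression with a compression object; the chunk list
here stands in for data arriving piece by piece::

   import zlib

   compressor = zlib.compressobj(9)
   pieces = [b"first block of data, ", b"second block of data"]

   output = []
   for piece in pieces:
       # compress() may buffer input internally and return b"" for now.
       output.append(compressor.compress(piece))
   # flush() emits whatever is still buffered and finishes the stream.
   output.append(compressor.flush())

   stream = b"".join(output)
   assert zlib.decompress(stream) == b"".join(pieces)
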
.. function:: crc32(data[, value])

   .. index::
      single: Cyclic Redundancy Check
      single: checksum; Cyclic Redundancy Check

   Computes a CRC (Cyclic Redundancy Check) checksum of *data*.  If *value* is
   present, it is used as the starting value of the checksum; otherwise, a fixed
   default value is used.  This allows computing a running checksum over the
   concatenation of several inputs.  The algorithm is not cryptographically
   strong, and should not be used for authentication or digital signatures.
   Since the algorithm is designed for use as a checksum algorithm, it is not
   suitable for use as a general hash algorithm.

   Always returns an unsigned 32-bit integer.

.. note::
   To generate the same numeric value across all Python versions and platforms,
   use ``crc32(data) & 0xffffffff``.  If you are only using the checksum in
   packed binary format this is not necessary, as the return value is the
   correct 32-bit binary representation regardless of sign.

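A short sketch of the masking idiom from the note above, applied to an
illustrative byte string::

   import zlib

   data = b"hypothetical record contents"

   # Mask so the printed value is identical on every Python version and platform.
   checksum = zlib.crc32(data) & 0xffffffff
   print("CRC-32: 0x%08x" % checksum)
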
.. function:: decompress(string[, wbits[, bufsize]])

   Decompresses the data in *string*, returning a string containing the
   uncompressed data.  The *wbits* parameter controls the size of the window
   buffer.  If *bufsize* is given, it is used as the initial size of the output
   buffer.  Raises the :exc:`error` exception if any error occurs.

   The absolute value of *wbits* is the base two logarithm of the size of the
   history buffer (the "window size") used when compressing data.  Its absolute
   value should be between 8 and 15 for the most recent versions of the zlib
   library, larger values resulting in better compression at the expense of
   greater memory usage.  The default value is 15.  When *wbits* is negative,
   the standard :program:`gzip` header is suppressed; this is an undocumented
   feature of the zlib library, used for compatibility with :program:`unzip`'s
   compression file format.

   *bufsize* is the initial size of the buffer used to hold decompressed data.
   If more space is required, the buffer size will be increased as needed, so
   you don't have to get this value exactly right; tuning it will only save a
   few calls to :cfunc:`malloc`.  The default size is 16384.

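A minimal sketch of one-shot decompression, passing a *bufsize* hint when the
approximate uncompressed size is known in advance (the sizes used here are
illustrative)::

   import zlib

   original = b"x" * 10000
   compressed = zlib.compress(original)

   # The default wbits of 15 matches data produced by zlib.compress();
   # the bufsize hint merely avoids a few buffer reallocations.
   restored = zlib.decompress(compressed, 15, 10000)
   assert restored == original
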
.. function:: decompressobj([wbits])

   Returns a decompression object, to be used for decompressing data streams
   that won't fit into memory at once.  The *wbits* parameter controls the size
   of the window buffer.

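A sketch of incremental decompression, feeding the compressed stream to a
decompression object in small slices (the slice size is arbitrary)::

   import zlib

   compressed = zlib.compress(b"some reasonably long body of data " * 50)

   decompressor = zlib.decompressobj()
   output = []
   for i in range(0, len(compressed), 64):
       output.append(decompressor.decompress(compressed[i:i + 64]))
   output.append(decompressor.flush())

   restored = b"".join(output)
   assert restored == b"some reasonably long body of data " * 50
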
Compression objects support the following methods:

.. method:: Compress.compress(string)

   Compress *string*, returning a string containing compressed data for at least
   part of the data in *string*.  This data should be concatenated to the output
   produced by any preceding calls to the :meth:`compress` method.  Some input
   may be kept in internal buffers for later processing.

.. method:: Compress.flush([mode])

   All pending input is processed, and a string containing the remaining
   compressed output is returned.  *mode* can be selected from the constants
   :const:`Z_SYNC_FLUSH`, :const:`Z_FULL_FLUSH`, or :const:`Z_FINISH`,
   defaulting to :const:`Z_FINISH`.  :const:`Z_SYNC_FLUSH` and
   :const:`Z_FULL_FLUSH` allow compressing further strings of data, while
   :const:`Z_FINISH` finishes the compressed stream and prevents compressing any
   more data.  After calling :meth:`flush` with *mode* set to :const:`Z_FINISH`,
   the :meth:`compress` method cannot be called again; the only realistic action
   is to delete the object.

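A sketch of the difference between the flush modes (the message strings are
illustrative)::

   import zlib

   compressor = zlib.compressobj()

   # Z_SYNC_FLUSH emits everything buffered so far but keeps the stream open,
   # so further compress() calls are still allowed.
   part1 = compressor.compress(b"first message")
   part1 += compressor.flush(zlib.Z_SYNC_FLUSH)

   # Z_FINISH terminates the stream; compress() may not be called afterwards.
   part2 = compressor.compress(b"second message")
   part2 += compressor.flush(zlib.Z_FINISH)

   assert zlib.decompress(part1 + part2) == b"first messagesecond message"
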
.. method:: Compress.copy()

   Returns a copy of the compression object.  This can be used to efficiently
   compress a set of data that share a common initial prefix.

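A sketch of compressing two payloads that share a common prefix by copying the
compressor once the shared part has been fed (the payloads are illustrative)::

   import zlib

   shared = zlib.compressobj()
   prefix_out = shared.compress(b"common header: ")

   # Branch the compressor state instead of re-feeding the prefix.
   branch = shared.copy()

   stream_a = prefix_out + shared.compress(b"payload A") + shared.flush()
   stream_b = prefix_out + branch.compress(b"payload B") + branch.flush()

   assert zlib.decompress(stream_a) == b"common header: payload A"
   assert zlib.decompress(stream_b) == b"common header: payload B"
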
Decompression objects support the following methods, and two attributes:

.. attribute:: Decompress.unused_data

   A string which contains any bytes past the end of the compressed data.  That
   is, this remains ``""`` until the last byte that contains compression data is
   available.  If the whole string turned out to contain compressed data, this
   is ``""``, the empty string.

   The only way to determine where a string of compressed data ends is by
   actually decompressing it.  This means that when compressed data is contained
   in part of a larger file, you can only find the end of it by reading data and
   feeding it followed by some non-empty string into a decompression object's
   :meth:`decompress` method until the :attr:`unused_data` attribute is no
   longer the empty string.

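A sketch of detecting trailing bytes after a complete compressed stream (the
trailer contents are illustrative)::

   import zlib

   stream = zlib.compress(b"payload") + b"TRAILER"

   decompressor = zlib.decompressobj()
   payload = decompressor.decompress(stream)

   assert payload == b"payload"
   # Everything past the end of the compressed stream ends up here.
   assert decompressor.unused_data == b"TRAILER"
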
.. attribute:: Decompress.unconsumed_tail

   A string that contains any data that was not consumed by the last
   :meth:`decompress` call because it exceeded the limit for the uncompressed
   data buffer.  This data has not yet been seen by the zlib machinery, so you
   must feed it (possibly with further data concatenated to it) back to a
   subsequent :meth:`decompress` method call in order to get correct output.

.. method:: Decompress.decompress(string[, max_length])

   Decompress *string*, returning a string containing the uncompressed data
   corresponding to at least part of the data in *string*.  This data should be
   concatenated to the output produced by any preceding calls to the
   :meth:`decompress` method.  Some of the input data may be preserved in
   internal buffers for later processing.

   If the optional parameter *max_length* is supplied then the return value will
   be no longer than *max_length*.  This may mean that not all of the compressed
   input can be processed, and unconsumed data will be stored in the attribute
   :attr:`unconsumed_tail`.  This string must be passed to a subsequent call to
   :meth:`decompress` if decompression is to continue.  If *max_length* is not
   supplied then the whole input is decompressed, and :attr:`unconsumed_tail` is
   an empty string.

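A sketch of bounding memory use with *max_length*, re-feeding
:attr:`unconsumed_tail` until the input is exhausted (the output limit and the
payload are arbitrary)::

   import zlib

   original = b"a fairly repetitive payload " * 200
   compressed = zlib.compress(original)

   decompressor = zlib.decompressobj()
   buffer = compressed
   output = []
   while buffer:
       # Never produce more than 1024 bytes of output per call.
       output.append(decompressor.decompress(buffer, 1024))
       buffer = decompressor.unconsumed_tail
   # Collect anything still held in internal buffers.
   output.append(decompressor.flush())

   assert b"".join(output) == original
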
.. method:: Decompress.flush([length])

   All pending input is processed, and a string containing the remaining
   uncompressed output is returned.  After calling :meth:`flush`, the
   :meth:`decompress` method cannot be called again; the only realistic action
   is to delete the object.

   The optional parameter *length* sets the initial size of the output buffer.

.. method:: Decompress.copy()

   Returns a copy of the decompression object.  This can be used to save the
   state of the decompressor midway through the data stream in order to speed up
   random seeks into the stream at a future point.

.. seealso::

   Module :mod:`gzip`
      Reading and writing :program:`gzip`\ -format files.

   http://www.zlib.net
      The zlib library home page.

   http://www.zlib.net/manual.html
      The zlib manual explains the semantics and usage of the library's many
      functions.