:mod:`audioop` --- Manipulate raw audio data
============================================

.. module:: audioop
   :synopsis: Manipulate raw audio data.


The :mod:`audioop` module contains some useful operations on sound fragments.
It operates on sound fragments consisting of signed integer samples 8, 16 or 32
bits wide, stored in Python strings.  This is the same format as used by the
:mod:`al` and :mod:`sunaudiodev` modules.  All scalar items are integers, unless
specified otherwise.

.. index::
   single: Intel/DVI ADPCM
   single: ADPCM, Intel/DVI
   single: a-LAW
   single: u-LAW

This module provides support for a-LAW, u-LAW and Intel/DVI ADPCM encodings.

.. This para is mostly here to provide an excuse for the index entries...

A few of the more complicated operations only take 16-bit samples, otherwise the
sample size (in bytes) is always a parameter of the operation.

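For example, a fragment of 16-bit samples can be built with the :mod:`struct`
module and then inspected with some of the routines described below; the sample
values here are arbitrary::

   import audioop
   import struct

   fragment = struct.pack('4h', 0, 1000, -2000, 32000)  # four signed 16-bit samples

   audioop.getsample(fragment, 2, 3)   # value of the fourth sample
   audioop.max(fragment, 2)            # largest absolute sample value
   audioop.minmax(fragment, 2)         # (smallest, largest) sample value
   audioop.rms(fragment, 2)            # root-mean-square, a measure of power
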
The module defines the following variables and functions:


.. exception:: error

   This exception is raised on all errors, such as an unknown number of bytes
   per sample, etc.


.. function:: add(fragment1, fragment2, width)

   Return a fragment which is the addition of the two fragments passed as
   parameters.  *width* is the sample width in bytes, either ``1``, ``2`` or
   ``4``.  Both fragments should have the same length.

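   For example, mixing a small (made-up) fragment with itself doubles each
   sample value::

      import audioop
      import struct

      frag = struct.pack('3h', 10, -20, 30)   # three 16-bit samples
      audioop.add(frag, frag, 2)              # each sample value is doubled
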

.. function:: adpcm2lin(adpcmfragment, width, state)

   Decode an Intel/DVI ADPCM coded fragment to a linear fragment.  See the
   description of :func:`lin2adpcm` for details on ADPCM coding.  Return a tuple
   ``(sample, newstate)`` where the sample has the width specified in *width*.


.. function:: alaw2lin(fragment, width)

   Convert sound fragments in a-LAW encoding to linearly encoded sound fragments.
   a-LAW encoding always uses 8 bit samples, so *width* refers only to the sample
   width of the output fragment here.

   .. versionadded:: 2.5

.. function:: avg(fragment, width)

   Return the average over all samples in the fragment.


.. function:: avgpp(fragment, width)

   Return the average peak-peak value over all samples in the fragment.  No
   filtering is done, so the usefulness of this routine is questionable.


.. function:: bias(fragment, width, bias)

   Return a fragment that is the original fragment with a bias added to each
   sample.


.. function:: cross(fragment, width)

   Return the number of zero crossings in the fragment passed as an argument.


.. function:: findfactor(fragment, reference)

   Return a factor *F* such that ``rms(add(fragment, mul(reference, -F)))`` is
   minimal, i.e., return the factor with which you should multiply *reference* to
   make it match as well as possible to *fragment*.  The fragments should both
   contain 2-byte samples.

   The time taken by this routine is proportional to ``len(fragment)``.


.. function:: findfit(fragment, reference)

   Try to match *reference* as well as possible to a portion of *fragment* (which
   should be the longer fragment).  This is (conceptually) done by taking slices
   out of *fragment*, using :func:`findfactor` to compute the best match, and
   minimizing the result.  The fragments should both contain 2-byte samples.
   Return a tuple ``(offset, factor)`` where *offset* is the (integer) offset into
   *fragment* where the optimal match started and *factor* is the (floating-point)
   factor as per :func:`findfactor`.


.. function:: findmax(fragment, length)

   Search *fragment* for a slice of length *length* samples (not bytes!) with
   maximum energy, i.e., return *i* for which ``rms(fragment[i*2:(i+length)*2])``
   is maximal.  The fragment should contain 2-byte samples.

   The routine takes time proportional to ``len(fragment)``.


.. function:: getsample(fragment, width, index)

   Return the value of sample *index* from the fragment.


.. function:: lin2adpcm(fragment, width, state)

   Convert samples to 4 bit Intel/DVI ADPCM encoding.  ADPCM coding is an
   adaptive coding scheme, whereby each 4 bit number is the difference between
   one sample and the next, divided by a (varying) step.  The Intel/DVI ADPCM
   algorithm has been selected for use by the IMA, so it may well become a
   standard.

   *state* is a tuple containing the state of the coder.  The coder returns a
   tuple ``(adpcmfrag, newstate)``, and *newstate* should be passed to the next
   call of :func:`lin2adpcm`.  In the initial call, ``None`` can be passed as the
   state.  *adpcmfrag* is the ADPCM coded fragment packed two 4-bit values per
   byte.

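   For example, a round trip through the coder might look like this; because
   ADPCM coding is lossy, the decoded fragment only approximates the original::

      import audioop
      import struct

      frag = struct.pack('4h', 0, 500, 1000, 500)   # 16-bit samples

      adpcmfrag, coder_state = audioop.lin2adpcm(frag, 2, None)
      decoded, decoder_state = audioop.adpcm2lin(adpcmfrag, 2, None)
      # decoded is again a fragment of 16-bit samples, close to frag
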

.. function:: lin2alaw(fragment, width)

   Convert samples in the audio fragment to a-LAW encoding and return this as a
   Python string.  a-LAW is an audio encoding format whereby you get a dynamic
   range of about 13 bits using only 8 bit samples.  It is used by the Sun audio
   hardware, among others.

   .. versionadded:: 2.5


.. function:: lin2lin(fragment, width, newwidth)

   Convert samples between 1-, 2- and 4-byte formats.

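   For example, widening arbitrary 16-bit samples to 32 bits and back::

      import audioop
      import struct

      frag16 = struct.pack('3h', 1, -2, 3)           # width 2
      frag32 = audioop.lin2lin(frag16, 2, 4)         # width 4, values scaled up
      frag16_again = audioop.lin2lin(frag32, 4, 2)   # back to width 2
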

.. function:: lin2ulaw(fragment, width)

   Convert samples in the audio fragment to u-LAW encoding and return this as a
   Python string.  u-LAW is an audio encoding format whereby you get a dynamic
   range of about 14 bits using only 8 bit samples.  It is used by the Sun audio
   hardware, among others.

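   For example, encoding a fragment of 16-bit samples and decoding it again with
   :func:`ulaw2lin`; since u-LAW is a lossy encoding, the decoded fragment is
   only an approximation of the original::

      import audioop
      import struct

      frag = struct.pack('3h', 100, -1000, 10000)
      encoded = audioop.lin2ulaw(frag, 2)       # one byte per sample
      decoded = audioop.ulaw2lin(encoded, 2)    # approximately frag again
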

.. function:: minmax(fragment, width)

   Return a tuple consisting of the minimum and maximum values of all samples in
   the sound fragment.


.. function:: max(fragment, width)

   Return the maximum of the *absolute value* of all samples in a fragment.


.. function:: maxpp(fragment, width)

   Return the maximum peak-peak value in the sound fragment.


.. function:: mul(fragment, width, factor)

   Return a fragment that has all samples in the original fragment multiplied by
   the floating-point value *factor*.  Overflow is silently ignored.

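   For example, halving the volume of a fragment of 16-bit samples::

      import audioop
      import struct

      frag = struct.pack('3h', 1000, -2000, 3000)
      quieter = audioop.mul(frag, 2, 0.5)   # every sample scaled by 0.5
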

.. function:: ratecv(fragment, width, nchannels, inrate, outrate, state[, weightA[, weightB]])

   Convert the frame rate of the input fragment.

   *state* is a tuple containing the state of the converter.  The converter
   returns a tuple ``(newfragment, newstate)``, and *newstate* should be passed to
   the next call of :func:`ratecv`.  The initial call should pass ``None`` as the
   state.

   The *weightA* and *weightB* arguments are parameters for a simple digital
   filter and default to ``1`` and ``0`` respectively.

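   For example, converting a mono fragment of 16-bit samples from a frame rate of
   8000 to 16000 (the fragment contents here are arbitrary)::

      import audioop
      import struct

      frag = struct.pack('4h', 0, 1000, 0, -1000)   # mono, 16-bit, 8000 frames/s
      converted, state = audioop.ratecv(frag, 2, 1, 8000, 16000, None)
      # pass state back in when converting the next fragment of the same stream
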

.. function:: reverse(fragment, width)

   Reverse the samples in a fragment and return the modified fragment.


.. function:: rms(fragment, width)

   Return the root-mean-square of the fragment, i.e. ``sqrt(sum(S_i^2)/n)``.

   This is a measure of the power in an audio signal.


.. function:: tomono(fragment, width, lfactor, rfactor)

   Convert a stereo fragment to a mono fragment.  The left channel is multiplied
   by *lfactor* and the right channel by *rfactor* before adding the two channels
   to give a mono signal.


.. function:: tostereo(fragment, width, lfactor, rfactor)

   Generate a stereo fragment from a mono fragment.  Each pair of samples in the
   stereo fragment is computed from the mono sample, whereby left channel samples
   are multiplied by *lfactor* and right channel samples by *rfactor*.


.. function:: ulaw2lin(fragment, width)

   Convert sound fragments in u-LAW encoding to linearly encoded sound fragments.
   u-LAW encoding always uses 8 bit samples, so *width* refers only to the sample
   width of the output fragment here.

Note that operations such as :func:`mul` or :func:`max` make no distinction
between mono and stereo fragments, i.e. all samples are treated equally.  If this
is a problem the stereo fragment should be split into two mono fragments first
and recombined later.  Here is an example of how to do that::

   def mul_stereo(sample, width, lfactor, rfactor):
       lsample = audioop.tomono(sample, width, 1, 0)
       rsample = audioop.tomono(sample, width, 0, 1)
       lsample = audioop.mul(lsample, width, lfactor)
       rsample = audioop.mul(rsample, width, rfactor)
       lsample = audioop.tostereo(lsample, width, 1, 0)
       rsample = audioop.tostereo(rsample, width, 0, 1)
       return audioop.add(lsample, rsample, width)

If you use the ADPCM coder to build network packets and you want your protocol
to be stateless (i.e. to be able to tolerate packet loss) you should not only
transmit the data but also the state.  Note that you should send the *initial*
state (the one you passed to :func:`lin2adpcm`) along to the decoder, not the
final state (as returned by the coder).  If you want to use the :mod:`struct`
module to store the state in binary you can code the first element (the
predicted value) in 16 bits and the second (the delta index) in 8.

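A sketch of such a packing with :mod:`struct`; the big-endian layout and the
helper names are only one possible choice::

   import struct

   def pack_adpcm_state(state):
       # state is the (predicted value, delta index) tuple used by lin2adpcm
       return struct.pack('>hB', state[0], state[1])

   def unpack_adpcm_state(data):
       predicted, index = struct.unpack('>hB', data)
       return (predicted, index)
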
The ADPCM coders have never been tried against other ADPCM coders, only against
themselves.  It could well be that I misinterpreted the standards in which case
they will not be interoperable with the respective standards.

The :func:`find\*` routines might look a bit funny at first sight.  They are
primarily meant to do echo cancellation.  A reasonably fast way to do this is to
pick the most energetic piece of the output sample, locate that in the input
sample and subtract the whole output sample from the input sample::

   def echocancel(outputdata, inputdata):
       pos = audioop.findmax(outputdata, 800)    # one tenth of a second at 8000 samples/sec
       out_test = outputdata[pos*2:]
       in_test = inputdata[pos*2:]
       ipos, factor = audioop.findfit(in_test, out_test)
       # Optional (for better cancellation):
       # factor = audioop.findfactor(in_test[ipos*2:ipos*2+len(out_test)],
       #                             out_test)
       prefill = '\0'*(pos+ipos)*2
       postfill = '\0'*(len(inputdata)-len(prefill)-len(outputdata))
       outputdata = prefill + audioop.mul(outputdata, 2, -factor) + postfill
       return audioop.add(inputdata, outputdata, 2)