                   Writing Python Regression Tests
                   -------------------------------
                            Skip Montanaro
                            (skip@mojam.com)


Introduction

If you add a new module to Python or modify the functionality of an existing
module, you should write one or more test cases to exercise that new
functionality.  There are different ways to do this within the regression
testing facility provided with Python; any particular test should use only
one of these options.  Each option requires writing a test module using the
conventions of the selected option:

    - PyUnit based tests
    - doctest based tests
    - "traditional" Python test modules

Regardless of the mechanics of the testing approach you choose,
you will be writing unit tests (isolated tests of functions and objects
defined by the module) using white box techniques.  Unlike black box
testing, where you only have the external interfaces to guide your test case
writing, in white box testing you can see the code being tested and tailor
your test cases to exercise it more completely.  In particular, you will be
able to refer to the C and Python code in the CVS repository when writing
your regression test cases.


PyUnit based tests

The PyUnit framework is based on the ideas of unit testing as espoused
by Kent Beck and the Extreme Programming (XP) movement.  The specific
interface provided by the framework is tightly based on the JUnit
Java implementation of Beck's original Smalltalk test framework.  Please
see the documentation of the unittest module for detailed information on
the interface and general guidelines on writing PyUnit based tests.

The test_support helper module provides a single function for use by
PyUnit based tests in the Python regression testing framework:
run_unittest() takes a unittest.TestCase derived class as a parameter
and runs the tests defined in that class.  All test methods in the
Python regression framework have names that start with "test_" and use
lower-case names with words separated with underscores.
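
For illustration, a PyUnit-based test module might look like the following
sketch (the spam module, the SpamTests class and the can_size() function are
all hypothetical):

    import unittest
    import test_support
    import spam                      # the (hypothetical) module under test


    class SpamTests(unittest.TestCase):

        def test_can_size(self):
            # Test method names start with "test_" and use lower-case
            # words separated by underscores, as described above.
            self.assertEqual(spam.can_size(), 12)


    def test_main():
        # regrtest runs this after importing the module; see the
        # "Two-stage testing" note at the end of this document.
        test_support.run_unittest(SpamTests)


    if __name__ == '__main__':
        test_main()

As noted below, unittest-based tests normally do not need an expected-output
file.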


doctest based tests

Tests written to use doctest are actually part of the docstrings for
the module being tested.  Each test is written as a display of an
interactive session, including the Python prompts, statements that would
be typed by the user, and the output of those statements (including
tracebacks, although only the exception message needs to be retained then).
The module in the test package is simply a wrapper that causes doctest
to run over the tests in the module.  The test for the difflib module
provides a convenient example:

    from test_support import verbose
    import doctest, difflib
    doctest.testmod(difflib, verbose=verbose)

If the test is successful, nothing is written to stdout (so you should not
create a corresponding output/test_difflib file), but running regrtest
with -v will give a detailed report, the same as if passing -v to doctest
(that's what importing verbose from test_support accomplishes).

See the documentation for the doctest module for information on
writing tests using the doctest framework.
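
For illustration only, a docstring carrying such a test might look like the
following sketch (the square() function is hypothetical, not part of any
real module):

    def square(x):
        """Return the square of x.

        >>> square(3)
        9
        >>> square(-2.5)
        6.25
        """
        return x * x

doctest finds the interactive examples in the docstring, re-executes them,
and reports a failure if the actual output does not match what is shown.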


"traditional" Python test modules

The mechanics of how the "traditional" test system operates are fairly
straightforward.  When a test case is run, the output is compared with the
expected output that is stored in .../Lib/test/output.  If the test runs to
completion and the actual and expected outputs match, the test succeeds; if
not, it fails.  If an ImportError or test_support.TestSkipped error is
raised, the test is not run.
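
A minimal sketch of such a test module might look like this (test_spam.py,
the spam module and its can_size() function are hypothetical):

    from test_support import verbose
    import spam                      # the (hypothetical) module under test

    if verbose:
        print 'running spam tests'   # shown only under regrtest -v
    print 'can_size:', spam.can_size()

The matching expected-output file, output/test_spam, would contain the
test's name followed by whatever the test printed (the "can_size:" line
here).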


Executing Test Cases

If you are writing test cases for module spam, you need to create a file
in .../Lib/test named test_spam.py.  In addition, if the tests are expected
to write to stdout during a successful run, you also need to create an
expected output file in .../Lib/test/output named test_spam ("..."
represents the top-level directory in the Python source tree, the directory
containing the configure script).  If needed, generate the initial version
of the test output file by executing:

    ./python Lib/test/regrtest.py -g test_spam.py

from the top-level directory.

Any time you modify test_spam.py you need to generate a new expected
output file.  Don't forget to desk check the generated output to make sure
it's really what you expected to find!  All in all it's usually better
not to have an expected-output file (note that doctest- and unittest-based
tests do not).

To run a single test after modifying a module, simply run regrtest.py
without the -g flag:

    ./python Lib/test/regrtest.py test_spam.py

While debugging a regression test, you can of course execute it
independently of the regression testing framework and see what it prints:

    ./python Lib/test/test_spam.py

To run the entire test suite:

[UNIX, + other platforms where "make" works] Make the "test" target at the
top level:

    make test

[WINDOWS] Run rt.bat from your PCbuild directory.  Read the comments at
the top of rt.bat for the use of special -d, -O and -q options processed
by rt.bat.

[OTHER] You can simply execute the two runs of regrtest (optimized and
non-optimized) directly:

    ./python Lib/test/regrtest.py
    ./python -O Lib/test/regrtest.py

But note that this way picks up whatever .pyc and .pyo files happen to be
around.  The makefile and rt.bat ways run the tests twice, the first time
removing all .pyc and .pyo files from the subtree rooted at Lib/.

Test cases generate output based upon values computed by the test code.
When executed, regrtest.py compares the actual output generated by executing
the test case with the expected output and reports success or failure.  It
stands to reason that if the actual and expected outputs are to match, they
must not contain any machine dependencies.  This means your test cases
should not print out absolute machine addresses (e.g. the return value of
the id() builtin function) or floating point numbers with large numbers of
significant digits (unless you understand what you are doing!).
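
For instance, instead of printing a computed float directly, a test can
print a rounded value, or just the outcome of a comparison (a sketch):

    import math

    x = math.sqrt(2.0)
    # Don't print x itself: its last few digits may vary across platforms.
    print round(x, 6)                # machine-independent
    print abs(x*x - 2.0) < 1e-9      # prints a truth value, not the float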


Test Case Writing Tips

Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document.  Many books have been written on the subject.
I'll show my age by suggesting that Glenford Myers' "The Art of Software
Testing", published in 1979, is still the best introduction to the subject
available.  It is short (177 pages), easy to read, and discusses the major
elements of software testing, though its publication predates the
object-oriented software revolution, so doesn't cover that subject at all.
Unfortunately, it is very expensive (about $100 new).  If you can borrow it
or find it used (around $20), I strongly urge you to pick up a copy.

The most important goal when writing test cases is to break things.  A test
case that doesn't uncover a bug is much less valuable than one that does.
In designing test cases you should pay attention to the following:

    * Your test cases should exercise all the functions and objects defined
      in the module, not just the ones meant to be called by users of your
      module.  This may require you to write test code that uses the module
      in ways you don't expect (explicitly calling internal functions, for
      example - see test_atexit.py).

    * You should consider any boundary values that may tickle exceptional
      conditions (e.g. if you were writing regression tests for division,
      you might well want to generate tests with numerators and denominators
      at the limits of floating point and integer numbers on the machine
      performing the tests as well as a denominator of zero).

    * You should exercise as many paths through the code as possible.  This
      may not always be possible, but is a goal to strive for.  In
      particular, when considering if statements (or their equivalent), you
      want to create test cases that exercise both the true and false
      branches.  For loops, you should create test cases that exercise the
      loop zero, one and multiple times.

    * You should test with obviously invalid input.  If you know that a
      function requires an integer input, try calling it with other types of
      objects to see how it responds.

    * You should test with obviously out-of-range input.  If the domain of a
      function is only defined for positive integers, try calling it with a
      negative integer.

    * If you are going to fix a bug that wasn't uncovered by an existing
      test, try to write a test case that exposes the bug (preferably before
      fixing it).

    * If you need to create a temporary file, you can use the filename in
      test_support.TESTFN to do so.  It is important to remove the file
      when done; other tests should be able to use the name without cleaning
      up after your test.
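
As an illustration of the last point, a temporary-file test might be
structured like the following sketch (the data written is arbitrary):

    import os
    from test_support import TESTFN

    try:
        f = open(TESTFN, 'w')
        f.write('spam\n')
        f.close()
        # ... exercise the code under test against TESTFN here ...
    finally:
        if os.path.exists(TESTFN):
            os.unlink(TESTFN)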


Regression Test Writing Rules

Each test case is different.  There is no "standard" form for a Python
regression test case, though there are some general rules (note that
these mostly apply only to the "classic" tests; unittest- and doctest-
based tests should follow the conventions natural to those frameworks):

    * If your test case detects a failure, raise TestFailed (found in
      test_support).

    * Import everything you'll need as early as possible.

    * If you'll be importing objects from a module that is at least
      partially platform-dependent, only import those objects you need for
      the current test case to avoid spurious ImportError exceptions that
      prevent the test from running to completion.

    * Print all your test case results using the print statement.  For
      non-fatal errors, print an error message (or omit a successful
      completion print) to indicate the failure, but proceed instead of
      raising TestFailed.

    * Use "assert" sparingly, if at all.  It's usually better to just print
      what you got, and rely on regrtest's got-vs-expected comparison to
      catch deviations from what you expect.  assert statements aren't
      executed at all when regrtest is run in -O mode; and, because they
      cause the test to stop immediately, can lead to a long & tedious
      test-fix, test-fix, test-fix, ... cycle when things are badly broken
      (and note that "badly broken" often includes running the test suite
      for the first time on new platforms or under new implementations of
      the language).
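
Putting a few of these rules together, a classic test often mixes fatal and
non-fatal checks along the lines of this sketch (the spam module and its
functions are hypothetical):

    from test_support import TestFailed
    import spam

    # Fatal: nothing else can be tested sensibly if this is wrong.
    if spam.can_size() != 12:
        raise TestFailed('can_size() returned an unexpected value')

    # Non-fatal: print what we got and let regrtest's got-vs-expected
    # comparison catch any deviation, so the rest of the test still runs.
    print 'label:', spam.can_label()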


Miscellaneous

There is a test_support module you can import from your test case.  It
provides the following useful objects:

    * TestFailed - raise this exception when your regression test detects a
      failure.

    * TestSkipped - raise this if the test could not be run because the
      platform doesn't offer all the required facilities (like large
      file support), even if all the required modules are available.

    * verbose - you can use this variable to control print output.  Many
      modules use it.  Search for "verbose" in the test_*.py files to see
      lots of examples.

    * verify(condition, reason='test failed').  Use this instead of

          assert condition[, reason]

      verify() has two advantages over assert:  it works even in -O mode,
      and it raises TestFailed on failure instead of AssertionError (see
      the sketch following this list for an example).

    * TESTFN - a string that should always be used as the filename when you
      need to create a temp file.  Also use try/finally to ensure that your
      temp files are deleted before your test completes.  Note that you
      cannot unlink an open file on all operating systems, so also be sure
      to close temp files before trying to unlink them.

    * sortdict(dict) - acts like repr(dict.items()), but sorts the items
      first.  This is important when printing a dict value, because the
      order of items produced by dict.items() is not defined by the
      language.

    * findfile(file) - you can call this function to locate a file somewhere
      along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
      an example of its use.

    * use_large_resources - true iff tests requiring large time or space
      should be run.

    * fcmp(x,y) - you can call this function to compare two floating point
      numbers when you expect them to only be approximately equal within a
      fuzz factor (test_support.FUZZ, which defaults to 1e-6).
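
For example, a classic test might combine verify() and fcmp() like this
(a sketch; it assumes fcmp() follows cmp()'s convention of returning 0
when the two values compare approximately equal):

    import math
    from test_support import verify, fcmp, TestFailed

    x = math.sin(math.pi / 6)        # should be 0.5, up to rounding
    verify(x > 0.0, 'sin(pi/6) should be positive')
    if fcmp(x, 0.5) != 0:
        raise TestFailed('sin(pi/6) not approximately 0.5')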

NOTE: Always import something from test_support like so:

    from test_support import verbose

or like so:

    import test_support
    ... use test_support.verbose in the code ...

Never import anything from test_support like this:

    from test.test_support import verbose

"test" is a package already, so can refer to modules it contains without
"test." qualification.  If you do an explicit "test.xxx" qualification, that
can fool Python into believing test.xxx is a module distinct from the xxx
in the current package, and you can end up importing two distinct copies of
xxx.  This is especially bad if xxx=test_support, as regrtest.py can (and
routinely does) overwrite its "verbose" and "use_large_resources"
attributes:  if you get a second copy of test_support loaded, it may not
have the same values for those as regrtest intended.


Python and C statement coverage results are currently available at

    http://www.musi-cal.com/~skip/python/Python/dist/src/

As of this writing (July, 2000) these results are being generated nightly.
You can refer to the summaries and the test coverage output files to see
where coverage is adequate or lacking and write test cases to beef up the
coverage.


Some Non-Obvious regrtest Features

    * Automagic test detection:  When you create a new test file
      test_spam.py, you do not need to modify regrtest (or anything else)
      to advertise its existence.  regrtest searches for and runs all
      modules in the test directory with names of the form test_xxx.py.

    * Miranda output:  If, when running test_spam.py, regrtest does not
      find an expected-output file test/output/test_spam, regrtest
      pretends that it did find one, containing the single line

          test_spam

      This allows new tests that don't expect to print anything to stdout
      to not bother creating expected-output files.

    * Two-stage testing:  To run test_spam.py, regrtest imports test_spam
      as a module.  Most tests run to completion as a side-effect of
      getting imported.  After importing test_spam, regrtest also executes
      test_spam.test_main(), if test_spam has a "test_main" attribute.
      This is rarely needed, and you shouldn't create a module global
      with name test_main unless you're specifically exploiting this
      gimmick.  In such cases, please put a comment saying so near your
      def test_main, because this feature is so rarely used it's not
      obvious when reading the test code.