                      Writing Python Regression Tests
                      -------------------------------
                               Skip Montanaro
                              (skip@mojam.com)


Introduction

If you add a new module to Python or modify the functionality of an
existing module, you should write one or more test cases to exercise that
new functionality.  There are different ways to do this within the
regression testing facility provided with Python; any particular test
should use only one of these options.  Each option requires writing a
test module using the conventions of the selected option:

    - PyUnit based tests
    - doctest based tests
    - "traditional" Python test modules

Regardless of the mechanics of the testing approach you choose, you will
be writing unit tests (isolated tests of functions and objects defined by
the module) using white box techniques.  Unlike black box testing, where
you only have the external interfaces to guide your test case writing, in
white box testing you can see the code being tested and tailor your test
cases to exercise it more completely.  In particular, you will be able to
refer to the C and Python code in the CVS repository when writing your
regression test cases.


PyUnit based tests

The PyUnit framework is based on the ideas of unit testing as espoused by
Kent Beck and the Extreme Programming (XP) movement.  The specific
interface provided by the framework is tightly based on the JUnit Java
implementation of Beck's original Smalltalk test framework.  Please see
the documentation of the unittest module for detailed information on the
interface and general guidelines on writing PyUnit based tests.

The test_support helper module provides a single function for use by
PyUnit based tests in the Python regression testing framework:
run_unittest() takes a unittest.TestCase derived class as a parameter and
runs the tests defined in that class.  All test methods in the Python
regression framework have names that start with "test_" and use
lower-case names with words separated with underscores.
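
A minimal test following these conventions might look like the sketch
below.  The class and method names are invented for illustration; inside
the test suite the last three lines would normally be replaced by a single
call to test_support.run_unittest(SpamTest), but here the plain unittest
machinery is driven directly so the sketch is self-contained:

```python
import unittest
from io import StringIO

class SpamTest(unittest.TestCase):
    # Test method names start with "test_" and use lower-case words
    # separated by underscores.
    def test_repeat(self):
        self.assertEqual('spam' * 2, 'spamspam')

    def test_empty_repeat(self):
        self.assertEqual('spam' * 0, '')

# Under regrtest: test_support.run_unittest(SpamTest).  For illustration,
# load and run the tests with the standard unittest runner instead.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SpamTest)
result = unittest.TextTestRunner(stream=StringIO()).run(suite)
print(result.wasSuccessful())
```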


doctest based tests

Tests written to use doctest are actually part of the docstrings for the
module being tested.  Each test is written as a display of an interactive
session, including the Python prompts, statements that would be typed by
the user, and the output of those statements (including tracebacks,
although only the exception message needs to be retained then).  The
module in the test package is simply a wrapper that causes doctest to run
over the tests in the module.  The test for the difflib module provides a
convenient example:

    import difflib, test_support
    test_support.run_doctest(difflib)
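
For reference, a docstring containing such an interactive session might
look like the following sketch.  The function is hypothetical; doctest
extracts the session from the docstring, re-executes it, and compares the
actual output against the transcript (note that the traceback body may be
elided with "..."):

```python
import doctest

def average(values):
    """Return the arithmetic mean of a sequence of numbers.

    >>> average([1, 2, 3])
    2.0
    >>> average([])
    Traceback (most recent call last):
        ...
    ZeroDivisionError: float division by zero
    """
    return float(sum(values)) / len(values)

# Run just this function's doctests, without relying on testmod()'s
# default of scanning the __main__ module.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(average, 'average'):
    runner.run(test)
print(runner.failures, runner.tries)
```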

If the test is successful, nothing is written to stdout (so you should
not create a corresponding output/test_difflib file), but running
regrtest with -v will give a detailed report, the same as if passing -v
to doctest.

A second argument can be passed to run_doctest to tell doctest to search
sys.argv for -v instead of using test_support's idea of verbosity.  This
is useful for writing doctest-based tests that aren't simply running a
doctest'ed Lib module, but contain the doctests themselves.  Then at
times you may want to run such a test directly as a doctest, independent
of the regrtest framework.  The tail end of test_descrtut.py is a good
example:

    def test_main(verbose=None):
        import test_support, test.test_descrtut
        test_support.run_doctest(test.test_descrtut, verbose)

    if __name__ == "__main__":
        test_main(1)

If run via regrtest, test_main() is called (by regrtest) without
specifying verbose, and then test_support's idea of verbosity is used.
But when run directly, test_main(1) is called, and then doctest's idea
of verbosity is used.

See the documentation for the doctest module for information on writing
tests using the doctest framework.


"traditional" Python test modules

The mechanics of how the "traditional" test system operates are fairly
straightforward.  When a test case is run, the output is compared with
the expected output that is stored in .../Lib/test/output.  If the test
runs to completion and the actual and expected outputs match, the test
succeeds; if not, it fails.  If an ImportError or test_support.TestSkipped
error is raised, the test is not run.
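
A sketch of what such a "classic" test module might contain (the module
name and values are invented for illustration; the printed lines are what
regrtest compares against the expected-output file):

```python
# Hypothetical classic test module, e.g. Lib/test/test_spam.py.
# Everything printed here would be compared against
# Lib/test/output/test_spam.
lines = []
lines.append('test_spam')            # by convention the first line names the test
lines.append('repeat: ' + 'spam' * 3)

# Expected failures are caught and reported as part of the output.
try:
    [].pop()
except IndexError as exc:
    lines.append('IndexError: %s' % exc)

print('\n'.join(lines))
```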


Executing Test Cases

If you are writing test cases for module spam, you need to create a file
in .../Lib/test named test_spam.py.  In addition, if the tests are
expected to write to stdout during a successful run, you also need to
create an expected output file in .../Lib/test/output named test_spam
("..." represents the top-level directory in the Python source tree, the
directory containing the configure script).  If needed, generate the
initial version of the test output file by executing:

    ./python Lib/test/regrtest.py -g test_spam.py

from the top-level directory.

Any time you modify test_spam.py you need to generate a new expected
output file.  Don't forget to desk check the generated output to make
sure it's really what you expected to find!  All in all it's usually
better not to have an expected-output file (note that doctest- and
unittest-based tests do not).

To run a single test after modifying a module, simply run regrtest.py
without the -g flag:

    ./python Lib/test/regrtest.py test_spam.py

While debugging a regression test, you can of course execute it
independently of the regression testing framework and see what it prints:

    ./python Lib/test/test_spam.py

To run the entire test suite:

[UNIX, + other platforms where "make" works] Make the "test" target at
the top level:

    make test

[WINDOWS] Run rt.bat from your PCbuild directory.  Read the comments at
the top of rt.bat for the use of special -d, -O and -q options processed
by rt.bat.

[OTHER] You can simply execute the two runs of regrtest (optimized and
non-optimized) directly:

    ./python Lib/test/regrtest.py
    ./python -O Lib/test/regrtest.py

But note that this way picks up whatever .pyc and .pyo files happen to
be around.  The makefile and rt.bat ways run the tests twice, the first
time removing all .pyc and .pyo files from the subtree rooted at Lib/.

Test cases generate output based upon values computed by the test code.
When executed, regrtest.py compares the actual output generated by
executing the test case with the expected output and reports success or
failure.  It stands to reason that if the actual and expected outputs
are to match, they must not contain any machine dependencies.  This
means your test cases should not print out absolute machine addresses
(e.g. the return value of the id() builtin function) or floating point
numbers with large numbers of significant digits (unless you understand
what you are doing!).
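
For instance, one reasonable way to keep floating point output stable
across platforms is to limit the number of significant digits printed
(the format choice below is one approach, not the only one):

```python
import math

x = math.sqrt(2)

# Bad: an object address varies on every run, and the full digit string
# of a float may vary across platforms, so the got-vs-expected
# comparison would fail:
#     print(id(x))
#     print(repr(x))

# Better: limit the significant digits so the printed value is the same
# everywhere.
print('sqrt(2) = %.6f' % x)
```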


Test Case Writing Tips

Writing good test cases is a skilled task and is too complex to discuss
in detail in this short document.  Many books have been written on the
subject.  I'll show my age by suggesting that Glenford Myers' "The Art
of Software Testing", published in 1979, is still the best introduction
to the subject available.  It is short (177 pages), easy to read, and
discusses the major elements of software testing, though its publication
predates the object-oriented software revolution, so doesn't cover that
subject at all.  Unfortunately, it is very expensive (about $100 new).
If you can borrow it or find it used (around $20), I strongly urge you
to pick up a copy.

The most important goal when writing test cases is to break things.  A
test case that doesn't uncover a bug is much less valuable than one that
does.  In designing test cases you should pay attention to the following:

    * Your test cases should exercise all the functions and objects
      defined in the module, not just the ones meant to be called by
      users of your module.  This may require you to write test code
      that uses the module in ways you don't expect (explicitly calling
      internal functions, for example - see test_atexit.py).

    * You should consider any boundary values that may tickle
      exceptional conditions (e.g. if you were writing regression tests
      for division, you might well want to generate tests with
      numerators and denominators at the limits of floating point and
      integer numbers on the machine performing the tests, as well as a
      denominator of zero).

    * You should exercise as many paths through the code as possible.
      This may not always be possible, but is a goal to strive for.  In
      particular, when considering if statements (or their equivalent),
      you want to create test cases that exercise both the true and
      false branches.  For loops, you should create test cases that
      exercise the loop zero, one and multiple times.

    * You should test with obviously invalid input.  If you know that a
      function requires an integer input, try calling it with other
      types of objects to see how it responds.

    * You should test with obviously out-of-range input.  If the domain
      of a function is only defined for positive integers, try calling
      it with a negative integer.

    * If you are going to fix a bug that wasn't uncovered by an existing
      test, try to write a test case that exposes the bug (preferably
      before fixing it).

    * If you need to create a temporary file, you can use the filename
      in test_support.TESTFN to do so.  It is important to remove the
      file when done; other tests should be able to use the name without
      cleaning up after your test.
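
A few of these tips can be sketched against a hypothetical function (the
function itself is invented purely for illustration):

```python
def join_words(words):
    # Hypothetical function under test: joins a sequence of strings.
    return ' '.join(words)

results = []

# Exercise the loop zero, one and multiple times.
for case in ([], ['spam'], ['spam', 'eggs', 'ham']):
    results.append(join_words(case))

# Obviously invalid input: something that isn't a sequence of strings
# should raise TypeError rather than return garbage.
try:
    join_words(42)
except TypeError:
    results.append('TypeError')

print(results)
```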


Regression Test Writing Rules

Each test case is different.  There is no "standard" form for a Python
regression test case, though there are some general rules (note that
these mostly apply only to the "classic" tests; unittest- and doctest-
based tests should follow the conventions natural to those frameworks):

    * If your test case detects a failure, raise TestFailed (found in
      test_support).

    * Import everything you'll need as early as possible.

    * If you'll be importing objects from a module that is at least
      partially platform-dependent, only import those objects you need
      for the current test case to avoid spurious ImportError exceptions
      that prevent the test from running to completion.

    * Print all your test case results using the print statement.  For
      non-fatal errors, print an error message (or omit a successful
      completion print) to indicate the failure, but proceed instead of
      raising TestFailed.

    * Use "assert" sparingly, if at all.  It's usually better to just
      print what you got, and rely on regrtest's got-vs-expected
      comparison to catch deviations from what you expect.  assert
      statements aren't executed at all when regrtest is run in -O mode;
      and, because they cause the test to stop immediately, they can
      lead to a long & tedious test-fix, test-fix, test-fix, ... cycle
      when things are badly broken (and note that "badly broken" often
      includes running the test suite for the first time on new
      platforms or under new implementations of the language).


Miscellaneous

There is a test_support module you can import from your test case.  It
provides the following useful objects:

    * TestFailed - raise this exception when your regression test
      detects a failure.

    * TestSkipped - raise this if the test could not be run because the
      platform doesn't offer all the required facilities (like large
      file support), even if all the required modules are available.

    * verbose - you can use this variable to control print output.  Many
      modules use it.  Search for "verbose" in the test_*.py files to
      see lots of examples.

    * verify(condition, reason='test failed').  Use this instead of

          assert condition[, reason]

      verify() has two advantages over assert:  it works even in -O
      mode, and it raises TestFailed on failure instead of
      AssertionError.

    * TESTFN - a string that should always be used as the filename when
      you need to create a temp file.  Also use try/finally to ensure
      that your temp files are deleted before your test completes.  Note
      that you cannot unlink an open file on all operating systems, so
      also be sure to close temp files before trying to unlink them.

    * sortdict(dict) - acts like repr(dict.items()), but sorts the items
      first.  This is important when printing a dict value, because the
      order of items produced by dict.items() is not defined by the
      language.

    * findfile(file) - you can call this function to locate a file
      somewhere along sys.path or in the Lib/test tree - see
      test_linuxaudiodev.py for an example of its use.

    * use_large_resources - true iff tests requiring large time or space
      should be run.

    * fcmp(x,y) - you can call this function to compare two floating
      point numbers when you expect them to only be approximately equal
      within a fuzz factor (test_support.FUZZ, which defaults to 1e-6).
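
The TESTFN and try/finally advice above can be sketched as follows
(using a stand-in filename, since test_support itself is only importable
from within the test suite):

```python
import os

TESTFN = '@test'  # stand-in for test_support.TESTFN

try:
    f = open(TESTFN, 'w')
    f.write('spam\n')
    # Close before unlinking: you cannot unlink an open file on all
    # operating systems.
    f.close()
    f = open(TESTFN)
    data = f.read()
    f.close()
finally:
    # Ensure the temp file is gone, so later tests can reuse the name
    # without cleaning up after this one.
    if os.path.exists(TESTFN):
        os.unlink(TESTFN)

print(repr(data))
```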

NOTE:  Always import something from test_support like so:

    from test_support import verbose

or like so:

    import test_support
    ... use test_support.verbose in the code ...

Never import anything from test_support like this:

    from test.test_support import verbose

"test" is a package already, so it can refer to modules it contains
without "test." qualification.  If you do an explicit "test.xxx"
qualification, that can fool Python into believing test.xxx is a module
distinct from the xxx in the current package, and you can end up
importing two distinct copies of xxx.  This is especially bad if
xxx=test_support, as regrtest.py can (and routinely does) overwrite its
"verbose" and "use_large_resources" attributes:  if you get a second
copy of test_support loaded, it may not have the same values for those
as regrtest intended.


Python and C statement coverage results are currently available at

    http://www.musi-cal.com/~skip/python/Python/dist/src/

As of this writing (July, 2000) these results are being generated
nightly.  You can refer to the summaries and the test coverage output
files to see where coverage is adequate or lacking and write test cases
to beef up the coverage.


Some Non-Obvious regrtest Features

    * Automagic test detection:  When you create a new test file
      test_spam.py, you do not need to modify regrtest (or anything
      else) to advertise its existence.  regrtest searches for and runs
      all modules in the test directory with names of the form
      test_xxx.py.

    * Miranda output:  If, when running test_spam.py, regrtest does not
      find an expected-output file test/output/test_spam, regrtest
      pretends that it did find one, containing the single line

      test_spam

      This allows new tests that don't expect to print anything to
      stdout to not bother creating expected-output files.

    * Two-stage testing:  To run test_spam.py, regrtest imports
      test_spam as a module.  Most tests run to completion as a
      side-effect of getting imported.  After importing test_spam,
      regrtest also executes test_spam.test_main(), if test_spam has a
      "test_main" attribute.  This is rarely needed, and you shouldn't
      create a module global with name test_main unless you're
      specifically exploiting this gimmick.  In such cases, please put a
      comment saying so near your def test_main, because this feature is
      so rarely used it's not obvious when reading the test code.