Writing Python Regression Tests
-------------------------------
Skip Montanaro
(skip@mojam.com)


Introduction

If you add a new module to Python or modify the functionality of an existing
module, you should write one or more test cases to exercise that new
functionality.  There are different ways to do this within the regression
testing facility provided with Python; any particular test should use only
one of these options.  Each option requires writing a test module using the
conventions of the selected option:

    - PyUnit based tests
    - doctest based tests
    - "traditional" Python test modules

Regardless of the mechanics of the testing approach you choose,
you will be writing unit tests (isolated tests of functions and objects
defined by the module) using white box techniques.  Unlike black box
testing, where you only have the external interfaces to guide your test case
writing, in white box testing you can see the code being tested and tailor
your test cases to exercise it more completely.  In particular, you will be
able to refer to the C and Python code in the CVS repository when writing
your regression test cases.


PyUnit based tests

The PyUnit framework is based on the ideas of unit testing as espoused
by Kent Beck and the Extreme Programming (XP) movement.  The specific
interface provided by the framework is tightly based on the JUnit
Java implementation of Beck's original SmallTalk test framework.  Please
see the documentation of the unittest module for detailed information on
the interface and general guidelines on writing PyUnit based tests.

The test_support helper module provides two functions for use by
PyUnit based tests in the Python regression testing framework:
run_unittest() takes a unittest.TestCase derived class as a parameter
and runs the tests defined in that class, and run_suite() takes a
populated TestSuite instance and runs the tests.  run_suite() is
preferred because unittest files typically grow multiple test classes,
and you might as well be prepared.

All test methods in the Python regression framework have names that
start with "test_" and use lower-case names with words separated with
underscores.

All PyUnit-based tests in the Python test suite use boilerplate that
looks like this:

    import unittest
    from test import test_support

    class MyTestCase1(unittest.TestCase):
        # define test methods here...

    class MyTestCase2(unittest.TestCase):
        # define more test methods here...

    def test_main():
        suite = unittest.TestSuite()
        suite.addTest(unittest.makeSuite(MyTestCase1))
        suite.addTest(unittest.makeSuite(MyTestCase2))
        test_support.run_suite(suite)

    if __name__ == "__main__":
        test_main()

This has the advantage that it allows the test module to be run directly
as a script to exercise individual tests, while also working well with
the regrtest framework.


doctest based tests

Tests written to use doctest are actually part of the docstrings for
the module being tested.  Each test is written as a display of an
interactive session, including the Python prompts, statements that would
be typed by the user, and the output of those statements (including
tracebacks, although only the exception message needs to be retained).
The module in the test package is simply a wrapper that causes doctest
to run over the tests in the module.  The test for the difflib module
provides a convenient example:

    import difflib
    from test import test_support
    test_support.run_doctest(difflib)

If the test is successful, nothing is written to stdout (so you should not
create a corresponding output/test_difflib file), but running regrtest
with -v will give a detailed report, the same as if passing -v to doctest.

A second argument can be passed to run_doctest to tell doctest to search
sys.argv for -v instead of using test_support's idea of verbosity.  This
is useful for writing doctest-based tests that aren't simply running a
doctest'ed Lib module, but contain the doctests themselves.  Then at
times you may want to run such a test directly as a doctest, independent
of the regrtest framework.  The tail end of test_descrtut.py is a good
example:

    def test_main(verbose=None):
        from test import test_support, test_descrtut
        test_support.run_doctest(test_descrtut, verbose)

    if __name__ == "__main__":
        test_main(1)

If run via regrtest, test_main() is called (by regrtest) without specifying
verbose, and then test_support's idea of verbosity is used.  But when
run directly, test_main(1) is called, and then doctest's idea of verbosity
is used.
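To make the docstring form of these tests concrete, here is a minimal
self-contained sketch; the factorial function and its docstring are
invented for illustration, not taken from the Python library:

```python
import doctest

def factorial(n):
    """Return n! for a non-negative integer n.

    >>> factorial(0)
    1
    >>> factorial(5)
    120
    >>> factorial(-1)
    Traceback (most recent call last):
        ...
    ValueError: n must be >= 0
    """
    if n < 0:
        raise ValueError("n must be >= 0")
    result = 1
    for i in range(2, n + 1):
        result = result * i
    return result

# Run the examples embedded in the docstrings above; on success doctest
# prints nothing, which is exactly what regrtest expects from a
# doctest-based test.
doctest.testmod()
```

Note that in the traceback example only the exception line needs to
match; doctest ignores the stack-trace portion between the "Traceback"
header and the final exception message.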

See the documentation for the doctest module for information on
writing tests using the doctest framework.


"traditional" Python test modules

The mechanics of how the "traditional" test system operates are fairly
straightforward.  When a test case is run, the output is compared with the
expected output that is stored in .../Lib/test/output.  If the test runs to
completion and the actual and expected outputs match, the test succeeds;
if not, it fails.  If an ImportError or test_support.TestSkipped error is
raised, the test is not run.
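A hedged sketch of the form such a test takes (the spam_double function
here is an invented stand-in; a real test_spam.py would import and
exercise the actual spam module):

```python
# Sketch of a "traditional" test module, test_spam.py.

def spam_double(x):
    # stand-in for a function from the module under test
    return 2 * x

# The test simply prints computed values; regrtest captures stdout and
# compares it, line for line, against Lib/test/output/test_spam.
print(spam_double(2))
print(spam_double("ni"))
```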


Executing Test Cases

If you are writing test cases for module spam, you need to create a file
in .../Lib/test named test_spam.py.  In addition, if the tests are expected
to write to stdout during a successful run, you also need to create an
expected output file in .../Lib/test/output named test_spam ("..."
represents the top-level directory in the Python source tree, the directory
containing the configure script).  If needed, generate the initial version
of the test output file by executing:

    ./python Lib/test/regrtest.py -g test_spam.py

from the top-level directory.

Any time you modify test_spam.py you need to generate a new expected
output file.  Don't forget to desk check the generated output to make sure
it's really what you expected to find!  All in all it's usually better
not to have an expected-output file (note that doctest- and unittest-based
tests do not).

To run a single test after modifying a module, simply run regrtest.py
without the -g flag:

    ./python Lib/test/regrtest.py test_spam.py

While debugging a regression test, you can of course execute it
independently of the regression testing framework and see what it prints:

    ./python Lib/test/test_spam.py

To run the entire test suite:

[UNIX, + other platforms where "make" works] Make the "test" target at the
top level:

    make test

[WINDOWS] Run rt.bat from your PCBuild directory.  Read the comments at
the top of rt.bat for the use of special -d, -O and -q options processed
by rt.bat.

[OTHER] You can simply execute the two runs of regrtest (optimized and
non-optimized) directly:

    ./python Lib/test/regrtest.py
    ./python -O Lib/test/regrtest.py

But note that this way picks up whatever .pyc and .pyo files happen to be
around.  The makefile and rt.bat ways run the tests twice, the first time
removing all .pyc and .pyo files from the subtree rooted at Lib/.

Test cases generate output based upon values computed by the test code.
When executed, regrtest.py compares the actual output generated by executing
the test case with the expected output and reports success or failure.  It
stands to reason that if the actual and expected outputs are to match, they
must not contain any machine dependencies.  This means your test cases
should not print out absolute machine addresses (e.g. the return value of
the id() built-in function) or floating point numbers with large numbers of
significant digits (unless you understand what you are doing!).

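A short sketch of stable versus machine-dependent output in a
print-comparison test:

```python
# Stable vs. machine-dependent output in a print-comparison test.

value = 0.1 + 0.2

# Bad: the full repr may show platform-sensitive trailing digits.
# print(repr(value))

# Better: limit the significant digits before printing.
print(round(value, 6))

obj = []
# Bad: id() returns a memory address, different on every run.
# print(id(obj))

# Better: print a property of the object whose value is well defined.
print(type(obj).__name__)
```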


Test Case Writing Tips

Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document.  Many books have been written on the subject.
I'll show my age by suggesting that Glenford Myers' "The Art of Software
Testing", published in 1979, is still the best introduction to the subject
available.  It is short (177 pages), easy to read, and discusses the major
elements of software testing, though its publication predates the
object-oriented software revolution, so doesn't cover that subject at all.
Unfortunately, it is very expensive (about $100 new).  If you can borrow it
or find it used (around $20), I strongly urge you to pick up a copy.

The most important goal when writing test cases is to break things.  A test
case that doesn't uncover a bug is much less valuable than one that does.
In designing test cases you should pay attention to the following:

    * Your test cases should exercise all the functions and objects defined
      in the module, not just the ones meant to be called by users of your
      module.  This may require you to write test code that uses the module
      in ways you don't expect (explicitly calling internal functions, for
      example - see test_atexit.py).

    * You should consider any boundary values that may tickle exceptional
      conditions (e.g. if you were writing regression tests for division,
      you might well want to generate tests with numerators and denominators
      at the limits of floating point and integer numbers on the machine
      performing the tests as well as a denominator of zero).

    * You should exercise as many paths through the code as possible.  This
      may not always be possible, but is a goal to strive for.  In
      particular, when considering if statements (or their equivalent), you
      want to create test cases that exercise both the true and false
      branches.  For loops, you should create test cases that exercise the
      loop zero, one and multiple times.

    * You should test with obviously invalid input.  If you know that a
      function requires an integer input, try calling it with other types of
      objects to see how it responds.

    * You should test with obviously out-of-range input.  If the domain of a
      function is only defined for positive integers, try calling it with a
      negative integer.

    * If you are going to fix a bug that wasn't uncovered by an existing
      test, try to write a test case that exposes the bug (preferably before
      fixing it).

    * If you need to create a temporary file, you can use the filename in
      test_support.TESTFN to do so.  It is important to remove the file
      when done; other tests should be able to use the name without cleaning
      up after your test.
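Several of the points above (boundary values, invalid input, out-of-range
input) can be sketched together; the isqrt_positive helper below is
invented for illustration and stands in for real code under test:

```python
# Probing boundary, invalid, and out-of-range inputs for a
# hypothetical integer square root helper.

def isqrt_positive(n):
    # stand-in for the function under test: floor square root of n >= 1
    if not isinstance(n, int):
        raise TypeError("n must be an int")
    if n < 1:
        raise ValueError("n must be positive")
    r = int(n ** 0.5)
    # correct for floating point error near perfect squares
    while r * r > n:
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# boundary value: the smallest legal input
print(isqrt_positive(1))

# obviously invalid input: wrong type
try:
    isqrt_positive("four")
    print("TypeError not raised!")
except TypeError:
    pass

# obviously out-of-range input: negative integer
try:
    isqrt_positive(-4)
    print("ValueError not raised!")
except ValueError:
    pass
```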


Regression Test Writing Rules

Each test case is different.  There is no "standard" form for a Python
regression test case, though there are some general rules (note that
these mostly apply only to the "classic" tests; unittest- and doctest-
based tests should follow the conventions natural to those frameworks):

    * If your test case detects a failure, raise TestFailed (found in
      test.test_support).

    * Import everything you'll need as early as possible.

    * If you'll be importing objects from a module that is at least
      partially platform-dependent, only import those objects you need for
      the current test case to avoid spurious ImportError exceptions that
      prevent the test from running to completion.

    * Print all your test case results using the print statement.  For
      non-fatal errors, print an error message (or omit a successful
      completion print) to indicate the failure, but proceed instead of
      raising TestFailed.

    * Use "assert" sparingly, if at all.  It's usually better to just print
      what you got, and rely on regrtest's got-vs-expected comparison to
      catch deviations from what you expect.  assert statements aren't
      executed at all when regrtest is run in -O mode; and, because they
      cause the test to stop immediately, they can lead to a long & tedious
      test-fix, test-fix, test-fix, ... cycle when things are badly broken
      (and note that "badly broken" often includes running the test suite
      for the first time on new platforms or under new implementations of
      the language).
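The "print what you got" style described in the last two rules might
look like this (gcd here is an invented stand-in for code under test):

```python
# Classic got-vs-expected style: print results rather than assert, so
# regrtest's output comparison catches any deviation and the test keeps
# running past the first failure.

def gcd(a, b):
    # stand-in for the function under test
    while b:
        a, b = b, a % b
    return a

# Fragile alternative (stripped under -O, stops at first failure):
#     assert gcd(12, 18) == 6

for a, b in [(12, 18), (7, 5), (0, 9)]:
    print("gcd(%d, %d) = %d" % (a, b, gcd(a, b)))
```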


Miscellaneous

There is a test_support module in the test package you can import for
your test case.  Import this module using either

    import test.test_support

or

    from test import test_support

test_support provides the following useful objects:

    * TestFailed - raise this exception when your regression test detects a
      failure.

    * TestSkipped - raise this if the test could not be run because the
      platform doesn't offer all the required facilities (like large
      file support), even if all the required modules are available.

    * verbose - you can use this variable to control print output.  Many
      modules use it.  Search for "verbose" in the test_*.py files to see
      lots of examples.

    * verify(condition, reason='test failed').  Use this instead of

          assert condition[, reason]

      verify() has two advantages over assert:  it works even in -O mode,
      and it raises TestFailed on failure instead of AssertionError.

    * TESTFN - a string that should always be used as the filename when you
      need to create a temp file.  Also use try/finally to ensure that your
      temp files are deleted before your test completes.  Note that you
      cannot unlink an open file on all operating systems, so also be sure
      to close temp files before trying to unlink them.

    * sortdict(dict) - acts like repr(dict.items()), but sorts the items
      first.  This is important when printing a dict value, because the
      order of items produced by dict.items() is not defined by the
      language.

    * findfile(file) - you can call this function to locate a file somewhere
      along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
      an example of its use.

    * use_large_resources - true iff tests requiring large time or space
      should be run.

    * fcmp(x,y) - you can call this function to compare two floating point
      numbers when you expect them to only be approximately equal within a
      fuzz factor (test_support.FUZZ, which defaults to 1e-6).
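To make the behavior of two of these helpers concrete, here are rough
re-implementations; these are illustrative sketches only, and the real
versions in test_support may differ in detail:

```python
# Approximate re-implementations (for illustration only) of the
# sortdict() and fcmp() helpers described above.

FUZZ = 1e-6

def fcmp(x, y):
    # approximate float comparison within a relative fuzz factor:
    # 0 if approximately equal, -1 if x < y, 1 if x > y
    if abs(x - y) <= FUZZ * max(abs(x), abs(y)):
        return 0
    if x < y:
        return -1
    return 1

def sortdict(d):
    # render a dict with its items in sorted order, so test output
    # does not depend on the unspecified dict iteration order
    items = sorted(d.items())
    return "{" + ", ".join(["%r: %r" % (k, v) for k, v in items]) + "}"

print(fcmp(1.0, 1.0 + 1e-9))    # approximately equal
print(sortdict({"b": 2, "a": 1}))
```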


Python and C statement coverage results are currently available at

    http://www.musi-cal.com/~skip/python/Python/dist/src/

As of this writing (July, 2000) these results are being generated nightly.
You can refer to the summaries and the test coverage output files to see
where coverage is adequate or lacking and write test cases to beef up the
coverage.


Some Non-Obvious regrtest Features

    * Automagic test detection:  When you create a new test file
      test_spam.py, you do not need to modify regrtest (or anything else)
      to advertise its existence.  regrtest searches for and runs all
      modules in the test directory with names of the form test_xxx.py.

    * Miranda output:  If, when running test_spam.py, regrtest does not
      find an expected-output file test/output/test_spam, regrtest
      pretends that it did find one, containing the single line

          test_spam

      This allows new tests that don't expect to print anything to stdout
      to not bother creating expected-output files.

    * Two-stage testing:  To run test_spam.py, regrtest imports test_spam
      as a module.  Most tests run to completion as a side-effect of
      getting imported.  After importing test_spam, regrtest also executes
      test_spam.test_main(), if test_spam has a "test_main" attribute.
      This is rarely required with the "traditional" Python tests, and
      you shouldn't create a module global with name test_main unless
      you're specifically exploiting this gimmick.  This usage does
      prove useful with PyUnit-based tests, however:  defining
      a test_main() which is run by regrtest and a script-stub in the
      test module ("if __name__ == '__main__': test_main()") allows
      the test to be used like any other Python test and also work
      with the unittest.py-as-a-script approach, allowing a developer
      to run specific tests from the command line.