Writing Python Regression Tests
-------------------------------
Skip Montanaro
(skip@mojam.com)


Introduction

If you add a new module to Python or modify the functionality of an existing
module, you should write one or more test cases to exercise that new
functionality. The mechanics of how the test system operates are fairly
straightforward. When a test case is run, its output is compared with the
expected output stored in .../Lib/test/output. If the test runs to
completion and the actual and expected outputs match, the test succeeds; if
not, it fails. If an ImportError or test_support.TestSkipped exception is
raised, the test is not run.
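
For example, a hypothetical test module test_spam.py (exercising an equally
hypothetical spam module) might contain nothing more than

    from spam import ham
    print 'ham(0) =', ham(0)

and its expected output file, Lib/test/output/test_spam, holds the output
of a reference run; the -g option described under "Executing Test Cases"
below generates that file for you.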

You will be writing unit tests (isolated tests of functions and objects
defined by the module) using white box techniques. Unlike black box
testing, where you only have the external interfaces to guide your test case
writing, in white box testing you can see the code being tested and tailor
your test cases to exercise it more completely. In particular, you will be
able to refer to the C and Python code in the CVS repository when writing
your regression test cases.


Executing Test Cases

If you are writing test cases for module spam, you need to create a file
in .../Lib/test named test_spam.py and an expected output file in
.../Lib/test/output named test_spam ("..." represents the top-level
directory in the Python source tree, the directory containing the configure
script). From the top-level directory, generate the initial version of the
test output file by executing:

    ./python Lib/test/regrtest.py -g test_spam.py

Any time you modify test_spam.py, you need to generate a new expected
output file. Don't forget to desk-check the generated output to make sure
it's really what you expected to find! To run a single test after modifying
a module, simply run regrtest.py without the -g flag:

    ./python Lib/test/regrtest.py test_spam.py

While debugging a regression test, you can of course execute it
independently of the regression testing framework and see what it prints:

    ./python Lib/test/test_spam.py

To run the entire test suite, make the "test" target at the top level:

    make test

On non-Unix platforms where make may not be available, you can simply
execute the two runs of regrtest (optimized and non-optimized) directly:

    ./python Lib/test/regrtest.py
    ./python -O Lib/test/regrtest.py


Test cases generate output based upon values computed by the test code.
When executed, regrtest.py compares the actual output generated by executing
the test case with the expected output and reports success or failure. It
stands to reason that if the actual and expected outputs are to match, they
must not contain any machine dependencies. This means your test cases
should not print out absolute machine addresses (e.g. the return value of
the id() built-in function) or floating point numbers with large numbers of
significant digits (unless you understand what you are doing!).
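
For instance (the spam module and its functions here are hypothetical),
print a reproducible property of a result rather than a raw
machine-dependent value:

    import spam

    result = spam.ham()
    # Don't print id(result) -- addresses change from run to run.
    # Print something stable instead:
    print type(result)
    # Round floats before printing so the output doesn't depend on the
    # last few bits of the platform's floating point arithmetic:
    print round(spam.pi_estimate(), 6)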


Test Case Writing Tips

Writing good test cases is a skilled task and is too complex to discuss in
detail in this short document. Many books have been written on the subject.
I'll show my age by suggesting that Glenford Myers' "The Art of Software
Testing", published in 1979, is still the best introduction to the subject
available. It is short (177 pages), easy to read, and discusses the major
elements of software testing, though its publication predates the
object-oriented software revolution, so it doesn't cover that subject at
all. Unfortunately, it is very expensive (about $100 new). If you can
borrow it or find it used (around $20), I strongly urge you to pick up a
copy.

The most important goal when writing test cases is to break things. A test
case that doesn't uncover a bug is much less valuable than one that does.
In designing test cases you should pay attention to the following:

    * Your test cases should exercise all the functions and objects defined
      in the module, not just the ones meant to be called by users of your
      module. This may require you to write test code that uses the module
      in ways you don't expect (explicitly calling internal functions, for
      example - see test_atexit.py).

    * You should consider any boundary values that may tickle exceptional
      conditions (e.g. if you were writing regression tests for division,
      you might well want to generate tests with numerators and denominators
      at the limits of floating point and integer numbers on the machine
      performing the tests as well as a denominator of zero).

    * You should exercise as many paths through the code as possible. This
      may not always be possible, but is a goal to strive for. In
      particular, when considering if statements (or their equivalent), you
      want to create test cases that exercise both the true and false
      branches. For loops, you should create test cases that exercise the
      loop zero, one and multiple times.

    * You should test with obviously invalid input. If you know that a
      function requires an integer input, try calling it with other types of
      objects to see how it responds.

    * You should test with obviously out-of-range input. If the domain of a
      function is only defined for positive integers, try calling it with a
      negative integer.

    * If you are going to fix a bug that wasn't uncovered by an existing
      test, try to write a test case that exposes the bug (preferably before
      fixing it).

    * If you need to create a temporary file, you can use the filename in
      test_support.TESTFN to do so. It is important to remove the file
      when done; other tests should be able to use the name without cleaning
      up after your test. The sketch following this list shows one way to
      do this.
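
Here is a rough sketch illustrating two of the tips above: probing a
function with obviously invalid input, and cleaning up a temporary file
named by test_support.TESTFN. The spam module and its ham() and
digest_file() functions are hypothetical.

    import os
    from test_support import TESTFN
    import spam

    # Obviously invalid input: a string where an integer is required.
    try:
        spam.ham('not an integer')
    except TypeError:
        print 'ham() rejected a string argument, as expected'
    else:
        print 'ham() accepted a string argument -- it should not have'

    # Use TESTFN for scratch files, and always remove them afterwards.
    f = open(TESTFN, 'w')
    try:
        f.write('spam spam spam\n')
        f.close()
        print 'digest_file:', spam.digest_file(TESTFN)
    finally:
        try:
            os.unlink(TESTFN)
        except os.error:
            pass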


Regression Test Writing Rules

Each test case is different. There is no "standard" form for a Python
regression test case, though there are some general rules; a short skeleton
illustrating them follows the list:

    * If your test case detects a failure, raise TestFailed (found in
      test_support).

    * Import everything you'll need as early as possible.

    * If you'll be importing objects from a module that is at least
      partially platform-dependent, only import those objects you need for
      the current test case to avoid spurious ImportError exceptions that
      prevent the test from running to completion.

    * Print all your test case results using the print statement. For
      non-fatal errors, print an error message (or omit a successful
      completion print) to indicate the failure, but proceed instead of
      raising TestFailed.

    * Use "assert" sparingly, if at all. It's usually better to just print
      what you got, and rely on regrtest's got-vs-expected comparison to
      catch deviations from what you expect. assert statements aren't
      executed at all when regrtest is run in -O mode; and, because they
      cause the test to stop immediately, they can lead to a long & tedious
      test-fix, test-fix, test-fix, ... cycle when things are badly broken
      (and note that "badly broken" often includes running the test suite
      for the first time on new platforms or under new implementations of
      the language).
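
Putting these rules together, the skeleton of a test module often looks
something like the following sketch (the spam module and its ham()
function are hypothetical):

    from test_support import verbose, TestFailed
    # Import only the objects this test actually needs; if a
    # platform-dependent piece is missing, the resulting ImportError
    # causes the whole test to be skipped rather than to fail.
    from spam import ham

    # Results are printed and compared against the expected output file.
    print 'ham(3) =', ham(3)

    if verbose:
        print 'testing ham() with a negative argument'
    try:
        ham(-1)
    except ValueError:
        print 'ham(-1) raised ValueError, as expected'
    else:
        raise TestFailed, 'ham(-1) should have raised ValueError'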


Miscellaneous

There is a test_support module you can import from your test case. It
provides the following useful objects (a short sketch after the list shows
several of them in use):

    * TestFailed - raise this exception when your regression test detects a
      failure.

    * TestSkipped - raise this if the test could not be run because the
      platform doesn't offer all the required facilities (like large
      file support), even if all the required modules are available.

    * findfile(file) - you can call this function to locate a file somewhere
      along sys.path or in the Lib/test tree - see test_linuxaudiodev.py for
      an example of its use.

    * verbose - you can use this variable to control print output. Many
      modules use it. Search for "verbose" in the test_*.py files to see
      lots of examples.

    * use_large_resources - true iff tests requiring large time or space
      should be run.

    * fcmp(x,y) - you can call this function to compare two floating point
      numbers when you expect them to be only approximately equal within a
      fuzz factor (test_support.FUZZ, which defaults to 1e-6).
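
A sketch showing several of these test_support objects together; the spam
module, its functions, and the data file name are illustrative only:

    from test_support import verbose, findfile, fcmp, TestSkipped
    import spam

    # Skip the test entirely if the platform can't support it.
    if not spam.has_audio():
        raise TestSkipped, 'no audio hardware available'

    if verbose:
        print 'using data file', findfile('audiotest.au')

    # Compare floats only approximately.
    if not fcmp(spam.ratio(), 0.333333):
        print 'ratio() is not close to 1/3'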

NOTE: Always import something from test_support like so:

    from test_support import verbose

or like so:

    import test_support
    ... use test_support.verbose in the code ...

Never import anything from test_support like this:

    from test.test_support import verbose

"test" is a package already, so a test module can refer to other modules in
the package without "test." qualification. If you do an explicit "test.xxx"
qualification, that can fool Python into believing test.xxx is a module
distinct from the xxx in the current package, and you can end up importing
two distinct copies of xxx. This is especially bad if xxx=test_support, as
regrtest.py can (and routinely does) overwrite its "verbose" and
"use_large_resources" attributes: if you get a second copy of test_support
loaded, it may not have the same values for those as regrtest intended.


Python and C statement coverage results are currently available at

    http://www.musi-cal.com/~skip/python/Python/dist/src/

As of this writing (July, 2000) these results are being generated nightly.
You can refer to the summaries and the test coverage output files to see
where coverage is adequate or lacking and write test cases to beef up the
coverage.