In XP the process is to test today every line of code written today. Here is a New Zealand experience.

The test harnesses should support regression testing: they write their results out to a file of some sort and compare each run's results with the previous run's, repeating all previous tests and logging any difference that occurs (new tests excepted). At least three different forms of harness are needed, corresponding to different structural forms: COM objects, which can be tested from simple files; middle-tier database objects, which update the database and so require it to be refreshed between tests (or the changes made to be automatically highlighted); and GUI code, which requires the capture of keystrokes and screen "scraping" and "pasting". The output should be readable by a nominal user.
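The file-and-compare behaviour of such a harness can be sketched as follows. This is a minimal illustration, not the harness the article describes: the `run_tests` callback and the result-file naming are hypothetical, and real harnesses for COM, database, or GUI code would gather their results very differently.

```python
import os
import shutil

def regression_run(run_tests, results_path="results.txt"):
    """Run all tests, write the results to a file, and compare them with
    the previous run's file, reporting any result that changed.

    run_tests is assumed to yield (test_name, result_string) pairs and to
    re-run every prior test as well as any new ones."""
    prev_path = results_path + ".prev"
    # Keep the previous run's results for comparison.
    if os.path.exists(results_path):
        shutil.move(results_path, prev_path)

    current = dict(run_tests())
    with open(results_path, "w") as f:
        for name, result in sorted(current.items()):
            f.write(f"{name}: {result}\n")

    differences = []
    if os.path.exists(prev_path):
        with open(prev_path) as f:
            previous = dict(line.rstrip("\n").split(": ", 1) for line in f)
        for name, result in current.items():
            if name in previous and previous[name] != result:
                differences.append((name, previous[name], result))
    return differences
```

A genuinely new test has no entry in the previous file, so it is recorded but not reported as a difference, matching the "except for new tests" rule above.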

So the following is a typical pattern:

1. The pair designs the object's interface (its "face").
2. The pair designs the pre- and post-assertions, raising exceptions (in a standard form) on failure.
3. Temporary blocks of clearly identifiable dummy code are inserted so that each method returns apparently valid results.
4. The object is discussed with the greater team, or with the subset who may need to use it soon, and is published in this form.
5. The nominal user is brought in and invited to provide test data (and answers).
6. The pair may split at this point, one writing the code, the other the harness. Real code displaces the dummy code. The pair come together whenever they need to discuss how the next case will be achieved.
7. As soon as a new case can be tested, it is tested, and each test run repeats all prior tests.
8. Any code not seen by the tester of the pair is code-inspected; inspection also ensures standards are being met. The pair soon develop a pattern of when to work together and when to separate to code, test, and inspect.
9. At any time, but certainly when the object is complete, a coverage test is run using readily available tools. It should show that all code has been executed (except, possibly, code that logs environmental failures). 100% coverage is a necessary but not sufficient condition for defect-free code. A performance test is run at the same time, because it is convenient to do so and it occasionally picks up bad code.
10. The nominal user is invited to check and re-run the tests.
11. The object and the test harness are published to the team and QA. QA are expected to re-run the tests immediately, to validate that they have the current versions of all components, including any invoked by the new code. QA may also need to re-run the tests in different operating environments (versions of the OS).

Your comments are invited.
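The assertion-plus-dummy-code step early in the pattern can be sketched like this. The `Discount` class, its method, and the `ContractError` exception are hypothetical names chosen only to illustrate the shape; the "standard form" of exception would be whatever the team has agreed on.

```python
class ContractError(Exception):
    """Standard-form exception raised when a pre- or post-assertion fails."""
    def __init__(self, kind, detail):
        super().__init__(f"{kind} assertion failed: {detail}")
        self.kind = kind  # "pre" or "post"

class Discount:
    def percent_for(self, order_total):
        # Pre-assertion: reject invalid input before doing any work.
        if order_total < 0:
            raise ContractError("pre", f"order_total={order_total} is negative")

        # DUMMY CODE -- clearly identifiable placeholder returning an
        # apparently valid result; real code displaces this later.
        result = 5.0

        # Post-assertion: whatever the body does, the result must be a
        # sane percentage.
        if not 0.0 <= result <= 100.0:
            raise ContractError("post", f"result={result} is out of range")
        return result
```

Because the dummy body already satisfies the post-assertion, the object can be published, discussed with the team, and exercised by the harness before any real code exists.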
