It never ceases to frustrate and disappoint me when I hear people talking of test cases as use-once, throwaway artefacts. Any team worth its salt builds a library of tests and treats that library as an asset worth investing in.
Any system change needs to be tested from two perspectives:
- Has our changed functionality taken effect? (incremental testing)
- Have we broken any existing functionality? (regression testing)
The former tends to be the main focus; the latter is often overlooked (it is simply assumed that nothing got broken). Worse still, since today's change will be different to tomorrow's (or next week's), there's a tendency to throw away today's incremental test cases. Yet today's incremental test cases are tomorrow's regression test cases.
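To make that concrete, here's a minimal sketch in Python with pytest. The `apply_discount` function and its tests are invented for illustration; the point is that the tests written today to prove the new feature works are committed to the suite and re-run, untouched, on every future change.

```python
# test_pricing.py -- hypothetical example: the function under test and
# the incremental tests written for it, kept in one file for brevity.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """New feature being introduced today: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Incremental tests, written today to prove the new feature works...
def test_discount_applied():
    assert apply_discount(100.0, 25.0) == 75.0

def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)

# ...and never deleted: every future `pytest` run re-executes them
# unedited, at which point they have quietly become regression tests.
```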
At one extreme, such as when building software for passenger jet aircraft, we might adopt the following strategy:
- When we introduce the system, we write and execute test cases for every testable element
- When we introduce a new function, we write test cases for the new function, run them to make sure the new function works, and re-run all the previous test cases to make sure we didn't break anything (they should all pass, because nothing else changed, right?)
- When we update existing functionality, we update the existing test cases for that function, run them to make sure the updated function works, and re-run all the previous test cases to make sure we didn't break anything (again, they should all pass, because nothing else changed); the sketch after this list shows the cycle in miniature
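As a hypothetical illustration of that strategy (the module, functions, and release history below are invented, not from any particular project), watch the suite accumulate across releases while nothing is ever thrown away:

```python
# test_textlib.py -- a hypothetical module evolving over three releases,
# with its test suite growing alongside it.

def to_title(text: str) -> str:
    """Release 1: part of the original system."""
    return text.strip().title()   # .strip() added by release 3's update

def slugify(text: str) -> str:
    """Release 2: a new function introduced later."""
    return text.strip().lower().replace(" ", "-")

# Release 1: test cases written and executed for every testable element.
def test_to_title():
    assert to_title("hello world") == "Hello World"

# Release 2: a new incremental test for the new function; test_to_title
# above is re-run, unedited, as a regression test.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Release 3: to_title was updated to strip surrounding whitespace, so its
# test cases are updated in step; everything else is re-run as-is.
def test_to_title_strips_whitespace():
    assert to_title("  hello world  ") == "Hello World"
```

A plain `pytest` run collects and executes the entire accumulated suite, new tests and old alike, every single time.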
Now, if we're not building software for passenger jets, we need to take a more pragmatic, risk-based approach. Testing is not about creating guarantees; it's about establishing sufficient confidence in our software product, and we only need to do enough testing to establish the desired degree of confidence. That leaves two relatively subjective decisions to be made:
- How much confidence do we need?
- How many tests (and what type) do we need to establish the desired degree of confidence?
Wherever we draw the line of "sufficient confidence", our second decision ought to conclude that we need to run a mixture of incremental tests and regression tests. And rather than writing fresh regression tests every time, we should be calling upon our library of past incremental tests and re-running them. The bottom line, once again: today's incremental tests are tomorrow's regression tests - they should pass unmodified, because no other part of the system has changed.
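One common way to act on those decisions is to tag tests by purpose and select a risk-appropriate subset per run. The sketch below uses pytest's custom markers; the feature, marker names, and numbers are assumptions for illustration, not something this article prescribes.

```python
# test_orders.py -- hypothetical tests tagged so a risk-based subset can
# be selected at run time. Custom markers like these would be registered
# in pytest.ini to avoid "unknown marker" warnings.
import pytest

def apply_bulk_discount(qty: int, unit_price: float) -> float:
    """Hypothetical feature: 10% off orders of ten or more items."""
    total = qty * unit_price
    return total * 0.9 if qty >= 10 else total

@pytest.mark.incremental      # today's test for the changed behaviour
def test_bulk_discount_applied():
    assert apply_bulk_discount(10, 100.0) == 900.0

@pytest.mark.regression       # yesterday's incremental test, re-run as-is
def test_single_item_price_unchanged():
    assert apply_bulk_discount(1, 100.0) == 100.0
```

`pytest -m regression` runs only the regression subset when time is short; a bare `pytest` run buys maximum confidence by running everything. How far to turn that dial is exactly the judgement call described above.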
Every one of our test cases is an investment, not an ephemeral object. If we're investing in test cases and managing our technical debt, then we are on the way to having a responsibly managed development team!