Test Driven Development: A Silver Bullet?
Test Driven Development (TDD) is a software development process that emerged in the early Noughties and is attributed to Kent Beck. The basic idea is that a test is written and run first (it naturally fails) before any code is developed to pass it. For each additional feature or line of code, another test case is written first, and each time all previous test cases are rerun. At its simplest, then, the approach assures a correctly operating feature and provides a regression test for every additional line of code.
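The cycle above can be sketched in Python's standard unittest framework. This is a minimal illustration, not from the article: `slugify` is a hypothetical function, and each test represents one increment written before the code that passes it.

```python
import unittest

# In the TDD cycle, each test below would have been written first and
# seen to fail before the corresponding behaviour was implemented.
def slugify(title):
    # Minimal implementation, written only after the tests failed.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        # The first increment: basic behaviour.
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        # A later increment; the earlier test is rerun alongside it,
        # acting as the regression check the article describes.
        self.assertEqual(slugify("  Hello World  "), "hello-world")
```

The suite would be run with `python -m unittest` after every change.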
To make Test Driven Development viable, automated test systems are mandatory. These must also be fast, since the developer will constantly be rerunning the test suite. No line of code should exist that does not have a corresponding test. By its very nature, this means that not only is the feature tested but each “defensive programming” element, such as null-pointer checks and boundary checks, is naturally covered as well. The method is highly iterative: the developer is rapidly refactoring code as each new line (feature) is created, and the test cases are reworked to suit.
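To illustrate how defensive elements fall out of test-first thinking, here is a small hypothetical sketch (the function `safe_head` is invented for this example): writing the test cases first forces a decision about the null and empty-boundary behaviour before any production code exists.

```python
import unittest

def safe_head(items):
    # The None check and the empty-list boundary check exist because
    # test cases were written for them first, not as an afterthought.
    if items is None or len(items) == 0:
        return None
    return items[0]

class TestSafeHead(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(safe_head([1, 2, 3]), 1)

    def test_none_input(self):
        # Defensive case: a null input is defined behaviour, not a crash.
        self.assertIsNone(safe_head(None))

    def test_empty_list_boundary(self):
        # Boundary case: the empty collection is covered by design.
        self.assertIsNone(safe_head([]))
```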
One of the benefits of TDD is that code quality rises significantly. Indeed, in a study conducted at Microsoft*, projects run with a Test Driven Development methodology saw a 60-90% drop in defects. Defects, of course, are directly costly and time consuming, with each bug found at a later stage costing significantly more than one caught earlier in the lifecycle. This is not entirely unexpected, since the code isn’t released for integration until it passes its test suite, with each line of code having a corresponding test case.
This must, of course, come at a cost? Surely the effort required to build a test infrastructure around each code element must eat into the project timeframe? It’s easy to see this argument, and even in the Microsoft study managers subjectively estimated an increase of between 15% and 30% in project development time. However, one could argue that this method still reduces the overall development time, because the “back end” of the project becomes vastly more certain. In traditional projects it’s not uncommon to find the integration and system testing stages becoming a never-ending cycle of test, debug and fix. That cycle is hard to plan for, and so code gets shipped complete with agreed outstanding bugs. With the much higher level of quality in TDD projects, the back-end phases are much shorter.
Another benefit of the TDD approach is that even when bugs are found later in the lifecycle, they can be fixed and the entire test suite rerun to ensure non-regression of the software. A recent client demonstrated this clearly when, just days before a shipping date, a change was made to fix a bug. The change was thought to be extremely innocuous, but it stopped an unrelated aspect of the system from functioning; fortunately, this was detected moments before shipping and the change was rolled back. Had a test suite been available, the impact of this simple change would have been detected instantly by the developer, saving hours of engineers’ time hunting for the culprit.
An additional benefit of TDD is that an evidential measure of progress can be established for the development programme. A feature doesn’t exist unless it works, and it can’t be claimed to work unless a test case has been written and executed for it. This is a gold mine for managers and planners alike, and a positive step forward from the traditional claim from engineers that “it’s 90% done!” Features that are working are shippable, so if development is running late, decisions can be made with certainty about what goes into the release.
Thinking about how a particular test will be created also has the added benefit of improving the structure of the developed code. Better interfaces, greater decoupling and lower complexity are the inevitable consequences of having to write tests first. The constant refactoring in TDD also means that engineers are not frightened to make a change that lowers complexity, since everything will simply be retested anyhow.
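The decoupling effect can be shown with a small hypothetical sketch (`build_report_header` and its injected clock are invented for this example): asking “how will I test this?” pushes an awkward hidden dependency, the system clock, out to the function’s interface, where a test can substitute it.

```python
import unittest
from unittest import mock

def build_report_header(now_fn):
    # The clock is injected rather than read from the system directly;
    # writing the test first made the hidden dependency explicit.
    return "Report generated at " + now_fn()

class TestReportHeader(unittest.TestCase):
    def test_header_uses_injected_clock(self):
        # A fake clock makes the test deterministic and fast.
        fake_clock = mock.Mock(return_value="2024-01-01 09:00")
        self.assertEqual(
            build_report_header(fake_clock),
            "Report generated at 2024-01-01 09:00",
        )
```

In production the caller would pass a real clock, e.g. a lambda wrapping `datetime.now`.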
However, it’s important to recognise the limitations of TDD. What TDD doesn’t do (at least not in the established way) is remove the likelihood of usability errors, timing/real-time issues and other performance-based aspects of a software system. When developers implement a test, their assumptions are rolled into that case: if they think the user wants a red button, the test case will reflect that. Confident that the test has passed, the developer is ignorant of the fact that the user actually wanted a blue button. This type of error cannot be eliminated just because the project is using TDD. Similarly with integration testing: just because the individual units have been tested fully (via TDD) doesn’t mean they will function when connected together. This is particularly so in developments involving more than one person, where differing assumptions can be made. Therefore, Integration, System and Acceptance testing remain crucial phases in a successful software development lifecycle.
Another argument often raised by critics of TDD is that more effort is required in the early stages of development. This can be cumbersome on prototype or proof-of-concept projects, and indeed it is – if the idea is simply to see whether something is possible! However, many organisations have started out with a prototype that evolved into the production version. Frederick Brooks’s idea of “build one to throw away” is frequently seen as a step too far! But let’s be clear: a prototype developed without TDD will not have the kind of quality discussed above, and ergo the production version it becomes will equally be afflicted by a swarm of bugs. The effort to tame this beast will be uncontrolled, and it’s probably just too late to introduce TDD at that stage. Let’s not forget that TDD is an investment over the lifetime of the software, not just during its initial development (although opponents of TDD will be quick to point out the maintenance cost of the test suites).
So, is TDD the silver bullet? Obviously not, but it is a useful tool in the software team’s armoury, one that can reduce defects and produce working software swiftly. And let’s face it – there is something quite satisfying (as an engineer) in getting something working!