Ticking The Box – Effective Module Testing
In the world of software development, one of the topics of contention is module testing. Some value the approach whilst others see it as worthless. These diametrically opposed views even pervade the world of safety-critical software development. In these organisations, standards such as IEC 61508, EN 50128 and Def Stan 00-55 usually require (even mandate) the use of module testing. In this context, module testing is just one link in a chain of tools for achieving safe software. However, not everybody is convinced of its benefits, and it is not uncommon for module testing to be applied with a retrospective “ticking the box” approach.
Zircon Software undertakes a range of software services spanning requirements, design, implementation and testing. We are regularly engaged by our clients to carry out module testing, both at the time of development and at the later stages of a programme. So what is module testing?
In recent years the terminology has shifted, with many using the terms unit testing, component testing and module testing interchangeably. Fundamentally, though, it covers testing software at its smallest practical chunk, be that a class, an individual function or a C module (although definitions do vary). Testing techniques fall into two camps: black-box testing, in which test cases are derived from the specification (or interface) of the module, and white-box testing, where test cases are formed from knowledge of the inner workings of the module under test.
Black-box testing typically aims to show that the software module does what the specification defines, whilst white-box testing tends to prove that no other features are buried inside the piece of code.
Test cases are defined using a multitude of techniques, including Boundary Value Analysis (bugs like to congregate around the boundary points of data and logic), Equivalence Classes (the valid range of number_of_engines is not the same as the range of an INT32), and Error Guessing (engineers always forget to call the initialisation routine first).
The problem with module testing is that it is an extremely labour-intensive and time-consuming activity. Often the work is off-shored to low-cost labour countries, and the effort of managing the off-shore process, together with the inevitable wash-dry-repeat nature of the development programme, makes managers reluctant to start the process early. Citing efficiency, and the natural human reluctance to do the same job twice, managers delay module testing until they are sure ‘nothing will change’.
Referring back for a moment to the safety software standards above, it is apparent that they all require a layered approach to testing: starting with the lowest-level module testing, followed by integration testing, system testing and finally acceptance testing. Each layer is aimed at finding different types of bug, with module testing looking for logical and basic functionality errors, integration testing focusing on the interaction of modules with each other, and system testing examining performance and timing issues. This approach can be thought of as a ‘sieving’ process, removing the crudest bugs first and then steadily eradicating more subtle incarnations as it progresses. In practical terms, the process aims to stamp out the simplest logical, functional and interaction bugs BEFORE the system is plugged together.
Why not just test for these bugs at the system level? Quite simply, it’s a matter of time and cost. Remember the ‘Cost of Bugs’ curve? This model demonstrates that the cost to find and fix a bug rises by increasing factors as development progresses. At the module testing stage the factor is small (and at the design review stage it is smaller still!), whilst at the release stage it may be a factor of ten. The simple fact is that testing on a system rig is an incredibly slow and difficult activity. It is often very hard to control the inputs to the system at the minute level required to isolate a problem. Real-time systems have enormous amounts of activity happening simultaneously, or within fractions of a millisecond. Furthermore, the ability to ‘see’ into the software, to know what is going on in this module or that, often doesn’t exist. This system testing (in reality, debugging) descends into an endless cycle of find, fix and release. Managers begin to tear their hair out as the schedule slips further and further to the right. Costs spiral and confidence in the system evaporates. The back end of the project soon becomes a black hole, sucking in all hands in a desperate attempt to rein in the schedule.
Back in the late nineties and early noughties, Extreme Programming began to take off as a method of improving productivity through improving quality. One element within it was the concept of Test Driven Development. In this approach, the developer writes the test cases first and then writes code to ‘pass the test’. The simple idea was to end up with a piece of code that actually did ‘what it said on the tin’. As a result, when modules were brought together into an overall system, fewer issues were found during testing, which in turn led to much better adherence to project schedules. When we step back and think about it for a moment, this is rather obvious: less time spent hunting for bugs at the system stage directly equates to a quicker testing phase, the ‘back end’ of the project becomes more predictable, and the overall duration of the project is reduced.
Let’s focus on that last statement one more time: If bugs are found earlier the project will cost less because it will be shorter (and more predictable).
So rather than delaying module testing, it really needs to be performed BEFORE the module is integrated, ideally from the moment the developer starts to write it. With safety-related systems, an independent person must often perform the testing. However, there is no reason why this tester cannot begin working with the developer immediately, with the sole aim of finding and removing bugs at the earliest opportunity, and certainly before any code is integrated.
Delaying module testing, or conducting a ‘tick-the-box’ process, is therefore a false economy.