High quality software – how do we consistently deliver it?

We’ve been writing a series of posts describing the different phases that a software development project goes through from inception to completion.  At each stage we’ve outlined the ways in which we follow best practice to ensure that everything progresses efficiently and that the software satisfies all the client’s requirements.  One of the issues we’ve touched on is quality control – but we’ve tended to do so in passing.  In this post we focus on it exclusively and directly, explaining what we do to ensure high quality solutions.

What kind of quality do you want?

The first point to note is the one we find ourselves making at the start of all of these posts – that every project is different and that we have to adapt our approach accordingly.

With regard to quality, we have to start by asking ourselves, and the client, what level of quality they want to attain.

If we create a solution that fails to meet their expectations, and is unfit for purpose, then this is obviously an issue.  But erring in the other direction is also a problem – providing reliability and functionality that is surplus to requirements, and conducting a string of unnecessary tests, is an expensive waste of time and money.  One can easily spend far more on testing a system than it costs to write the software in the first place, so you want to be sure that level of testing is genuinely needed.

How is this decision made?

We start by looking at the purpose of the software – the function it performs will go a long way to answering the question of what quality standard is required.

All of our work is covered by our BSI-certified ISO 9001 and TickITplus quality system. This requires us to develop a Quality Plan that defines, among other things, the processes to be applied to the project. These processes define the rigour necessary to achieve the required level of quality, which in turn depends on the intended use of the system we are developing.

For safety systems the decision is straightforward: international standards define the processes required to achieve the necessary safety integrity. IEC 61508 is the general standard for the development of software for electrical, electronic and programmable electronic systems. It defines five safety integrity levels, SIL 0 to SIL 4, where SIL 0 means no safety implication and SIL 4 means a single failure could result in loss of life. The software development process is strictly defined for the higher SILs, while a choice of optional methods is allowed at the lower levels.

Different industries have their own standards, similar to IEC 61508, which also define integrity levels: EN 50128 for rail, IEC 60880 for nuclear and DO-178B for aviation. Each defines what is required of the software process to achieve a given integrity level.

Where safety is not part of the equation, other factors define the quality requirements. A data-gathering system, for example, may have no safety requirement but may need to provide very near 100% availability. In that case the processes defined for the lower safety integrity levels – which bring rigour to design, implementation and testing – may be adopted, because they help to deliver the quality the availability target demands.
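
To put an availability target in concrete terms, a short calculation (purely illustrative, not tied to any particular project) shows how little downtime such a target actually allows in a year:

```python
# Illustrative only: convert an availability target into allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999):
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} availability allows about "
          f"{downtime_minutes:.0f} minutes of downtime a year")
```

A "very near 100%" target of 99.99% leaves less than an hour of downtime a year, which is why the rigour of the lower safety integrity processes can be well worth adopting even when safety itself is not at stake.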

Quality management processes – how do we ensure we attain the requisite quality standards?

In general terms most of what we do comes under the umbrella headings of Verification and Validation (V&V).  There is much discussion and confusion in the industry about what these two terms cover, how they differ and how they overlap.

Verification, to our mind, is something you do at the end of each stage (the design stage, the implementation stage…).  The objective is to make sure you have done everything you set out to achieve at the beginning of that stage – it’s about ensuring that the product has been built according to the requirements and design specifications.

Software verification ensures that “you built it right”. In a safety-related project, where the requirements are very detailed and you are following a classic waterfall approach, the verification process is formal.

In a project where the requirements are looser the verification process will be less formal.  What’s more, if you are taking a more agile approach to development, you’ll verify one set of requirements at a time.  You’ll then complete another cycle or sprint and test against the next set.

Software validation, by contrast, ensures that the product actually meets the user’s needs and that the specifications were correct in the first place – that “you built the right thing”.  It confirms that the product, as delivered, will fulfil its intended use, focusing on what the product does: its functionality and performance.

Verification in practice

When verifying documents we usually conduct a peer review.  For code we undertake a combination of static and dynamic testing.  Static testing involves code reviews and analysis: we use automated tools that perform static analysis, checking the code against the coding standards set at the beginning of the project.

A manual code walk-through is also often required to ensure that the developed code is consistent with the design, and that best practice, such as a defensive programming approach, has been followed.  If you’d like to know more about coding standards, see our recent post Implementation: “Just do it” vs “Do it properly”.
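
To give a flavour of the kind of thing a code review or static analysis tool looks for, here is a minimal sketch of a defensive programming style in Python – the function, its limits and its error messages are invented purely for illustration, not taken from a real project:

```python
class SensorRangeError(ValueError):
    """Raised when a reading falls outside the physically possible range."""

def scale_sensor_reading(raw_value: int, gain: float) -> float:
    """Convert a raw ADC count to engineering units, checking inputs first."""
    # Defensive checks: validate every input rather than trusting the caller.
    if not isinstance(raw_value, int):
        raise TypeError(f"raw_value must be an int, got {type(raw_value).__name__}")
    if not 0 <= raw_value <= 4095:  # hypothetical 12-bit ADC range
        raise SensorRangeError(f"raw_value {raw_value} is outside 0..4095")
    if gain <= 0:
        raise ValueError(f"gain must be positive, got {gain}")
    return raw_value * gain
```

The point is not the detail of the checks but the discipline: every assumption the code makes about its inputs is stated and enforced, which is exactly what a walk-through or static analysis is trying to confirm.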

Dynamic testing involves low-level Module/Unit Testing, Integration Testing and System Testing.  Low-level testing may involve testing the interfaces of the module (black box testing) or detailed unit tests at function level (white box testing).

The test cases for both of these methods are generated from the module/unit design specifications, using techniques such as boundary analysis and error guessing.  Integration Testing ensures that all of the modules in the system have been successfully connected together, and System Testing is used to test the robustness of the system, ensuring that it performs as required under load, handles invalid inputs, and so on.
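
As an illustration of boundary analysis, a white box unit test for the hypothetical reading function sketched above might exercise the values either side of each limit.  This uses Python’s standard unittest module; the module name and boundaries are invented for the example:

```python
import unittest

# Hypothetical module containing the scale_sensor_reading sketch shown earlier.
from sensor_scaling import SensorRangeError, scale_sensor_reading

class TestScaleSensorReadingBoundaries(unittest.TestCase):
    def test_values_just_inside_the_valid_range_are_accepted(self):
        self.assertEqual(scale_sensor_reading(0, gain=0.5), 0.0)
        self.assertEqual(scale_sensor_reading(4095, gain=0.5), 2047.5)

    def test_values_just_outside_the_valid_range_are_rejected(self):
        with self.assertRaises(SensorRangeError):
            scale_sensor_reading(-1, gain=0.5)
        with self.assertRaises(SensorRangeError):
            scale_sensor_reading(4096, gain=0.5)

if __name__ == "__main__":
    unittest.main()
```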

To some extent these activities also provide a level of validation, since the system requirements are the starting point for the designs against which the tests are generated.

Another aspect of verification is configuration management.  This is the process of tracking and controlling changes in the software, and in the accompanying documentation, against a baseline, so you can be sure which version of the system a particular client is operating.  If a client has a problem or wants to change something, it’s much harder to help them if you don’t have an accurate record of how their particular system is configured and what documentation they have.
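
The detail of how a baseline is recorded depends on the configuration management tooling in use, but as a purely illustrative sketch, the kind of information held against each delivered system might look like this (all client names, components and versions are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    """A record of exactly what one client is running – illustrative only."""
    client: str
    system_version: str
    component_versions: dict = field(default_factory=dict)   # component name -> version
    document_revisions: dict = field(default_factory=dict)   # document id -> revision

# Hypothetical entry for one delivered system.
site_a = Baseline(
    client="Example Client (Site A)",
    system_version="2.3.1",
    component_versions={"data-logger": "1.8.0", "operator-ui": "2.3.1"},
    document_revisions={"User Manual": "C", "Test Specification": "B"},
)
```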

We can also use a traceability matrix.  We give each individual requirement a unique identifier and map these onto a matrix or table.  We then follow them all the way through the various development stages, so that when we get to the end we have a set of tests showing that each original requirement has been met.  This is immensely helpful at the acceptance stage, helping to prove we’ve done everything that was asked of us.
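
In its simplest form a traceability matrix is just a table keyed by requirement identifier.  A minimal sketch, with invented requirement, design and test identifiers, shows how coverage can be checked automatically:

```python
# Illustrative traceability matrix: requirement id -> the design items and tests that cover it.
traceability = {
    "REQ-001": {"design": ["DES-4.1"], "tests": ["ATP-07"]},
    "REQ-002": {"design": ["DES-4.2"], "tests": ["ATP-08", "ATP-09"]},
    "REQ-003": {"design": ["DES-5.0"], "tests": []},   # not yet covered
}

# At the end of the project every requirement should trace to at least one test.
uncovered = [req for req, links in traceability.items() if not links["tests"]]
if uncovered:
    print("Requirements without a covering test:", ", ".join(uncovered))
```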

What’s involved in validation?

Validation is basically testing.  You can test any number of things, but as we noted you’ll only tend to test those aspects that are relevant.  We might test that the system responds correctly to all kinds of inputs, performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments and achieves the general result its stakeholders desire.

Validation should always involve, at a minimum, testing against the requirements of the system.  The objective is to ensure that the system does what it is required to do; since the requirement specification defines what the system should do, it is used to create the Acceptance Test specification.  This covers both functional and non-functional (performance, safety, etc.) requirements.
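
As a sketch of how an acceptance test can be tied back to the requirement specification, the tests below check one invented functional requirement and one invented performance requirement against a hypothetical system interface (plant_interface and read_flow_rate are placeholders, not a real API):

```python
import time
import unittest

# Hypothetical interface to the system under test.
from plant_interface import read_flow_rate

class AcceptanceTests(unittest.TestCase):
    def test_req_010_flow_rate_is_reported_in_litres_per_second(self):
        # REQ-010 (functional, invented): the system shall report flow rate in l/s.
        reading = read_flow_rate()
        self.assertGreaterEqual(reading.value, 0.0)
        self.assertEqual(reading.units, "l/s")

    def test_req_050_flow_rate_is_returned_within_two_seconds(self):
        # REQ-050 (non-functional, invented): a reading shall be returned within 2 s.
        start = time.monotonic()
        read_flow_rate()
        self.assertLess(time.monotonic() - start, 2.0)

if __name__ == "__main__":
    unittest.main()
```

Naming each test after the requirement it verifies also feeds directly into the traceability matrix described earlier.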

Having good requirements helps to ensure the quality of this phase.  Acceptance testing may be carried out at the factory first, perhaps using simulators in place of external systems, and then repeated on site to test the system as installed and ready for commissioning.  This describes the classic waterfall approach; with agile or iterative approaches you do something similar, but keep repeating it as you complete each short cycle.

In some instances independent testing can be a good idea.  A third party may be asked to test our software, or a client may ask us to perform independent testing on a system we haven’t developed.

Finally, if you are following a quality process, someone needs to audit it.  On every project we appoint a quality representative, whose job is to make sure all the reviews and tests are being done, and done properly.  The client may also audit our processes before we start, and may conduct their own audit at the end (or appoint an independent auditor).

The last word

So, we rigorously manage the quality of our projects using these standards, processes and procedures.  The degree of stringency depends on what is appropriate for the nature of the job and the needs of the client.  But the most important thing is perhaps the most obvious – quality software is the result of employing quality people in the first place!  How do we do that?  Read our earlier post on human resources.