Agile and Critical Systems


Whilst researching this author’s first article, one claim came up time and again – Agile is not suitable for critical systems and is rarely, if ever, used in this area.  Countless experts made the point that the Waterfall methodology, with its emphasis on documentation, up-front planning and analysis, and defined phases, is a much better option.  This made the author curious.  What is it about Agile that makes it a bad option?  Is there anything that can be done to improve its chances of being taken seriously as a methodology for critical systems?

What is a critical system?

Before starting to look into how Agile could (or could not) be used, it is prudent to define exactly what a ‘critical system’ is.  A critical system, also known as a ‘life-critical’ or ‘safety-critical’ system, is a system in which failure is likely to result in the loss of life or environmental damage.  Failure can be considered to include both catastrophic failure of the system and mere malfunctions.  Examples of critical systems include medical appliances, nuclear reactors, air traffic control systems and the airbag system in a car.  The fields in which critical systems are employed are wide-ranging, from the examples given in medicine, energy and transport to spaceflight and recreation.

What is meant by ‘Agile’?

‘Agile’ is generally used as a catch-all term for a particular family of software development methodologies.  These methodologies all use the ‘Agile manifesto’ as a starting point but interpret and implement its philosophy in differing ways:

We are uncovering better ways of developing software by doing it and helping others do it.  Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Agile methods include Extreme Programming (XP), Scrum and Crystal.  Whilst these methodologies implement the Agile Manifesto in different ways, they share the same underlying principles of teamwork, quality, communication and adaptation.

Current practice in the development of critical systems

As a result of the potential consequences of a failure in a critical system, there is a requirement for very high reliability.  This requirement, and the need to demonstrate a high level of confidence, has led to regulatory standards becoming common within industries that utilise critical systems.  For example, the development of software within the aviation industry in Europe must adhere to the ED-12C standard from the European Organisation for Civil Aviation Equipment (EUROCAE), referred to as DO-178C in the US.  Similarly, IEC 62304, from the International Electrotechnical Commission (IEC), governs the development of medical device software.  However, whilst these standards define strict frameworks for the development process, they are not prescriptive.  The organisations or teams involved in developing critical systems are free to choose their own methodology, so long as the activities and tasks specified by the standards are implemented.

This need to meet standards has led to heavyweight methodologies, such as Waterfall, dominating software development in critical systems.  These processes, with their focus on up-front analysis and design and their defined phases, are viewed as better suited to satisfying the standards.  The standards also generally require that records are kept (to ensure traceability) and again, with their emphasis on documentation, heavyweight development processes are a natural choice.

In addition, the single, large implementation at the end of a heavyweight development process allows for safety certification to occur once on the completed critical system.

Why use Agile?

No existing Agile methodology could safely be used unaltered in the development of critical systems.  However, there is no reason why certain Agile practices could not be selected and incorporated into working practices.

Several of Agile’s principles, or their consequences, would be a natural fit in the development of critical systems.  Quality is the central factor in critical systems, so Agile’s focus on improved quality would only be an asset.

Agile’s focus on developing and testing iteratively means that problems are identified much earlier than would be the case under a heavyweight methodology.  As a result, the risk of defects in the end system, and their potential consequences, is reduced.  It also reduces the risk of human error in a long and complex testing phase at the end of the project.

The Agile practice of continuous integration also has benefits for critical systems development.  Integration occurs much earlier in the project lifecycle and involves smaller, less complex components.  This allows immediate feedback to the developer and rapid corrective action to be taken.  Continuous integration also reduces the possibility of reaching the end of a project only to discover a fundamental issue with the software developed.
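As an illustration, a continuous integration gate can be as simple as a script that runs a set of checks on every commit and reports failures immediately.  The sketch below is hypothetical; the dose-limit component and check names are invented for illustration, not taken from any real system.

```python
# Hypothetical component: a dose-limit check of the kind a medical
# device project might integrate early.
def within_dose_limit(dose_mg, limit_mg=50.0):
    return 0.0 <= dose_mg <= limit_mg

def run_integration_checks(checks):
    """Run every (name, check) pair and return the names of any that
    fail, so a commit that breaks integration is flagged immediately."""
    return [name for name, check in checks if not check()]

checks = [
    ("accepts a normal dose", lambda: within_dose_limit(10.0)),
    ("rejects an overdose", lambda: not within_dose_limit(80.0)),
    ("rejects a negative dose", lambda: not within_dose_limit(-1.0)),
]

print(run_integration_checks(checks))  # an empty list means the gate passes
```

Run on every commit, an empty failure list lets the build proceed; any name in the list blocks integration until it is fixed.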

Any downsides?

Some Agile practices would have a detrimental impact on the development of safety-critical systems and would need to be discarded.

Firstly, documentation is a crucial element of critical systems development, which means that Agile’s preference for minimal documentation is not a good fit.  The certification regime demands that each decision and design is ‘traceable’, and this requires extensive documentation.  As the maintenance of critical systems is almost as important as their initial development, documentation is also an important tool when maintaining them.

There is also the issue that the development of critical systems can take many years and it is likely that staff will move on and new staff will join the development team.  Again, documentation is crucial in dealing with this.
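The traceability demanded by certification can be pictured as a mapping from each requirement to the design decisions and tests that satisfy it, so an auditor (or a new team member) can follow any requirement through to its verification evidence.  This is a minimal sketch, not any standard’s prescribed format, and every identifier in it is invented:

```python
# Requirement -> linked design artefacts and tests (all IDs invented).
trace = {
    "REQ-001: airbag deploys within 30 ms of impact signal": {
        "design": ["DES-014: dedicated deployment interrupt"],
        "tests": ["TST-101: deployment latency bench test"],
    },
    "REQ-002: system self-test on power-up": {
        "design": ["DES-020: boot-time diagnostic routine"],
        "tests": [],  # no linked test: a gap an audit would flag
    },
}

def untraced_requirements(matrix):
    """Return requirements with no linked test, i.e. certification gaps."""
    return [req for req, links in matrix.items() if not links["tests"]]

print(untraced_requirements(trace))
```

In real projects this matrix lives in requirements-management tooling rather than a dictionary, but the principle is the same: every requirement must end in evidence.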

There remains a need to have a large part of the design completed upfront.  This is required in order to give fixed requirements so that certification and safety analysis can be carried out early in the project.  This would mean that a critical system project that utilised Agile methods would still require the dedicated upfront design period to produce architectural models and functional requirements.

Whilst iterations would be possible and would bring benefits, they would have to differ slightly from traditional Agile iterations.  Each iteration would need to produce evidence that its output was fundamentally safe, for certification purposes.  However, as the iterations in a critical systems project would not contain much, if any, design or analysis work, adding safety certification to the iteration’s acceptance criteria need not impact their frequency.

Refactoring is another Agile practice that would not fit naturally with the development of critical systems.  If code in a critical system is refactored, it has the potential to invalidate previous certification or safety analysis.  This would cause extensive rework and would need to be avoided whenever possible.
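One common mitigation is to pin the certified behaviour with characterisation checks before any refactor, so that an unintended change fails loudly rather than silently invalidating earlier analysis.  The function and values below are invented purely for illustration:

```python
def braking_distance(speed_kmh):
    """Original, certified version (formula invented for illustration)."""
    return (speed_kmh / 10.0) ** 2

def braking_distance_refactored(speed_kmh):
    """Candidate refactor that should preserve observable behaviour."""
    v = speed_kmh / 10.0
    return v * v

def behaviour_preserved(old, new, inputs):
    """True if the refactor matches the certified behaviour on all inputs."""
    return all(abs(old(x) - new(x)) < 1e-9 for x in inputs)

print(behaviour_preserved(braking_distance, braking_distance_refactored,
                          [0, 30, 50, 130]))
```

Such checks do not remove the need for re-certification, but they catch behavioural drift at the moment it is introduced rather than at the next audit.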


Overall, despite the added complexity and demands of critical system development, it appears that Agile methods could be adopted successfully by the industry.  If the practices are chosen carefully, the benefits would be tangible and, despite the concerns, would actually increase the chance of developing a stable and useful critical system.  The real challenge lies in choosing the practices to adopt and creating an environment within the organisation that ensures their successful integration.


Sommerville, I.  (2007).  Software Engineering.  Harlow, England: Addison Wesley

Ge, X., Paige, R. F., and McDermid, J. A. (2010). An iterative approach for development of safety-critical software and safety arguments. In Proceedings of the 2010 Agile Conference, AGILE ’10, pages 35–43, Washington, DC, USA. IEEE Computer Society.

Sidky, A. and Arthur, J. (2007). Determining the applicability of agile practices to mission and life-critical systems. In Proceedings of the 31st IEEE Software Engineering Workshop, SEW ’07, pages 3–12, Washington, DC, USA. IEEE Computer Society.

Heimdahl, M. P. E. (2007). Safety and software intensive systems: Challenges old and new. In 2007 Future of Software Engineering, FOSE ’07, pages 137–152, Washington, DC, USA. IEEE Computer Society.

Lindvall, M., Basili, V. R., Boehm, B. W., Costa, P., Dangle, K., Shull, F., Tesoriero, R., Williams, L. A., and Zelkowitz, M. V. (2002). Empirical findings in agile methods. In Proceedings of the Second XP Universe and First Agile Universe Conference on Extreme Programming and Agile Methods – XP/Agile Universe 2002, pages 197–207, London, UK, UK. Springer-Verlag.

Cawley, O., Wang, X., and Richardson, I. (2010). Lean/agile software development methodologies in regulated environments – state of the art. In Abrahamsson, P. and Oza, N. V., editors, LESS, volume 65 of Lecture Notes in Business Information Processing, pages 31–36. Springer.

Douglass, B.P., and Ekas, L. (2012). Adopting agile methods for safety-critical systems development.  IBM.

Turk, D., France, R., and Rumpe, B. (2002). Limitations of agile software processes. In Proceedings of the Third International Conference on Extreme Programming and Flexible Processes in Software Engineering (XP2002), pages 43–46. Springer-Verlag.

Response to “Why testing is not search of bugs?”


This post is a response to s1368322’s post “Why Testing Is Not Search Of Bugs“. The post in question discusses testers seemingly as a different entity from developers. As such, this response is written on that basis.

In their post, the author makes the case that the main purpose of testing is to miss as few bugs as possible. Whilst I agree with this general point, I would adjust it slightly: the aim should be to miss no bugs. I would also expand it to say that testing exists to ensure that developed software meets its requirements and to provide a level of assurance to the client or user.

However, I believe that one of the premises that the post is built on, that there is a difference in testing approach depending on the size of the project, is wrong.

Small-scale vs. Large-scale – is there really a difference?

The author makes the case that the testing procedures and objectives are different between ‘large-scale’ and ‘small-scale’ projects.  This is an argument that I have been unable to agree with.  For any development with a formal testing phase, the same procedures should take place with the same ultimate objectives.  In both cases, the testing phase and subsequent test cases should be designed to cover all available functionality within the system.  The ultimate aim of any testing phase is to provide a development that is as defect-free and reliable as is possible.


The author asserts that, for small-scale projects, the “vast majority of testers are convinced that testing” is a “search of bugs”. Again, I would argue that this is not necessarily the case. It may indeed be true of developers carrying out unit testing. However, I believe that if dedicated testers are used at the system/acceptance testing phase, then most of these testers (or prospective users) are aware that their main role is to ensure that the software released meets its requirements and works as expected.

The author goes on to argue that testers like to find defects as it is a “visual representation of work done by them”. I would agree that there is a sense of satisfaction, on the part of the tester, in finding a defect, particularly an important one. However, if a development were to pass through testing without a single bug being found, most testers would not see this as a failure on their part or an indicator that they had not been working effectively. As long as no defects passed through the testing phase unidentified, I believe that the majority of testers would see this as a successful test phase. It is possible for test phases to be considered outstanding successes with very few defects identified. I would argue that the success of a testing phase is more accurately measured in terms of code and functionality coverage and the proportion of the test cases completed within the testing phase.
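To make that measure concrete, here is a hypothetical sketch of the two figures suggested above – how much of the functionality the test cases touched, and how many of the planned cases were actually run.  All names and numbers are invented:

```python
def phase_metrics(cases):
    """cases: list of (functional_area, executed) pairs.
    Returns the share of areas touched and of cases actually run."""
    areas = {area for area, _ in cases}
    covered = {area for area, executed in cases if executed}
    executed = sum(1 for _, done in cases if done)
    return {
        "area_coverage": len(covered) / len(areas),
        "completion": executed / len(cases),
    }

cases = [
    ("login", True), ("login", True),
    ("payments", True), ("payments", False),
    ("reporting", False),
]

# 3 of 5 cases run; 2 of 3 functional areas touched.
print(phase_metrics(cases))
```

Neither figure mentions bug counts: a phase can score perfectly here while finding no defects at all, which matches the argument above.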

The author then goes on to suggest that the ‘least stable’ parts of a development would be tested first. Again, unfortunately, I disagree. When a development is passed to the testing phase, there should be no parts of the system considered less stable than others, particularly on smaller projects. It may be true that certain areas of the test cases are prioritised, but this would tend to be based on the importance of the functionality or the complexity of the code rather than on which areas are less stable (i.e., risk-based testing).


When the author moves on to testing in large-scale projects, I find more to agree with. In fact, some of the assertions reflect the arguments made above for small-scale developments, reinforcing the belief that ‘testing is testing’ irrespective of how large or small a development is.

However, I disagree with the statement “Anyway, there can be a problem with existing functionality, and in most cases, this functionality is not tested properly”. Whilst it would be foolish to suggest that the existing software being integrated into will always be tested fully, in most projects that I have worked on or been aware of, there was, when the need arose, a dedicated period of regression testing to ensure that no defects had been introduced into the pre-existing functionality.

The author also states that most of the testing will be completed on parameters that can be expected in normal use. This is a reasonable statement. However, the claim is made that testing non-standard scenarios before all basic features are completed can be inefficient and waste time. This can be true, but only within the testing of components. As an example, if a financial software package were being tested, whilst non-standard inputs for a tax calculation may not be tested first, it would make sense to test them whilst testing that tax calculation rather than moving on to another component and coming back when everything else had been completed.
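In practice the tax example might look like this: standard and non-standard inputs for one component exercised together, rather than deferring the edge cases to the end of the phase.  The tax rule here is invented purely to stand in for the real calculation:

```python
def tax_due(income):
    """Illustrative rule: 20% tax on income above a 10,000 threshold."""
    if income < 0:
        raise ValueError("income cannot be negative")
    return round(max(0.0, income - 10000) * 0.2, 2)

# Standard scenario first.
assert tax_due(30000) == 4000.0

# Non-standard scenarios, tested while we are already in this component.
assert tax_due(0) == 0.0        # no income at all
assert tax_due(10000) == 0.0    # exactly at the threshold
try:
    tax_due(-5)
except ValueError:
    pass                        # negative input correctly rejected
```

Grouping the edge cases with the component they belong to means the tester’s context is fresh and a defect is localised immediately.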

Changing the approach?

In the final section, on how to change the approach, there is much to agree with.  An effective test phase depends on well-specified and well-defined test cases which cover all aspects of the functionality and reflect how the software will be used after release.  Documentation of test cases is also important for communication within the team and with developers and management, and as reassurance for the client.  It is also important to reflect on the test phase and learn from any mistakes or omissions.


As discussed, my greatest disagreement with this post is the assertion that there is a fundamental difference between the testing objectives and procedures of large-scale and small-scale projects.  The author is right that the testing approach needs attention, but I would argue that the practices they propose are already in use in most competent test teams.

Does Waterfall deserve its bad press?

In recent years, there has been a tendency to equate the Waterfall methodology with ‘evil’ and the Agile variants as ‘good’.  Having worked on successful projects under Waterfall (and Agile for that matter) but also some Waterfall projects that failed spectacularly, I would argue that Waterfall still has its place in modern software development.  This post will make the case that disregarding Waterfall as an option for future projects is potentially setting more software projects up for failure than need be the case.

But wait, I’ve heard terrible things about Waterfall

There can be no disputing that Waterfall is not a sensible choice of methodology for all software projects.  It can be rigid and dogmatic and, in many cases, can lead to the delivered system no longer being what is required.  Waterfall can make changing something already done extremely expensive and time-consuming.  It is effectively useless if the client or user is not clear on exactly what it is that they want.  Waterfall is also not suitable for projects where development needs to start quickly or where the intention is to release incremental improvements to realise benefits early.  However, for the right project, Waterfall can provide a stable and sensible framework for the project’s management and execution.

So, what is it good for?

Some might say, “Absolutely nothing”.  However, they are mistaken.  Waterfall projects generally follow the same pattern of linear phases – Requirements definition, Design, Development, Testing and Implementation.  This linear, well-defined structure makes Waterfall a relatively easy methodology to implement.  The discrete phases also give several advantages for the right project.  From a project management point of view, the phases make it easier to provide more accurate estimates of timescales and budgets (although this is not fool-proof by any means).  With the bulk of the research and analysis completed in advance to define the requirements, the project manager (and developer) can estimate the time required to develop and test each requirement more accurately.  This, in turn, can provide more reliable milestone and release dates.

The phases, and the output produced at the end of each phase, also provide natural points and evidence for reflection and progress measurement of the project.  These milestones allow the manager to compare the current situation to that planned and have potential slippage highlighted, and remedial action taken, as soon as possible.

The Waterfall method also allows a team to cope more effectively with resourcing issues such as a team member leaving the company or moving to another project.  This is a common risk for teams within large companies.  These companies tend to have several projects/clients at the same time, so resources can be regularly reallocated to other projects depending on the current priorities.  As a result of Waterfall’s insistence on documenting each stage of the process and the project knowledge (e.g., Business Requirements Document, System Design Document), it is easier for a new developer (or analyst, architect or tester for that matter) to pick up work left by a departing team member, as the knowledge is retained within the team.  Within teams, Waterfall also makes it easier to accommodate specialists or inexperience than some methodologies that demand team members be knowledgeable about all aspects of the development.  Under Waterfall, team members are generally responsible for one area, such as requirements, user interface or testing.

Finally, Waterfall is one of the most effective methodologies for distributed teams.  As a result of the documentation produced, the defined phases and individual responsibility for tasks, the necessity for continual communication within the team is reduced significantly.

The model also provides advantages to the client or user.  Firstly, the upfront analysis phases encourage projects to define a clear plan and vision prior to development.  This helps ensure that the focus is on the right areas and that the architecture is right before development starts. A further consequence of this focus on defining requirements and system design upfront is that it can help with the quality of the development because of the time spent analysing the development and any potential risks.

Waterfall can also be helpful to clients because they are not required to provide resources or staff throughout the project, as Agile does.  Under Waterfall, before testing and implementation, the project team only needs access to clients/users during the early stages of the project and at the defined milestones.  This is relevant as many clients are reluctant to contract a project team to develop software only to then have to supply that team with the client’s own staff, thereby increasing the cost of the project to the client.  There is also the relative security of potentially more accurate budget and time estimates.  Many clients are reluctant to commit to (and assume the risk of) projects where accurate estimates cannot be given in advance and the risk of cost overrun is left open.

So, what type of projects are right for Waterfall?

Most obviously, projects where the requirements are stable and unlikely to change in any meaningful way are good candidates for Waterfall.  One example may be a development carried out to ensure a company remains compliant with new legislation; in this case, the legislation, and the resulting requirements, are likely to be set in stone.  The up-front analysis and design also make Waterfall an option for projects that are large and complex.  There is a risk with some other methodologies that the ‘bigger picture’, particularly if complex, gets lost amongst the focus on requirements that can be delivered in iterative phases.  Finally, Waterfall is also a natural option for projects to develop critical systems.  Critical systems often require each step of the project to be reviewed, analysed and/or approved by management or client experts.  Waterfall’s phases, and their corresponding outputs, provide natural milestones for these actions to be carried out.

This post is not arguing that Waterfall is a panacea for all software projects.  It is evidently not.  As discussed, the model has numerous flaws and, for the wrong project, can prove disastrous.  However, Waterfall remains a relevant project methodology in today’s environment.  The current trend of dismissing it as an option for software projects may lead to more software project failures than would otherwise be the case if it was considered for the right projects.


Clark, A. (2014, February).  Software Development Methodologies.  SAPM.  Lecture conducted from University of Edinburgh.

Sommerville, I.  (2007).  Software Engineering.  Harlow, England: Addison Wesley

Mikoluk, K. (2013).  Agile vs Waterfall: Evaluating The Pros And Cons.  Retrieved 9 Feb 2014, from

Haughey, D. (2009).  Waterfall v Agile: How Should I Approach My Software Development Project?  Retrieved 9 Feb 2014 from

‘Melonfire’ (2006).  Understanding the Pros and Cons of the Waterfall Model of Software Development.  Retrieved 9 Feb 2014 from