Software failures: reasonable and even desirable!

Since technology is continuously evolving and a great variety of software testing approaches can be applied at different stages of the software development process, one would expect failures in software projects to be limited and easily avoided. Numerous studies and statistics reveal that this is not the case. Software project failures continue to occur. In some cases they do not seem significant, but in others they are quite serious and lead to the loss of huge amounts of money. Every year there is a long list of companies that fail in the development of their products, along with the investments that are lost in each case. All these examples have led to the creation of lists of the most common software failures, the most common reasons for their occurrence and, finally, tips and advice on how to reduce or even avoid the majority of them.

How is software failure defined and when is a software project considered a failure?
A software failure occurs when a software system no longer complies with the specifications that were initially defined for it, which means that it does not exhibit the expected behaviour and that this deviation is externally observable. Bugs or faults in a software system tend to lead to errors (which occur within the bounds of the system and are therefore hard to observe), and errors in turn may cause failures. Faults, errors and failures follow a cyclic pattern in a software system. However, there are cases in which an error is trapped and repaired by the system, or is of a type that does not give rise to a failure.
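The fault–error–failure chain can be illustrated with a small, hypothetical Python example: the fault is a wrong initialisation, the error is the incorrect internal state it produces, and a failure is observed only for some inputs:

```python
def max_of(values):
    """Return the largest element of a non-empty list."""
    # Fault: `largest` should start at values[0]; starting at 0 is a bug.
    largest = 0
    for v in values:          # Error: for all-negative input, `largest`
        if v > largest:       # stays 0, an incorrect internal state.
            largest = v
    return largest            # Failure: the wrong value becomes observable.

# The fault is always present, but a failure only occurs for some inputs:
print(max_of([3, 7, 2]))     # 7  -- correct by luck; the error is masked
print(max_of([-5, -2, -9]))  # 0  -- the error surfaces as a failure
```

This also shows why an ever-present fault can stay hidden: as long as no input triggers the erroneous state, no failure is ever observed.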

The definition of a software project as a failure varies and seems to be quite a subjective issue. Most of the time, it depends on the type of the project and the standards that the company producing it has set for it. A commercial project that exceeds the pre-estimated budget or misses the predefined deadline could be seen as a failure. On the other hand, an open-source project could be considered a failure if it does not succeed in creating a community around it that takes care of its maintenance and evolution. Other reasons to define a project as a failure are related to the customers and their needs (e.g. the project does not satisfy the customers’ requirements), the team building the project (e.g. the members of the team fail to continue the work already done) and many other factors that will be discussed in a later section of this article.

Which are the most common causes of software project failure?
The issues that have been recorded so far as reasons contributing to failures in software projects are various and can be divided into two broad categories: technical and social. The technical issues are mostly related to the lack of up-to-date estimating techniques and to the fact that developers often fail to plan for possible growth or changes in consumers’ requirements. The social issues, on the other hand, are associated with the attempt to adhere to a plan and predefined deadlines for the construction of the software project, resulting in a lack of attention to detail and inaccurate results [4].

A list with some of the most common software failure reasons is presented below [1], [2], [4]:

  • Absence or bad definition of system requirements: The existence of a Software Requirements Specification (SRS) is fundamental in the software development process. System specifications that are not defined in a thorough and precise way can cause misunderstandings and lead to bad implementation. The accuracy of the SRS is very important: it can save much time and eliminate problems that could arise during later steps of the development process, such as the addition of new features or changes to existing ones.

  • Unrealistic expectations and project goals: There is a close relation between this factor and others associated with the pressure caused by imminent deadlines and with the skills of the software development team working on a project. In some cases, project managers fail to realise that it is not feasible to implement all the ideas for a project within the specified time limits, especially when some of them are quite complex. Moreover, they often overestimate the abilities and skills of the members working on the project, who may, for instance, be young and inexperienced. The lack of a well-organised plan that takes the correlation among these issues into account can easily lead to project failure. Unrealistic expectations can also be a result of the inexperience of the project managers themselves.

  • Absence or bad documentation: Different types of documentation are required during the various phases of the software development process. Adequate and up-to-date documentation is crucial, as it helps developers think about issues related to the project before actually starting to implement it and reduces the possibility of a failure.

  • Poor communication among developers, customers and final users: A software project cannot be successful without constant and meaningful communication among these groups of people. Users’ needs and customers’ requirements and expectations change all the time, and developers should always take this new information into consideration during the development process.

  • Inadequate resources or use of inappropriate ones: Software development teams often tend to use resources and tools that are obsolete and, as a result, not the most suitable for the development of modern software projects. This can cause unexpected and undesirable results and, in the worst case, failure of the whole project. Software manufacturers should be responsible for keeping themselves informed about technology changes and should be ready to exploit their advantages in order to improve their projects. In other cases, a bad estimate of the required resources may be made. This makes developing the software at the expected and predefined level extremely difficult and sometimes infeasible.

  • Absence or bad risk management: Risks in software projects refer to uncertain conditions that could affect a project in an undesirable way, e.g. inadequate or badly written requirement specifications, use of unsuitable technology and tools, etc. Risk management is another crucial part of the software development process; it should be precise, carried out from the beginning to the end, and kept up to date. In that way, many problems can be identified and solved before they become too serious and lead to failure.

Of course, a little research and reading on the most popular project failures recorded during the last decades can reveal many more reasons that make software fail [5]. My intention in this article is just to give a brief list of some of the most commonly encountered ones.

Are there any precaution steps that could be followed to prevent software from failing?

When things go wrong and people fail to achieve their goals, the lessons learnt during their attempt constitute the only positive, and the most important, outcome. Analysis of these lessons, and of feedback given either by small software development teams or larger companies, leads to a list of suggestions considered good enough to prevent failure and to contribute to the success of a software project. Some of them are the following [3]:

  • Careful consideration of user input and feedback during all the stages of the software development process

  • Setting realistic goals and preparing detailed plans and estimates of the cost and time that will possibly be required for the development of the software

  • Choice of the right team by comparing the skills and knowledge of its members with what is needed for the right implementation of the project

  • Constant update of documents related to requirement and risk management according to new users’ needs that may arise

  • Provision of the right communication tools, so that communication between developers and consumers is preserved throughout the development procedure

These and many other actions can help reduce the rate of software failure and somehow ensure the success of a project.

Many software manufacturers have been asked about their products’ failures and whether they regret trying hard or spending much time and money on them. It is noticeable that the majority of them mention that they have no regrets and that it is worth taking some risks, as they have learnt a lot from their mistakes and have gained much knowledge and experience that they can use in their future plans [5].

After explaining all the above, one reasonable question would be: since the reasons that contribute to software failures and the ways of preventing them are known, why do software manufacturers tend not to apply them and let their projects fail?

The answer is quite complicated. Obviously, the goal of every software development team is to produce a successful product. But it is not always easy to take into account all the factors that can lead to a failure. Sometimes there is knowledge, but a lack of experience in how to apply it correctly. In other cases, there is both knowledge and experience, but the time pressure imposed by deadlines or the limits of the available budget lead to compromises on the quality of the software produced [1].

It seems quite reasonable for software failures to continue to occur at some level, but the majority of them could be avoided using the knowledge that already exists. The real challenge is the occurrence of true failures that have never happened before and for which there are no known methodologies for avoiding them. In this context failures are somehow “desirable”, because only such situations can help software manufacturers improve and make technical and economic progress [2].

I suppose that since there is much knowledge on how to produce successful software, it is time to take advantage of it and apply it in practice [2]. And even if a failure occurs, we should be ready to work on it and try to find possible solutions. But when is the right time to stop trying to fix a failure and cancel a project? Should we consider only the relation between the time and cost spent on it and the value that it actually returns? Or are there other factors that should also be taken into account? In my opinion, this is one of the most difficult decisions that need to be made when software fails.

[1] Cohen Shwartz Oren, “Why Software Projects Tend to Fail”, September 2007
[2] Robert N. Charette, “Why Software Fails”, September 2005
[3] “Why do Software Projects fail?”
[4] Capers Jones, “Social and Technical Reasons for Software Project Failures”, June 2006
[5] “Lessons learned from 13 failed software products”, May 2010

To Unit Test or not to Unit Test? That is the question!

This is a response article to “Software testing: an overlooked process during software development”[8] by s1222207.

Compared to past years, one could say that there has been an increase in both the size of software projects and their complexity. The reason is that needs have changed and technology has evolved, so the software created has to conform to these new conditions. Moreover, software projects have grown far beyond the capacity of a single individual, and the formation of a team to work on them appears to be more than necessary [1]. All these factors make the notion of testing, and its introduction into the software development process, more crucial than ever. There are various approaches to testing and different opinions on the most suitable time to apply it: before the implementation of the actual software, after it, or even during the development process. The choice of approach should be based on the nature of the software project and the needs that arise each time.

Analysis and general comments on the “Software testing: an overlooked process during software development” article

The author introduces the idea of testing and its importance by showing, through a real-life example, the bad results caused by its absence. The article gives the definition of software testing, distinguishing its use for validation from its use for verification of a system. At this point it is quite important to mention the two basic approaches to software testing, which are black box and white box testing.

  • Black box Testing: It is a technique that does not take into account the internal structure or mechanism of a system. It just focuses on the output generated by the system, given a specific input and specific execution parameters. It is also known as functional testing and is mostly used for validation, since its primary function is to check that the system under testing has the expected behaviour [2].
  • White box Testing: It is a technique that focuses on the internal structure and mechanism of a system. It is also known as structural testing and is mostly used for verification, which means that its primary function is to check that the system is built correctly and conforms to its specification [2].
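The difference between the two approaches can be sketched with a hypothetical Python function: the black-box assertions use only inputs and expected outputs, while the white-box assertions are chosen with the internal structure in mind, so that every branch is exercised.

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: only inputs and expected outputs, no knowledge of internals.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(5, 3, 4) == "scalene"

# White-box tests: written against the internal structure, exercising every
# comparison in the `isosceles` condition separately.
assert classify_triangle(2, 2, 3) == "isosceles"   # a == b branch
assert classify_triangle(3, 2, 2) == "isosceles"   # b == c branch
assert classify_triangle(2, 3, 2) == "isosceles"   # a == c branch
```

Note that a pure black-box suite might never try the `a == c` case, since from the outside all isosceles triangles look alike; only knowledge of the implementation suggests it is a distinct path worth covering.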

The article goes on to demonstrate some of the benefits gained when testing is used. The author then presents a list of different types of tests and analyses unit testing, explaining its use and mentioning the advantages and disadvantages that arise from it. This list could be enhanced with many other types of testing, such as Functional Testing, Performance Testing, Usability Testing, Regression Testing and Beta Testing [2]. The choice of testing method depends on many factors: the type and size of the software project, the available budget, etc. The most important thing to bear in mind is that the selection should satisfy the needs and requirements imposed by the project in the best possible way.

My personal experience with software testing is not that long. In fact, I started learning the basic ideas and steps of test-driven development two years ago during my industrial placement, but I only became really interested in testing when I was prompted to use it in practice during the “Software Testing” course provided by the University of Edinburgh. Based on the knowledge I have gained so far, I believe that the article I am analysing is generally well written, has a nice structure and presents the importance of software testing, and more specifically the benefits gained by the use of unit testing, in a clear and straightforward way. However, I would like to provide more details on unit testing regarding the difficulties encountered during its use, the reasons why someone should use it, and whether it should be chosen over other testing techniques.

Is it really difficult to use unit testing?

The actual process of writing tests for individual units of source code is not difficult, or at least only reasonably and acceptably so. The factors that make this process seem harder than it really is are basically two. The first is related to the nature of unit testing. The goal of this technique is to verify that the requirements imposed by a customer are being met. This is easy to say, but most of the time it is quite hard to identify what the requirements are and which of them are worth testing [3].

The second factor is related to the form of the code and how well it is written and structured. Writing unit tests for “spaghetti” code can be really frustrating. Therefore, if the process of unit testing becomes really hard, that might be an indication that the existing code should be refactored [4]. In general, the code should be divided into smaller fragments that do not have multiple responsibilities. In this way, unit tests can exercise each fragment separately in a very efficient manner.
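As a hypothetical Python illustration, a function that mixes parsing, calculation and formatting is awkward to unit test as a whole; after splitting it into single-responsibility fragments, each one can be tested in isolation. All names here are invented for the example.

```python
# Before: parsing, calculation and formatting tangled in one function.
# Testing the arithmetic alone is impossible without going through strings.
def report(csv_line):
    prices = [float(p) for p in csv_line.split(",")]
    avg = sum(prices) / len(prices)
    return f"average price: {avg:.2f}"

# After refactoring: each single-responsibility fragment is testable alone.
def parse_prices(csv_line):
    """Turn a comma-separated line into a list of numbers."""
    return [float(p) for p in csv_line.split(",")]

def average(prices):
    """Compute the mean of a non-empty list of numbers."""
    return sum(prices) / len(prices)

def format_report(avg):
    """Render the result for display."""
    return f"average price: {avg:.2f}"

# Each fragment gets its own small, focused unit test:
assert parse_prices("1.5,2.5") == [1.5, 2.5]
assert average([1.5, 2.5]) == 2.0
assert format_report(2.0) == "average price: 2.00"
```

When a test for `average` fails, the bug is known to be in the arithmetic, not in parsing or formatting, which is exactly the localisation benefit unit testing promises.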

Why bother writing unit tests?

It is true that the creation of unit tests requires additional effort, time, knowledge and skills. When a system is evaluated, most of the attention is given to validation and less effort is put into verification. Unit testing is a standard part of the verification process, and that is one of the reasons that make it so important and useful [7].

There are numerous other reasons for which unit testing should be used and embedded in the software development process [6][7]:

  • Unit tests are useful for test-driven development. They are written before the source code of a project. This implies that the code written afterwards should comply with them, making the developer think carefully about how the code should be designed and ensuring in this way that it is correct as it is written [7].
  • Unit tests are vital to regression testing. When a new feature is added to an already existing software project, new code is added. In many cases, this can lead to changes that break things that were working perfectly before the new addition and for that reason regression testing is necessary. Unit tests can help in this direction especially when automation of regression testing is required [7].
  • Problems can be identified at an early stage of development. Sources of even very simple mistakes and failures, which most of the time are obvious but for some reason overlooked, can be easily avoided by running some unit tests. The sooner they are discovered and fixed, the better for the rest of the development process [6][7].
  • Unit tests can be used as a kind of documentation. Unit tests are closely related to the design process of a system as a whole, and more specifically of the components it is comprised of. Even if there is no formal documentation about a specific part of the system, the unit tests written for it can reveal much information about how it is designed and how it is expected to work [6][7].
  • Unit tests can make integration testing easier: Integration testing checks an entire subsystem and ensures that a set of components work well together [5]. If unit testing has checked the individual components for correctness, then integration testing will perform its job in a much easier and faster way [6].
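The first two points above can be sketched with Python’s built-in unittest module; the `slugify` function and the test names are invented for illustration.

```python
import unittest

def slugify(title):
    """Turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Written first (test-driven): it pins down the expected behaviour
    # before the implementation exists.
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    # Kept around as a regression test: if a later change breaks the
    # handling of repeated whitespace, this test fails immediately.
    def test_collapses_repeated_spaces(self):
        self.assertEqual(slugify("a   b"), "a-b")

if __name__ == "__main__":
    # argv is fixed so the runner ignores any command-line arguments;
    # exit=False keeps the interpreter alive after the tests run.
    unittest.main(argv=["unit-tests"], exit=False)
```

Because the tests read as small executable statements of intent, they double as the documentation mentioned above: a newcomer can learn what `slugify` is supposed to do by reading `TestSlugify` alone.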

Should I choose it over other testing techniques?

Some of us may wonder why one should use unit testing when there are other techniques, like integration testing that checks subsystems or functional testing that checks a system as a whole. There are two main reasons, which have to do with the speed of the tests and the speed of recovery. Functional tests tend to be slow to run, so one can use unit tests at compile time as an early sanity check for the code [5]. Moreover, unit tests can pinpoint the locations of observed bugs more precisely than functional tests do, making it easier for developers to act fast and fix them [5].

In general, as far as testing techniques are concerned, there is no point in trying to find the best technique among all the existing ones. Each testing approach serves different needs and, in most cases, a combination of them seems to be the best thing to do. Exploiting the advantages offered by each level of testing can ensure correctness in each fragment of a system as well as in the system as a whole [5].

Finally, to Unit Test or not to Unit Test?

After the above analysis, I think there is no doubt that software testing is crucial and should be embedded in the development process of software systems. More specifically, unit testing offers many benefits and can save developers much time and trouble. However, we should always bear in mind that every tool is valuable and reveals its benefits only when used correctly and wisely.

So my advice is: to Unit test and you will not regret it!

[1] Adam Petersen and Seweryn Habdank-Wojewodzki, “Development Fuel: software testing in the large”, July 2012
[2] Rehman Zafar, “What is software testing? What are the different types of testing?”, March 2012
[3] Marc Clifton, “Advanced Unit Testing, Part I – Overview”, September 2003
[5] “Software Testing: A culture of Quality”
[6] “Benefits and Drawbacks of Unit Testing”, September 2012
[7] Chris Cannam, “Unit Testing: Why bother?”
[8] SAPM Course Blog, “Software testing: an overlooked process during software development”, February 2014


If you are not using a version control system, start doing it NOW!

It all started two years ago, when I was an undergraduate student doing my industrial placement at a start-up company. I worked there as a front-end web developer and became part of a team of 8 people: developers, designers, community managers, etc. When I was asked about my background knowledge and skills, the tech lead of the team informed me that I had to become familiar with the version control tool they were using, Git, hosted on GitHub. No problem, but wait. What is a version control tool? What is it used for and how is it used? And finally, what is so important about it? Those were some of the first questions for which I needed answers in order to move on.

What is a version control system?

In simple words, a version control (or source control, or revision control) system is a system used for the management of files, documents, the source code of computer programs, or anything else related to a large collection of information. Access to such a system is monitored, and its goal is to track all the changes made to the source, as well as to provide additional information about who made the changes, when and why, together with references to problems detected and fixed or optimisations achieved by the changes [1].

Version control systems are associated with two basic components: the repository and the working copy. Figure 1 shows the relation between them [2].

  • Repository: It is a database in which all the changes/edits implemented and all the historical versions (also known as snapshots) of a project are stored. The repository may sometimes contain changes that have not yet been applied to a working copy that one has created. In this case, an update command can be used, which updates the working copy with all the latest edits made by anyone working on the project.

  • Working copy: It is a personal copy of the whole project that one can store locally on a personal computer and work on without affecting the work of others. After the necessary changes have been made, the repository can be updated with them by using a simple commit command.


Figure 1


The version history stored in a repository can take two different forms: it is either linear or contains branches. The second case arises when multiple users make changes to the project at the same time and is known as branching [2].

The history of version control is long: such systems have been in use for several decades. They have evolved a lot since their beginnings, and today’s systems are known for their power and robustness [3]. Some of the most popular are Git, Mercurial and Subversion.

There are two general types of version control systems: centralised and distributed (Figure 2 [2]). The difference lies in the number of repositories used. In the first case, each user gets a working copy and there is only one central repository, whereas in the second there is still a central repository but each user also gets his/her own repository in addition to a working copy [2], [4].

Figure 2

Figure 2 also shows the sequence of actions that need to be done in each case for the update of both the central repository and the working copy [2].

Sometimes, when multiple users simultaneously edit the same piece of information, conflicts may arise that cannot be resolved automatically. The version control system cannot decide which version it should use to update the repository, and manual intervention is needed. But “It is better to avoid a conflict than to resolve it later.” [2]. Therefore, for situations like these, a very useful list of best practices has been suggested [2].

Why use a version control system?

The functions of a version control system give many reasons for which such a system should be used. The most important of these reasons are presented below [3], [5]:

  • Access to historical versions of a project: This function is extremely useful when data is lost, a computer crashes, or a developer realises that he/she has made a mistake and wants to return to a previous version of the project.

  • Concurrent project changes/edits by multiple team members: Each member has his/her own copy of the project, on which he/she works and makes changes without interfering with others’ work. When one’s edits are complete, they are shared and become available to the rest of the team.

  • Merging of the work done simultaneously by multiple team members: Work done by different developers can finally be merged. Merging typically occurs between two branches. It is usually implemented automatically without problems, but when conflicts arise some manual intervention is required.

  • Tagging: It is the creation of a snapshot of a project and it is particularly useful when a team needs to keep track of a product’s releases.

  • Branching: It gives the option of keeping a separate copy of the project that can be individually updated with the latest changes without affecting the work on other branches. This means that a developer can use multiple branches to experiment and work on different features of a system, keeping different releases of it that can be merged only after they have been successfully completed.

The above functions are mostly related to cases in which teams work on a project, but even someone who works alone can benefit from source control. There is still the option of checking past versions of code, observing previous changes and experimenting with new features without the fear of losing all the work that has been done so far.
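The repository, snapshot, branch and tag vocabulary above can be made concrete with a toy in-memory model, sketched here in Python. This is not how a real system such as Git stores data; the class and method names are invented purely for illustration.

```python
# A toy in-memory model of a version control repository -- not a real VCS,
# just a sketch of the commit/branch/tag concepts described above.
class TinyRepo:
    def __init__(self):
        self.commits = []                 # historical snapshots, in order
        self.branches = {"main": []}      # branch name -> list of commit ids
        self.tags = {}                    # tag name -> commit id

    def commit(self, branch, files, message):
        """Store a snapshot of `files` (a dict) on the given branch."""
        commit_id = len(self.commits)
        self.commits.append({"files": dict(files), "message": message})
        self.branches[branch].append(commit_id)
        return commit_id

    def branch(self, new_name, from_branch):
        """Start a new line of development from an existing branch."""
        self.branches[new_name] = list(self.branches[from_branch])

    def tag(self, name, commit_id):
        """Mark a particular snapshot, e.g. a product release."""
        self.tags[name] = commit_id

    def checkout(self, branch):
        """Return a working copy of the branch's latest snapshot."""
        latest = self.branches[branch][-1]
        return dict(self.commits[latest]["files"])

repo = TinyRepo()
first = repo.commit("main", {"app.py": "v1"}, "initial version")
repo.branch("feature", "main")               # experiment without risk
repo.commit("feature", {"app.py": "v2"}, "try a new feature")
repo.tag("release-1.0", first)               # mark the first release
print(repo.checkout("main"))                 # {'app.py': 'v1'}
print(repo.checkout("feature"))              # {'app.py': 'v2'}
```

The key point the sketch captures is that work on the `feature` branch leaves `main` untouched: each branch records its own sequence of snapshots, which is exactly what makes the experimentation described above safe.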

Remarks and conclusions

After some training sessions during which I received the above information, I was ready to use Git in practice. At the beginning I found it a bit confusing and even time-consuming. What are all these merge, push, pull and branch commands? I couldn’t see the advantages of using it, which the other members of the team had mentioned, and at some point I was just wondering: why are they making my life so complicated? It was when the team started to become bigger and bigger, with new members in other countries or even continents working remotely, and when I encountered some serious problems with my code, that I started to realise I was wrong.

Git was not making things complicated. On the contrary, it made cooperation and communication among the members of the team very easy and straightforward. It offered access to the whole project at any time and an environment for better project organisation and task allocation (with tags indicating the nature of each task and the member of the team responsible for it, and milestones pointing out the most urgent tasks). It also allowed anyone in the team to comment on any part of the project, which in most cases led to faster error detection and code optimisation. After realising and seeing in practice all the advantages of using a version control tool, I started wondering something different: how had I been working on projects and writing code all the previous years without such a tool?

Comparing my past and new ways of working, my conclusion was always the same: version control is crucial for software development, its benefits should be widely exploited, and version control systems should definitely be used in large-scale software projects as well as in smaller ones, whether worked on by teams or by solo developers. The selection of the appropriate tool depends on various factors, including personal preference, budget, and individual or team needs [3]. I completely disagree with those who are unwilling to incorporate such tools into the development process, with excuses such as that modifying server code directly saves time or that continuous merging is difficult, time-consuming and prone to errors. I must admit that these issues can sometimes be real, but the basic reason for them is a lack of knowledge and experience, which leads to bad use. This means that in some cases training may be required. As tasks and large-scale projects can get very complicated, most developers suggest that anyone who wants to work in a professional and competent manner should get accustomed to source control and start using the tools related to it.


[1] Stuart Yeates, “What is version control? Why is it important for due diligence?”, January 2005
[2] Michael Ernst, “Version Control Concepts and Best Practices”, September 2012
[3] Ilya Olevsky, “Why version control is critical to your success?”, March 2013
[4] Martin Fowler, “Version Control Tools”, February 2010
[5] Chris Nagele, “An introduction to version control”