Why don’t we use Continuous Integration?

1. Introduction:

While reading articles and blog posts ([1], [2], [3]) on Continuous Integration, it is easy to conclude that this approach ensures code quality, reduces risk and sets programmers' minds at ease. It does not depend on any particular software development methodology. If that is so, I thought, everybody should use it for the projects they carry out. However, this is not the case – many companies still have not adopted CI in their development, and I asked myself why. This article aims to answer that question.

2. Continuous Integration itself:

Continuous Integration [1] is a software development practice that assumes constant integration of the code under development. Developers should check in to the integration server several times a day. A code submission is not complete until compilation, tests and inspection have all been carried out without revealing any problems; otherwise the developer should take immediate measures to fix the issues. The most important outcome is a high probability that the software works, and that the newest version is available to whoever wants it.
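As a hedged sketch, the check-in gate described above can be expressed as a small shell script; the real commands behind each step (here replaced by `true`, with `make` targets only suggested in comments) are project-specific assumptions, not part of any particular CI tool:

```shell
#!/bin/sh
# Sketch of a CI check-in gate: the submission counts as done only if
# compilation, tests and inspection all pass without revealing problems.
set -e  # abort the gate on the first failing step

run_step() {
    # Run one pipeline stage and report its result.
    name=$1; shift
    if "$@"; then
        echo "PASS: $name"
    else
        echo "FAIL: $name - fix this before the check-in is complete"
        return 1
    fi
}

run_step "compile"    true   # e.g. make all
run_step "unit tests" true   # e.g. make test
run_step "inspection" true   # e.g. make lint
echo "build OK: the newest working version is available to everyone"
```

On a real integration server these steps would be triggered by each check-in rather than run by hand.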

3. Background:

Some of the issues I raise in this article are backed up with examples from my experience at a company I worked for, so a brief introduction to it is necessary. The firm is a small start-up that manufactures home automation devices. The software developed there consists of long-term projects that are large compared to the number of people working on them. The main software developed there is:

  • low-level programs for field devices (sensors and actuators);
  • applications for controllers running embedded (mainly Linux) operating systems or for mobile devices; this was the largest piece of software, constantly extended with new modules;
  • an installation configuration system for desktop computers.

When I joined the company, there was only one experienced programmer working for it. After some time he left, leaving me as the only permanent developer, with others hired part-time for particular tasks when the need arose. Do not take away the impression that the company worked purely in terms of anti-patterns – I mention its bad aspects here only because the article requires it. I believe there are many companies, especially small ones, that work the same way.

4. Why CI has not been implemented yet:

Despite its many advantages, CI does not make its way into every project under development. Most people seem to know its benefits and what it brings to a project. It sounds like a panacea for many software development issues – it enforces good programming practices and forces developers to be consistent. However, aspects such as the mindset of both managers and programmers, developers' habits and abilities, or the company's targets (e.g. money first, quality later) stop CI from becoming part of their projects.

4.1. Commits frequency:

First of all, I find that a basic assumption of CI can be problematic in some circumstances: very frequent check-ins. To achieve them, programmers' commits have to be small, which poses the difficulty of dividing work into really tiny chunks [2]. Even though developers tend to modularize and divide their work into manageable pieces, it is often hard to reach such granularity that each modification is ready in a few hours. Every feature requires a full life cycle of consideration, planning and implementation. Especially if the feature is to be tested before check-in, it is hard to compress its development time.

Moreover, incremental committing sometimes just does not work. This can happen, for instance, if the software architecture was not well thought out – an issue that may be even more frequent in Agile development. Say one needs to modify part of the code to adapt it to changed requirements. If multiple modules depend on that change, the tests may fail unless the modification is made in its entirety. Thus a bigger change has to be made, and it can take more time before the program works again as a whole. Then, in order not to break the build, the commits may stay out of the main repository for a few days, or however long is needed.

The above example may be slightly exaggerated, but there is more to it – human mentality. Programmers who start working on a piece of the project at the beginning of the day naturally tend to analyse the problem first and, once a solution is chosen, implement it. That means most commits occur at the end of the day, which makes integration even more troublesome – they have to queue for the integration server, and that takes time.

Increasing commits to the high frequency expected by CI requires a significant shift in mindset. If there is no mentor to help with the process (e.g. an experienced manager, or a senior developer who has done CI before), this transition may become a barrier that is never crossed.

4.2. Mainline only:

Another aspect that seems counter-intuitive, and may result in turning CI down, is the requirement to keep all code in the mainline. This seems even more unreasonable given that CI assumes a broken build gets the developer's highest priority. The problem is that it is not uncommon for programmers to work on a feature in spare moments, between other, more important tasks. Such a feature may not be usable or functional for a long period, so incorporating it would very probably break the build.

In my company, many projects (mostly extensions to the controller software) were started because there was a need for the functionality. However, there was no need for them to be completely finished or polished (e.g. as easily configurable as the rest of the software), because they were meant to be used in just a few instances. Once a proof of concept was working, no additional time was devoted to finishing it and making it a cohesive part of the whole project. The reason was, of course, limited resources. If the need arose they were meant to be continued, but until then they had no appropriate tests and did not act as a cohesive part of the project. Thus they were held on separate branches, which according to [1] is against the CI idea.

The main reason some modules should not be on the mainline is the lack of resources managers devote to them. If a module is not finished or its tests are not complete, and other duties have higher priority, committing it to the main code is not what CI expects.

4.3. Automate your tests:

Creating automated tests for every part of the project when introducing CI may pose another difficulty. As mentioned, automated tests are a very important part of the CI process, and they should cover the code well to reduce the risk of potential bugs [4]. Even though writing tests is a common practice, some of them are very difficult to automate.

Our team had no one specialized in testing. As a result, testing the more complex modules, such as those responsible for configuration generation (writing configuration validation was very difficult) or some GUI components, was challenging and time-consuming. Given the limited number of developers and limited time, some tests were carried out manually or by a group of application users (beta testers). In the latter case, when a bug was detected, its harmfulness was estimated first and a fix priority was set. So a "small" bug might be left in place until its time came, which again goes against the CI idea.

One may argue that the difficulty of writing automated tests is not a point against Continuous Integration: CI can still be used with small coverage, or with no code tested at all. However, in that case it is much less effective (only compilation correctness is proved), and convincing a manager of CI's usefulness becomes much harder. So when the decision about taking the CI track is being made, there are fewer advantages to point to.

4.4. Appropriate timing:

One important aspect that should be addressed is the point in the project life cycle at which CI is introduced. It is much easier to run a project with CI if it was started with CI in mind. Then all components of the project (tests, scripts, even its architecture) are prepared to be compatible with the idea and with the tools used for CI.

It is not easy, though, if the project is already ongoing (as is often the case with long-term projects). Then CI may require modifications too significant, or additions too time-consuming, for it to be considered a good solution (though this view is short-sighted). In such situations the advantages and costs of switching to CI should be estimated – the costs may be judged too high, so even if the team is willing, managers do not agree.

4.5. Direct costs:

An obstacle to introducing CI that should not be forgotten is the cost of integration servers and of the time required to run these machines [5]. There are many open-source projects, such as CruiseControl [6], that can be used for the purpose; however, some training is still needed (especially when no one in the team has done it before). The hardware cost depends on the type of project being tested. If the project is small and the tests are not resource-consuming, an old computer may suffice [7]; otherwise a newer machine is needed. For more complicated processes, where the build time exceeds the ten-minute limit [1], the build should be split into several stages spread over multiple computers. Again, this may pose significant costs, especially for smaller companies. Larger ones will probably not see it as a drawback, since compared to the salaries they pay their software engineers it is an insignificant cost.
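As an illustrative sketch of such a split (the stage contents are hypothetical placeholders, not real build targets), a fast commit stage can gate the check-in while the slower suites run as a later stage, ideally on a separate build agent:

```shell
#!/bin/sh
# Sketch of a staged build: only the fast commit stage must stay under
# the ten-minute limit; slower suites run afterwards. The echoed
# descriptions stand in for real, project-specific build commands.
set -e

commit_stage() {
    # Its failure blocks the commit immediately.
    echo "commit stage: compile and fast unit tests"   # e.g. make all test-fast
}

secondary_stage() {
    # On real hardware this job would be dispatched to another machine.
    echo "secondary stage: integration and acceptance tests"   # e.g. make test-slow
}

commit_stage
secondary_stage
```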

4.6. Project ROI:

The last thing worth mentioning while discussing CI – not connected to large projects – is return on investment [5]. For small programs with a short life cycle, especially those where long-term maintenance is not assumed, the burden of setting everything up may be too high.

5. Summary:

In my opinion, the issues presented above are the most important ones preventing development teams from introducing Continuous Integration. A team considering CI should check whether they apply to it; if so, the team should not be discouraged, but should try a more step-by-step approach. Continuous Integration has significant advantages, which are especially obvious in projects with more than one developer. It is worth the effort in the longer run, since it reduces integration risk, gives a better overview of the project state, and encourages developers to work proactively and not to procrastinate. It is a really good way to achieve a high-quality product just by sticking to a few general rules.

6. Bibliography:

[1] http://www.martinfowler.com/articles/continuousIntegration.html
[2] http://www.jrothman.com/blog/mpd/2011/12/is-the-cost-of-continuous-integration-worth-the-value-on-your-program-part-1.html (also parts 2 and 3)
[3] Paul Duvall, Stephen M. Matyas, and Andrew Glover. 2007. Continuous Integration: Improving Software Quality and Reducing Risk (The Addison-Wesley Signature Series). Addison-Wesley Professional
[4] http://www.ibm.com/developerworks/java/library/j-ap11297/index.html
[5] http://stackoverflow.com/questions/214695/what-are-some-arguments-against-using-continuous-integration
[6] http://cruisecontrol.sourceforge.net/
[7] http://www.jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html
[8] http://callport.blogspot.co.uk/2009/02/continuous-integration-should-we-always.html

Continuous Delivery: An Easy Must-Have for Agile Development


Everybody working in software development has heard them when software quality assurance comes up: terms that begin with "Continuous" and end with "Integration", "Build", "Testing", "Delivery" or "Inspection", to name just a few. The differences between these terms are sometimes hard to tell, and their meanings vary depending on who uses them. This post discusses how easily Continuous Delivery can be implemented.

For clarification, Continuous Delivery is defined as described by Humble and Farley in their book "Continuous Delivery" [1]. This highly recommendable book describes a variety of techniques (including all the other terms mentioned in the previous paragraph) for continuously assuring software quality. Adopting these techniques does not require much effort or experience and should be done in every software project. Especially in large-scale software projects, they help to maintain high software quality.

Errors First Discovered by the Customer

In a software project with many engineers working on the same code base, unexpected side effects of source code changes are very likely to result in erroneous software. With automated unit tests, most of these errors are detected automatically. Unfortunately, however, some unexpected run-time side effects occur only when the software runs on a particular operating system. In a typical development process, such errors are detected at the worst possible point: when the customer deploys or uses the software. This results in high expenses for fixing the issue urgently.

Continuous Delivery was developed to prevent these kinds of errors. As Carl Caum from Puppet Labs puts it in a nutshell, Continuous Delivery does not mean that a software product is deployed continuously, but that it is proven to be ready for deployment at any time [2]. As Humble and Molesky describe in their article [3], Continuous Delivery introduces automated deployment tests to achieve this goal of deployment-readiness at any time. This post focuses on those deployment tests, as they are the core of Continuous Delivery.

Implementing and Automating Continuous Delivery

To prove that software works in production, it needs to be deployed on a test system. This section explains how to implement such automatic deployment tests.

Firstly, introducing a so-called DevOps culture is useful: a closer collaboration between software developers and operations staff [3]. Each developer should understand the basic operations tasks, and vice versa, in order to build up sophisticated deployments. Even though [3] describes this step as necessary, from my point of view such a culture can be advantageous for Continuous Delivery but is not mandatory for success: automated deployment tests can be developed without the help of operations, although it is certainly more difficult. More detailed information about DevOps can be found, for example, in the book "DevOps for Developers" by Michael Hüttermann [4].

Secondly, as Martin Fowler explains in a blog post [5], it is crucial to automate everything within the software delivery process. The following example shows a simplified, ideal Continuous Delivery process:

  1. Developer John modifies the product source code.
  2. A test deployment is triggered automatically by the change in the version control system.
  3. The deployment is tested automatically, giving John e-mail feedback that his change breaks something in production.
  4. John realizes he forgot to check in one file and fixes the error promptly.
  5. Steps 2 and 3 repeat; this time John receives no e-mail, as the deployment tests find no misbehaviour in the product.
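The automated part of this loop (steps 2 and 3) can be sketched in shell as follows; `deploy_to_test`, `smoke_test` and the address are hypothetical stand-ins for real deployment scripts and a mail gateway:

```shell
#!/bin/sh
# Sketch of steps 2-3 as an automated post-commit job. Every command
# here is a placeholder; a real pipeline would invoke actual deployment
# scripts and send real e-mail on failure.

deploy_to_test() { echo "deploying revision $1 to the test system"; }
smoke_test()     { true; }   # e.g. probe the deployed service and check it answers
notify()         { echo "MAIL to $1: revision $2 breaks the deployment"; }

check_revision() {
    rev=$1
    deploy_to_test "$rev"                   # step 2: triggered by the VCS change
    if smoke_test; then                     # step 3: automated deployment test
        echo "revision $rev is proven deployable"
    else
        notify "john@example.com" "$rev"    # step 3: e-mail feedback to the developer
    fi
}

check_revision "r42"
```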

Such a process can, for example, be automated completely with the software Jenkins [6] and its Deployment Pipeline Plugin. Detailed instructions for such a setup can be found in the blog post [7].

However, such a continuous process is not a replacement for other testing (unit testing etc.) but an addition to it – an extra layer of software quality assurance.

Steven Smith states in his blog post [8] that Continuous Delivery requires radical organisational changes and is therefore difficult to introduce in a company. I partly disagree, because it depends on the type of company. If a company uses old-fashioned, waterfall-like development methods, Smith is right on that point. For a company that already develops software in an agile way, however, Continuous Delivery is little more than additional automated testing. It does not require people to change their habits in this case, as the developers are already used to continuous testing methods. The only additional work is maintaining deployment scripts and writing deployment-specific tests.

Configuration Management Systems and Scripting

To perform deployment tests, scripts are needed for the automation. These scripts can be written in any scripting language, for example in Bash (shell scripts). However, there are more sophisticated approaches using so-called configuration management systems such as Puppet [9] or Chef [10]. According to Adam Jacob's contribution "Infrastructure as Code" to the book "Web Operations" [11], using a configuration management system's scripting language brings the following advantages:

Firstly, such deployment scripts are declarative: the programmer only describes what the system should look like after the script has run, without describing in detail how this is to be done. Secondly, the scripts are idempotent: they apply only the modifications that are actually necessary, and executions of the same script on the same host always lead to the same state, regardless of how often the script is run [11].
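The idempotence property can be imitated even in plain shell, as a minimal sketch (paths and the config line are illustrative): each helper first checks the desired state and acts only if it is missing, so repeated runs converge to the same result. A configuration management system performs this check-before-act for every declared resource automatically:

```shell
#!/bin/sh
# Sketch of idempotent, state-describing configuration steps in plain shell.

ensure_dir() {
    # Desired state: the directory exists; do nothing if it already does.
    [ -d "$1" ] || mkdir -p "$1"
}

ensure_line() {
    # Desired state: the file contains the given line (exactly once).
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

ensure_dir  /tmp/demo-app
ensure_line /tmp/demo-app/app.conf "port = 8080"
ensure_line /tmp/demo-app/app.conf "port = 8080"   # re-running changes nothing
```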

For these reasons, the scripting facilities of configuration management systems are superior to Bash scripting: they provide better readability and maintainability, and lower complexity, than comparable Bash scripts.


In my experience in the software business, Continuous Delivery is easy to introduce step by step in an agile-thinking company. The main things to focus on are the following: firstly, the implementation should be fully automated and integrated with the version control system; secondly, a configuration management system is highly recommendable because it makes deployment scripting easier and the resulting scripts more maintainable, which saves resources.

The goals achieved by implementing Continuous Delivery are twofold: firstly, the release process is optimised, making almost fully automatic releases possible; secondly, developers get immediate feedback when their source code does not work in a production-like environment.

In conclusion, Continuous Delivery leads to considerably better software and can be introduced into an agile company without much effort.


[1] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Pearson Education, 2010.
[2] C. Caum, “Continuous Delivery Vs. Continuous Deployment: What’s the Diff?,” 2013. [Online]. Available: http://puppetlabs.com/blog/continuous-delivery-vs-continuous-deployment-whats-diff [Accessed 2/2/2014].
[3] J. Humble and J. Molesky, “Why Enterprises Must Adopt Devops to Enable Continuous Delivery,” Cutter IT Journal, vol. 24, no. 8, p. 6, 2011.
[4] M. Hüttermann, DevOps for Developers, Apress, 2012.
[5] M. Fowler, “Continuous Delivery,” 2013. [Online]. Available: http://martinfowler.com/bliki/ContinuousDelivery.html [Accessed 2/2/2014].
[6] “Jenkins CI,” 2014. [Online]. Available: http://jenkins-ci.org/ [Accessed 2/2/2014].
[7] “Continuous Delivery Part 2: Implementing a Deployment Pipeline with Jenkins « Agitech Limited,” 2013. [Online]. Available: http://www.agitech.co.uk/implementing-a-deployment-pipeline-with-jenkins/ [Accessed 2/2/2014].
[8] S. Smith, “Always Agile · Build Continuous Delivery In,” 2013. [Online]. Available: http://www.stephen-smith.co.uk/build-continuous-delivery-in/ [Accessed 3/2/2014].
[9] “What is Puppet? | Puppet Labs,” 2014. [Online]. Available: http://puppetlabs.com/puppet/what-is-puppet [Accessed 2/2/2014].
[10] “Chef,” 2014. [Online]. Available: http://www.getchef.com/chef/ [Accessed 2/2/2014].
[11] A. Jacob, “Infrastructure as Code,” in Web Operations: Keeping the Data On Time, O’Reilly Media, 2010.