All software project managers and their pets will tell you that the future of software development lies in Continuous Delivery. However, uptake of Continuous Delivery and the accompanying ‘DevOps’ culture remains well below a majority in large-scale firms: a recent survey by Perforce found that only 28% of US and UK firms regularly use automated delivery systems across all their projects. Why is such a useful concept not already commonplace? By focusing on the prize at the end of the road, are we ignoring some of the barriers along the way?
Where are we going?
Continuous Delivery (CD) is often confused with Continuous Integration, Continuous Deployment and ‘DevOps’, and is definitely in need of some clarification. Continuous Delivery is a software development methodology that attempts to automate the build, test and deploy stages of the production pipeline. It extends Continuous Integration by incorporating automated testing beyond unit tests, especially acceptance tests that exercise business logic, and by keeping the software releasable at all times. Continuous Deployment, on the other hand, is an extension of CD that takes the software and automatically releases it directly to users, which may not always be a practical option for enterprises.
‘DevOps’, on the other hand, is not a methodology or process. Rather, it is a philosophy forged through the interaction of development and operations teams. It grew from a desire to avoid long delays in delivery and software maintenance through closer collaboration and knowledge-sharing between the two departments. Although only recently gaining popularity, the essential principles of DevOps have existed internally in many organizations, simply as a response to business demand and competitive pressure. In this regard, CD embodies the DevOps philosophy, and in turn DevOps is an impetus to adopt CD.
The benefits of Continuous Delivery should be clear to most readers. By releasing features and bug-fixes to a production-like staging environment rapidly and reliably, developers can get immediate feedback on the readiness of a release candidate. Also, automated integration keeps change sets relatively small, so when problems do occur, they tend to be less complex and easier to troubleshoot. At the business level, CD ensures that each updated version of the program is a potential new release. However, despite its myriad of benefits, CD still faces numerous barriers to implementation.
It’s in the way you walk
One of the main barriers to CD is the culture of an organization. This is highlighted in the misalignment of incentives between development and operations teams.
Developers are measured against the quality and quantity of the software features provided to users. Operations teams, on the other hand, are concerned with the stability of the system after it has been delivered. This creates a tug-of-war: the development team has little incentive to ensure stability or to ease the workload for operations (a lack of clear documentation is one example), and likewise the operations team cares little for the frequency of new releases. I believe this disparity stems from the level of business-IT interaction (42% of business leaders view their IT department as order-takers rather than partners in innovation), and filters down through specialized requirements enforced by reporting structures and hierarchy within an organization. The only solution is to view the software project holistically, employing a single ‘delivery team’ rather than a string of component teams. This encourages team members to care about the effects of their work on the final release candidate, even if it was traditionally outside their ‘job description’.
Duvall describes some effective methods, such as training multi-skilled team members (e.g. with experience in infrastructure and databases) and building cross-functional teams. Amazon, for example, took this idea to heart with its “you build it, you run it” philosophy, trusting and encouraging developers to take the software all the way to production. Then again, Rob England’s damning review of ‘DevOps’ claims that splitting developers by expertise is backed by the sound principle of economies of scale, and that ‘DevOps’ implicitly assumes an unrealistic level of proficiency and enthusiasm in all employees to collaborate without supervision.
Two feet, two different shoes
The second factor slowing progress towards Continuous Delivery is the diversity of tools and processes that have sprung up around the pipeline, the lack of integration between these tools, and the lack of maturity in their usage.
Take a typical tool set used to implement a software pipeline: Jenkins builds and deploys against an environment, with Maven performing the build stage and Capistrano the deployment, and each environment is individually configured by different developers. When the software is handed to operations weeks later, AnthillPro is used to run the deployment, requiring further manual configuration changes. These overlapping configurations often lead to problems in one environment that cannot be reproduced in another, late discovery of defects and horrendous confusion in tracking down the cause of a problem.
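One way out of this configuration drift is to define the pipeline once, declaratively, and run the same stages against every environment with only the parameters changing. The sketch below is purely illustrative (the stage names, hosts and settings are invented, and a real runner would shell out to Jenkins, Maven and so on), but it shows the principle of a single source of truth for the delivery process.

```python
# Minimal sketch of a declarative pipeline shared by every environment.
# The stage list never varies per environment; only the parameters do.
# All names (hosts, stages, settings) are hypothetical.

PIPELINE = ["build", "unit-test", "package", "deploy"]

ENVIRONMENTS = {
    "staging":    {"host": "staging.example.com", "replicas": 1},
    "production": {"host": "prod.example.com",    "replicas": 3},
}

def run_pipeline(env: str) -> list[str]:
    """Run every stage against one environment and record what was executed."""
    config = ENVIRONMENTS[env]
    log = []
    for stage in PIPELINE:
        # A real implementation would invoke the build/deploy tools here;
        # this sketch just logs the action for each stage.
        log.append(f"{stage} on {config['host']} (replicas={config['replicas']})")
    return log
```

Because staging and production share one definition, a defect caused by the process itself shows up in both, rather than surfacing for the first time in production.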
Businesses ranked ‘integrating automation technologies’ and the lack of skilled people and correct platforms among their top five challenges to implementing CD. Some 20% of development teams lacked maturity even in source code management, and developers stressed that “tools were sometimes not mature enough for the task required of them”. It seems that in some cases, adopting an automated system simply replaces frustration over bug-fixes with frustration over getting the modules of the automation system to work together.
To overcome part of this problem, organizations need a defined platform strategy and architecture. They will need to make hard decisions on which tools to master and which to abandon, though this is sure to spark dispute amongst teams in the company.
The brick wall
For enterprises attempting to sprint down the road to CD, most run immediately into a wall: a large, monolithic enterprise system that is tightly coupled and heavily customized. Defining an automated process that copes with this complexity is a serious challenge, and one most managers shy away from.
This is particularly evident when relational databases need to be incorporated into the CD process. Once the first version is in use, the coded assets (e.g. stored procedures), the domain data (seed data that supports the application) and the transactional data (business records) all become integral to the system. The database schema is then difficult to change without risking the loss of this valuable data. Although keeping SQL scripts under source control helps automate database configuration, it does not address migrating data from existing systems and keeping it consistent. Meanwhile, CD demands production-like environments for testing, which often results in each developer running a personal, isolated database. Despite the existence of tools to handle this problem (e.g. Redgate’s SQL Compare), none has become mature enough for widespread popularity, and keeping track of all these databases becomes a daunting task.
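The usual approach to scripted database change is versioned migrations: each numbered script is applied at most once and recorded, so every copy of the database (developer, staging, production) converges on the same schema without hand-applied changes and without rebuilding existing data. This is a minimal sketch of the idea using SQLite; the table and scripts are invented for illustration, and it deliberately ignores the harder problems of rollback and large data migrations noted above.

```python
import sqlite3

# Hypothetical migration scripts, ordered by version number.
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn: sqlite3.Connection) -> int:
    """Apply any pending migrations; return the resulting schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)  # existing rows are preserved, not rebuilt
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            current = version
    conn.commit()
    return current
```

Running `migrate` is idempotent, so the same automated step can safely run in every environment on every deploy, which is exactly the property a CD pipeline needs from its database stage.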
Waiting for the Green Light
Continuous Delivery and the ‘DevOps’ movement have also been heavily criticized for their apparent inability to scale with the number of developers and the size of the project. The main problems surround continuous integration and testing.
As the dimensions of the project increase, the code-base expands and commit frequency rises. As individual versions of the project take longer to compile, test, deploy and deliver feedback, a ‘bottleneck’ forms in the pipeline. Because the team is forced to wait for broken-build fixes before committing their own changes, the skill of the individual developer impacts the performance of the team as a whole. Once the build takes more than ten or fifteen minutes, developers stop paying attention to feedback and may be incentivized to branch and merge later, which undermines the very principle of continuous delivery.
As Wheeler suggests, modularizing the code-base could be a solution, whereby different teams commit to the mainlines of independent components. Unfortunately, the extent to which you can modularize a project without overlaps depends on the project itself. Moreover, modularization brings its own set of challenges, such as building interfaces and coordinating between teams, and with different modules advancing at different paces, there tends to be a heavy reliance on integration testing.
This brings us to another pain point in CD: balancing the speed of automated testing against its coverage. Automated tests are only as good as the test-cases that underlie them and may give a false sense of security about build quality. Widening the coverage of unit and integration tests is good advice, but as the number of tests multiplies, so does delivery time. For example, integration and acceptance tests that touch the database or require communication between modules are vital to ensure the system works as a whole and delivers business value, but can take hours to complete in a full testing suite.
This issue can be handled somewhat with an architectural solution, using parallelization and throwing more resources into the mix (e.g. a dedicated machine for testing), but this takes a larger initial investment of time and money. Other suggestions, such as load partitioning and parallel builds, also help, but again they take time and expertise to develop and add a whole host of extra tools to juggle.
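The idea behind load partitioning can be sketched in a few lines: independent test suites are distributed across workers so the wall-clock cost approaches that of the slowest suite rather than the sum of them all. The suite names and durations below are invented, and `time.sleep` stands in for actually running the tests.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical suites with made-up durations (seconds).
SUITES = {"unit": 0.01, "integration": 0.03, "acceptance": 0.02}

def run_suite(name: str, duration: float) -> tuple[str, bool]:
    """Stand-in for executing one test suite; returns (suite, passed)."""
    time.sleep(duration)
    return name, True

def run_all_parallel(suites: dict[str, float], workers: int = 3) -> dict[str, bool]:
    """Run every suite concurrently and collect the pass/fail results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_suite, n, d) for n, d in suites.items()]
        return dict(f.result() for f in futures)
```

The caveat in the text still applies: the partitioning only works if the suites are genuinely independent, and someone has to build and maintain the machinery that keeps them that way.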
Despite the potential benefits of moving the pipeline towards Continuous Delivery, there are certainly many painful barriers to overcome. Organizational issues and culture are the first to surpass, through better management practices, a change in philosophy and a clear vision of what to achieve. Issues with tool integration and with efficient code-integration and testing suites should be approached at both a management and a technical level. However, there are signs that as more software-as-a-service providers develop under pressure from the CD movement, there will be less friction from the tools and infrastructure that support Continuous Delivery.
Perhaps it is incorrect to view Continuous Delivery as the end goal of a long road; rather, what matters is the journey toward CD. As Jez Humble puts it: “Given your current situation, where does it hurt the most? Fix this problem guided by the principles of continuous delivery. Lather, rinse, repeat. If you have the bigger picture in mind, every step you take towards this goal yields significant benefits.”
Caum, C. (2013). Continuous Delivery vs Continuous Deployment: What’s the Diff. Accessed on 11/2/2014 at <http://puppetlabs.com/blog/>
DevOpsGuys (2013). Continuous Delivery Adoption Barriers. Accessed on 3/2/2014 at <http://blog.devopsguys.com/>
Duvall, P. (2012). Breaking Down Barriers and Reducing Cycle Times with DevOps and Continuous Delivery. Gigaom Pro. Accessed on 11/2/2014 at <http://www.stelligent.com/blog>
England, R. (2011). Why DevOps Won’t Change the World Any Time Soon. Accessed on 10/2/2014 at <http://www.itskeptic.org>
Evans Research Survey of Software Development Professionals (2014). “Continuous Delivery: The New Normal for Software Development”. Perforce. Accessed on 7/2/2014 at <http://www.perforce.com/continuous-delivery-report>
Forrester Consulting (2013). Continuous Delivery: A Maturity Assessment Model. ThoughtWorks.
Humble, J. (2013). Continuous Delivery (video). Accessed on 11/2/2014 at <http://www.youtube.com/watch?v=skLJuksCRTw>
Magennis, T. (2007). Continuous Integration and Automated Builds at Enterprise Scale. Accessed on 9/2/2014 at <http://blog.aspiring-technology.com/>
Mccarty, B. (2011). Amazon is a Technology Company. We Just Happen to Do Retail. Accessed on 10/2/2014 at <http://thenextweb.com/>
Pais, M. (2012). Is the Enterprise Ready for DevOps? Accessed on 9/2/2014 at <http://www.infoq.com/articles/>
Viewtier Systems (2012). Addressing Performance Problems of Continuous Integration. Accessed on 13/2/2014 at <http://www.viewtier.com>
Wheeler, W. (2012). Large-Scale Continuous Integration Requires Code Modularity. Accessed on 5/2/2014 at <http://zkybase.org/blog>