Real-Time Collaborative Programming in Software Business?


Real-time collaborative programming describes a form of concurrent software development in which all developers in a project can edit the source code at the same time, with changes propagated in real time. As this can be a very interesting approach for improving collaboration between developers, it might be worth introducing such a technology into an industrial software project. Although an implementation of this technique also exists for the programming language Smalltalk [1], this blog post analyses the web-based collaborative Java IDE “Collabode” [2] with regard to its usefulness in the software business.

While this form of real-time collaboration in software engineering has a few advantages, such as improved pair programming, the disadvantages outweigh them. The main disadvantage is that a Version Control System (VCS) cannot be used in the normal way, so the benefits of using a VCS are lost. In summary, the approach considered has too many drawbacks to be used in the software business.


Screenshot of Collabode, Source: [2]

Overview of Collabode

In 2011, Goldman et al. published the web-based Java IDE Collabode for real-time collaboration in the browser, based on Eclipse. It makes concurrent editing of source code files possible for an arbitrary number of developers. [3] The system thereby behaves similarly to Google Docs, which provides real-time concurrent document editing.

Changes in the source code are propagated to the other developers as soon as possible, but with a certain error-awareness: a change is propagated as soon as it no longer causes compiler errors. If a change results in errors, it is kept in a local working copy, so the engineers do not break their common build. [4]
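This error-aware propagation can be sketched in a few lines. The following is a toy model, not Collabode’s actual implementation: the `compiles` check merely counts braces as a stand-in for a real compiler, and all class and function names are hypothetical.

```python
def compiles(source):
    """Stand-in for a real compiler check: code 'compiles' if braces balance."""
    return source.count("{") == source.count("}")

class SharedBuffer:
    """Toy model of a shared file: edits are propagated only once they compile."""

    def __init__(self, source=""):
        self.shared = source   # the text every developer sees
        self.working = {}      # per-developer local working copies

    def edit(self, author, new_source):
        """Apply an edit; share it immediately if it compiles, else keep it local."""
        if compiles(new_source):
            self.shared = new_source          # propagate to everyone
            self.working.pop(author, None)    # local copy no longer needed
        else:
            self.working[author] = new_source # stays local, shared build stays green

buf = SharedBuffer("class A {}")
buf.edit("alice", "class A { void f() {")      # broken: kept in Alice's working copy
shared_while_broken = buf.shared               # other developers still see old text
buf.edit("alice", "class A { void f() {} }")   # compiles: propagated to everyone
```

The point of the model is that the shared text never passes through a state that fails the compile check, which is exactly the build-protection property described above.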

The underlying multi-user editor is Etherpad [5]. [4] Etherpad supports simple versioning and access to previous versions of single files, but other functions of a modern VCS are not included.


According to [3], developers can benefit from using this approach in the following ways.

Firstly, it is easy to ask colleagues for help with small code snippets (“micro outsourcing”). Secondly, it enables Pair Programming on different machines. Finally, this approach improves teaching possibilities. [3]

In addition to these points, such a collaborative system can be particularly helpful when developers are not in the same place. Pair Programming thus becomes possible even if the colleagues are, for example, in different countries.

Furthermore, such an approach improves the ability to teach new colleagues how to develop the product. As the new developer can work on his own computer while collaborating with a senior team partner, he arguably gets more involved in the learning process. Moreover, this is also more convenient for the senior developer and may therefore reduce a possible unwillingness to teach, as he is able to work from his own computer while teaching.

Finally, with Collabode a developer does not need to commit manually to a Version Control System, which saves time. [4]

Minor Drawbacks

However, there are several disadvantages of using this real-time collaborative programming tool:

Firstly, even though Collabode uses Eclipse as a basis, the full Eclipse IDE is not available in the browser-based user interface. [4] Therefore, some useful IDE features cannot be used with this approach.

Secondly, more communication is necessary between the developers. Consider the case that developer A is implementing something that uses a class which developer B is refactoring at the same time: A might get inexplicable results. Therefore, practically every developer in the team must know what the others are doing at any moment. This communication overhead can reduce efficiency.

Thirdly, the approach has not been tested in large-scale projects, so further research is needed to be sure the system works in a large software project.

Main Disadvantage: VCS limitations

The main drawback of using this approach is that the normal use of a Version Control System is not possible. This section focuses on the most important reasons for using a VCS.

According to Somasundaram [6], the main advantages of using a Version Control System are the opportunity to keep different versions of the source code and the possibility to roll back to a previous state as a failsafe.

Concerning the first point, restoring previous versions of single files is possible thanks to the functionality of Etherpad. However, when one change affects multiple files, there is no record that these individual changes belong to one logical unit, as there would be with a manual commit. It is therefore difficult to figure out a consistent state to roll back to. Furthermore, the lack of commit comments makes such an operation even more difficult. So, both main advantages of a Version Control System mentioned above cannot be used in a practical way in Collabode.

Moreover, since there are no commit comments, an important source of information is missing, for example which bug a change refers to.

Finally, this way of automatically micro-committing everything makes Continuous Integration very difficult. It is much harder to find the commit that breaks a test among so many uncommented commits.


While the real-time collaborative programming approach as implemented in Collabode is certainly suitable for teaching and pair programming, its usefulness in the software industry is very limited. This is mainly because it circumvents the principles of committing to a Version Control System. If a company wants to benefit from the advantages of this new way of programming, it loses the much more important advantages of using a normal Version Control System. Therefore, I do not recommend using such an approach in a normal business software development process.

However, if a company focuses more on the improved collaboration and less on the benefits of using a VCS, Collabode can be taken into consideration. Partial use of the system for training sessions in the industry can also be advantageous.


[1] J. Jordan, “Wolf Pack Programming? | Cincom Smalltalk,” 2010. [Online]. Available: [Accessed 3 3 2014].
[2] “Collabode – Collaborative Coding in the Browser,” 2012. [Online]. Available: [Accessed 3 3 2014].
[3] M. Goldman, G. Little and R. C. Miller, “Collabode: Collaborative Coding in the Browser,” in Proceeding of the 4th International Workshop on Cooperative and Human Aspects of Software Engineering, New York, 2011.
[4] M. Goldman, G. Little and R. C. Miller, “Real-time collaborative coding in a web IDE,” in Proceedings of the 24th annual ACM Symposium on User Interface Software and Technology, New York, 2011.
[5] “Etherpad,” 2014. [Online]. Available: [Accessed 7 3 2014].
[6] R. Somasundaram, Git, Birmingham: Packt Pub., 2013.




Continuous Integration: Software Quality vs Resources

This is a response article to “Why don’t we use Continuous Integration?” [1] by user s1367762.


In his post [1], the author describes his working experience in a small start-up company, with mostly one developer at a time working on different small projects. As not all projects used Continuous Integration, the author gives, based on his own experience, a variety of reasons why not every software engineering company and project uses it.

This post discusses the various arguments that may prevent a company from implementing Continuous Integration for a software project. Generally, it is always a trade-off between software quality and resources.

Commit Frequency

The first point is that it may be difficult to make small incremental commits when a change requires a longer time before the whole software works again. Intermediate commits would therefore break the build and the whole Continuous Integration pipeline.

I absolutely agree with that argument. When, for example, a new module is developed, the whole software may be broken at first. Not committing to the mainline at least daily contradicts the basic rules of Continuous Integration; see for example [2] by Martin Fowler.

However, is that really a problem? From my personal business experience, it is very common and easy to implement a new feature in a separate version control branch, often referred to as a feature branch. This branch may have its own Continuous Integration processes to ensure quality, but it does not break the main build. When the feature is in a stable condition, the branch can be merged into the main product and become part of the general CI process. Martin Fowler’s blog post [3] describes this process in more detail.

Mainline Only

The second point mentioned by s1367762 is that there may be code that is not really part of the software project, for example code used only by a few customers for very special use cases. Therefore, it does not make sense to commit this code to the mainline as suggested by Continuous Integration.

I absolutely understand this point. However, if some code is not really part of the product, there is no need for Continuous Integration for these few special modules. From my point of view, CI can still be implemented while ignoring such modules.

Automated Tests

The third point concerns the effort of automating tests. I absolutely agree here: especially when dealing with GUI components, automating tests is time-consuming and difficult. Furthermore, without good code coverage Continuous Integration is less effective. However, it is better than no Continuous Integration at all. Again, this is clearly a trade-off between the time saved by not automating tests and final software quality.

Appropriate Timing, Direct Costs and Project ROI

In these three points the author states that it is more difficult to implement CI in an existing project that started without it. He furthermore describes the costs of learning to implement CI and of operating build and test machines as high. Finally, he contends that implementing Continuous Integration is not worth the effort for a short-term project without long-term maintenance.

All these points are completely understandable. To my mind, they all lead to one question for the project manager: how important is the quality of my project? If it is not a major requirement, for example because the software will be used only for a short period of time, Continuous Integration is not worth implementing.


In summary, s1367762 demonstrates well why Continuous Integration is not always a good idea in software projects. However, the first point regarding commit frequency in particular can easily be worked around by using feature branches, without completely losing the idea of Continuous Integration. Furthermore, if there are modules that do not really belong to the project, they can simply be excluded from the CI process. From my point of view, partially implemented CI is much better than no CI at all.

Finally, everything depends on the management decision whether maintaining quality by investing time and money is desired for a project. The company I worked for never questioned the importance of quality, so Continuous Integration was implemented in sophisticated detail. However, if quality is not a major concern in some projects, as s1367762 describes from his business experience, it is absolutely reasonable not to implement Continuous Integration for them.


[1] s1367762, “Why don’t we use Continuous Integration? | SAPM: Course Blog,” [Online]. Available: . [Accessed 27 2 2014].
[2] M. Fowler, “Continuous Integration,” 2006. [Online]. Available: . [Accessed 27 2 2014].
[3] M. Fowler, “FeatureBranch,” 2009. [Online]. Available: . [Accessed 27 2 2014].




Continuous Delivery: An Easy Must-Have for Agile Development


Everybody working in software development has heard them when software quality assurance comes up: terms that begin with “Continuous” and end with “Integration”, “Build”, “Testing”, “Delivery” or “Inspection”, to name just a few. The differences between these terms are sometimes hard to tell, and their meanings vary depending on who uses them. This post discusses how easily Continuous Delivery can be implemented.

For clarification, Continuous Delivery is defined as described by Humble and Farley in their book “Continuous Delivery” [1]. In this highly recommendable book, a variety of techniques (including all the other terms mentioned in the previous paragraph) for continuously assuring software quality are described. [1] Adopting these techniques requires neither much effort nor much experience and should be done in every software project. Especially in large-scale software projects, this technique helps to maintain high software quality.

Errors First Discovered by the Customer

In a software project with many engineers working on the same code base, unexpected side effects of source code changes are very likely to result in erroneous software. If there are automated unit tests, most of these errors are detected automatically. Unfortunately, however, there are some unexpected run-time side effects that only occur when the software runs on a particular operating system. In a normal development process, such errors are detected at the worst point possible: when the customer deploys or uses the software. This results in high expenses for fixing the issue urgently.

Continuous Delivery was developed to prevent those kinds of errors. As Carl Caum from Puppet Labs puts it in a nutshell, Continuous Delivery does not mean that a software product is deployed continuously, but that it is proven to be ready for deployment at any time. [2] As described in [3], an article by Humble and Molesky, Continuous Delivery introduces automated deployment tests to achieve this goal of deployment-readiness at any time. [3] This post focuses on those deployment tests, as they are the core of Continuous Delivery.

Implementing and Automating Continuous Delivery

To prove that software works in production, it needs to be deployed on a test system. This section explains how to implement such automatic deployment tests.

Firstly, the introduction of a so-called DevOps culture is useful, meaning closer collaboration between software developers and operations staff. [3] Each developer should understand the basic operations tasks and vice versa, in order to build up sophisticated deployments. Even though [3] describes this step as necessary, from my point of view such a culture is advantageous for Continuous Delivery but not mandatory for success: automated deployment tests can be developed without the help of operations, although it is certainly more difficult. More detailed information about DevOps can be found, for example, in the book “DevOps for Developers” by Michael Hüttermann [4].

Secondly, as explained in a blog post by Martin Fowler [5], it is crucial to automate everything within the process of delivering software. The following example shows a simplified, ideal Continuous Delivery process:

  1. Developer John modifies product source code
  2. Test deployment is triggered automatically due to a change in the version control system
  3. Deployment is tested automatically, giving e-mail feedback to John that his source code breaks something in production
  4. John realizes he forgot to check in one file and fixes the error promptly
  5. Steps 2 and 3 repeat; this time John does not receive an e-mail, as the deployment tests do not find any misbehaviour of the product.
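The steps above can be sketched as a small simulation. This is a minimal model of the feedback loop, not a real CI system: `deploy_to_test`, `deployment_tests_pass` and `on_commit` are hypothetical names, and a “missing file” flag stands in for whatever actually breaks the deployment.

```python
def deploy_to_test(revision):
    """Pretend to deploy a revision to a production-like test system."""
    return {"revision": revision, "deployed": True}

def deployment_tests_pass(deployment):
    """Pretend to run deployment tests: a revision 'breaks production'
    here if it is marked as missing a file."""
    return not deployment["revision"].get("missing_file", False)

def on_commit(revision, inbox):
    """Triggered by a version control change: deploy, test, e-mail on failure."""
    deployment = deploy_to_test(revision)
    if not deployment_tests_pass(deployment):
        inbox.append("to %s: deployment test failed" % revision["author"])
        return False
    return True  # silent on success: no news is good news

inbox = []
# John's first commit forgot a file, so the pipeline e-mails him (steps 1-3).
first_ok = on_commit({"author": "john", "missing_file": True}, inbox)
# After he checks in the file, the pipeline stays silent (steps 4-5).
second_ok = on_commit({"author": "john", "missing_file": False}, inbox)
```

The design choice worth noting is that feedback is only sent on failure, so developers are interrupted exactly when their change would not survive deployment.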

Such a process can, for example, be automated completely with the software Jenkins [6] and its Deployment Pipeline Plugin. Detailed instructions for such a setup can be found in the blog post [7].

However, such a continuous process is not a replacement for other testing (unit testing etc.) but an addition to it: an additional layer of software quality assurance.

Steven Smith states in his blog post [8] that introducing Continuous Delivery requires radical organisational changes and is therefore difficult for a company. I partly disagree, because it depends on the type of company. If a company uses old-fashioned, waterfall-like development methods, Smith is right. For a company that already develops software in an agile way, however, Continuous Delivery is nothing more than additional automated testing. It does not require people to change their habits in this case, as the developers are used to continuous testing methods. The only additional work is to maintain deployment scripts and to write deployment-specific tests.

Configuration Management Systems and Scripting

In order to perform deployment tests, scripts are needed for the automation. These scripts can be written in any scripting language, for example in Bash (shell scripts). However, there are more sophisticated approaches using so-called Configuration Management Systems such as Puppet [9] or Chef [10]. According to Adam Jacob’s contribution “Infrastructure as Code” to the book “Web Operations” [11], using a Configuration Management System’s scripting language leads to the following advantages:

Firstly, such deployment scripts are declarative: the programmer only describes what the system should look like after executing the script, without needing to describe in detail how this should be done. Secondly, the scripts are idempotent, so they only apply the modifications to the system that are necessary. Executing the same script on the same host always leads to the same state, regardless of how often it is executed. [11]
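The idempotent, declarative style can be illustrated in plain Python rather than a Puppet or Chef DSL. The helper `ensure_line` below is hypothetical: it declares what a configuration file should contain and changes the file only if that desired state is not already present, so running it twice is harmless.

```python
import os
import tempfile

def ensure_line(path, line):
    """Ensure that `line` occurs in the file at `path`.
    Returns True if a change was applied, False if the system
    was already in the desired state (idempotent behaviour)."""
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()
    if line in lines:
        return False              # already converged: do nothing
    with open(path, "a") as f:
        f.write(line + "\n")      # apply only the necessary modification
    return True

# Declaring the same desired state twice leaves the same final state.
config = os.path.join(tempfile.mkdtemp(), "sshd_config")
first_run = ensure_line(config, "PermitRootLogin no")   # applies the change
second_run = ensure_line(config, "PermitRootLogin no")  # detects convergence
```

Real Configuration Management Systems generalise this pattern to packages, services and users, which is why the same manifest can be run safely on every deployment.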

For these reasons, the scripting facilities of a Configuration Management System are superior to Bash scripting. Furthermore, they provide better readability, better maintainability and lower complexity compared to similar Bash scripts.


From my software business experience, it is easy to implement Continuous Delivery step by step in an agile-thinking company. The main things to focus on are the following: firstly, such an implementation should be fully automated and integrated with the version control system. Secondly, a Configuration Management System is highly recommendable because it makes deployment scripting easier. Furthermore, such scripts provide better maintainability, which saves resources.

The goals achieved by implementing Continuous Delivery are twofold: firstly, the release process is optimised, making it possible to release almost automatically. Secondly, developers get immediate feedback when their source code does not work in a production-like environment.

In conclusion, Continuous Delivery leads to considerably better software and can be introduced into an agile company without much effort.


[1] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Pearson Education, 2010.
[2] C. Caum, “Continuous Delivery Vs. Continuous Deployment: What’s the Diff?,” 2013. [Online]. Available: [Accessed 2/2/2014].
[3] J. Humble and J. Molesky, “Why Enterprises Must Adopt Devops to Enable Continuous Delivery,” Cutter IT Journal, vol. 24, no. 8, p. 6, 2011.
[4] M. Hüttermann, DevOps for Developers, Apress, 2012.
[5] M. Fowler, “Continuous Delivery,” 2013. [Online]. Available: [Accessed 2/2/2014].
[6] “Jenkins CI,” 2014. [Online]. Available: [Accessed 2/2/2014].
[7] “Continuous Delivery Part 2: Implementing a Deployment Pipeline with Jenkins « Agitech Limited,” 2013. [Online]. Available: [Accessed 2/2/2014].
[8] S. Smith, “Always Agile · Build Continuous Delivery In,” 2013. [Online]. Available: [Accessed 3/2/2014].
[9] “What is Puppet? | Puppet Labs,” 2014. [Online]. Available: [Accessed 2/2/2014].
[10] “Chef,” 2014. [Online]. Available: [Accessed 2/2/2014].
[11] A. Jacob, “Infrastructure as Code,” in Web Operations: Keeping the Data On Time, O’Reilly Media, 2010.