Duality of Project Management: Objective vs. Subjective factors

1. Introduction

Objective factors = data and procedures regarding the project management process
Subjective factors = human component in the development of the project

Project management is an important part of any large-scale project that requires the coordinator to oversee all the activities in order to synchronize them efficiently. Several techniques and analyses have been developed in order to aid the managers in optimizing costs and time.

The main purpose of project management is thus to help managers develop and implement complex plans of execution and to make the most of the available resources. However, it has typically been viewed from a logistic point of view, rather than a motivational one. This article aims to uncover another side, and perhaps a potential advantage, of such a system that has not yet been widely discussed.

Instead of focusing only on planning and coordinating the sub-tasks of a large-scale project, managers should also consider the human factor involved in such a project. As a developer of a small part of the project (relative to the whole), it’s easy to lose track of the main goal and focus only on the specific milestones you have to achieve. This should be the point, right? Do your part and don’t worry about what others are doing. Well, I say there is another side that few people consider. What if a developer would show more motivation if he knew how the project should develop, what the final goal is, and how his work will be reflected in the end?

In the next sections I will describe the standard techniques for dealing with a large-scale project from a management point of view, and what benefits can be added to that already optimized process.

2. What is Project Management?

In principle, project management is responsible for dividing the task into small, “atomic” goals and planning them with respect to the availability of resources. In other words, it assigns people to tasks in a specific order under given constraints (one or more tasks must finish before another can commence).

Over the years, this approach has proved very useful, especially for managers, who gain a broad view over the project and can adjust the variables that compose it in order to reach the optimal solution.

Next, we will discuss only those aspects of a project that can be improved by stimulating its “roots”, the individual developers. The classical model is divided into 5 main categories: Planning, Organising, Communication, Control and Evaluation. We will discuss only three of these: planning, organising and communication.

3. Project Management methods: can objective techniques be translated into subjective ones?

Regarding procedures for tackling the project, two main methods stand out: CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique). CPM uses deterministic estimates of task duration and focuses more on the trade-off between cost and time. PERT also uses estimates of task duration, but adopts a more probabilistic approach, predicting the likelihood of on-time project completion. Usually the two techniques are used together to output more precise data. Software packages currently on the market implement these methods in order to reduce the complexity of the job.
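As a small sketch of how PERT’s probabilistic approach combines duration estimates, here is the standard three-point formula (the example activity and its durations are made up for illustration):

```python
# PERT three-point estimate: combine optimistic (O), most likely (M)
# and pessimistic (P) duration guesses into an expected duration and a
# standard deviation for the activity.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical activity: 2 days best case, 4 days typical, 12 days worst case.
expected, std_dev = pert_estimate(2, 4, 12)
print(expected)  # 5.0 days
```

Summing the expected values along the critical path gives the expected project duration, and the per-activity variances feed the on-time-completion probability.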

This combined approach is divided into a series of sub-tasks: defining the project, splitting the project into sub-projects, defining dependencies, etc. This gives the manager a clear guideline, telling him what to do at each step. However, the developer is only concerned with following the instructions, without having the whole “picture” in mind, and therefore doesn’t always know how he contributes to the whole project.

It is important to note at this point that the participants in the project do not need to know every aspect of the management process, only those parts that relate directly to them. In order to discuss these aspects, some technical details of the management process need to be defined.

Every process has a start and an end state: these represent the terminal nodes of the process network. Each task is defined by its earliest start time, latest start time, earliest finish time, latest finish time and duration:

  • Earliest finish time = earliest start time + duration
  • Latest start time = latest finish time – duration
  • Earliest start time = largest earliest finish time of all immediate predecessors
  • Latest finish time = smallest latest start time of all immediate successors

Thus, project duration = largest earliest finish time of all activities.


  • Total float = time by which activity can be delayed without affecting project duration: Late start time – Early start time OR 0 if activity is critical
  • Free float = time by which activity can be delayed without affecting project duration or the early start times of subsequent activities: smallest early start time of immediate successors – early finish time
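The definitions above can be sketched as a forward and backward pass over a small activity network (the activity names, durations and dependencies here are illustrative, not from any real project):

```python
# Forward/backward pass over a small activity network, following the
# formulas above. Activities and durations are illustrative.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start/finish times.
es, ef = {}, {}
for a in ["A", "B", "C", "D"]:  # topological order
    es[a] = max((ef[p] for p in predecessors[a]), default=0)
    ef[a] = es[a] + durations[a]

project_duration = max(ef.values())  # largest earliest finish time

# Backward pass: latest finish/start times.
successors = {a: [b for b in durations if a in predecessors[b]] for a in durations}
lf, ls = {}, {}
for a in ["D", "C", "B", "A"]:  # reverse topological order
    lf[a] = min((ls[s] for s in successors[a]), default=project_duration)
    ls[a] = lf[a] - durations[a]

# Floats, as defined above.
total_float = {a: ls[a] - es[a] for a in durations}
free_float = {a: min((es[s] for s in successors[a]), default=project_duration) - ef[a]
              for a in durations}
print(project_duration, total_float, free_float)
```

In this toy network the critical path is A-C-D (floats of 0), while B can slip by 2 time units without affecting the project duration.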

4. A basic example: variables capable of pushing optimality even further

In the example below, the main project has been divided into separate activities, each with its specific time and resource requirements. These are plotted on a timeline, following the restrictions or dependencies. However, because some conditions are weaker than others, or because some activities are estimated to take less time than others, some discrepancies arise. These discrepancies give some tasks a certain degree of freedom in their start time.

The activities (marked by the green bars) can pivot within their free and total float lines. This means that even if a task is completed ahead of time, the whole process can continue only when all conditions are satisfied (i.e. all other required activities are completed). This can be compared to the bottleneck effect present in most lines of production (factories, etc.): “the system moves as fast as the slowest of its components”.

[Figure: timeline of activities with their durations and resource requirements]

5. What can be done to drive the optimization even further

A way of overcoming this bottleneck effect, or at least diminishing its impact, would be to stimulate the developers by adding a competition factor or by allowing a redistribution of forces. This can be done only when all the participants in the project are aware of the key variables in the equation. For example, say two activities have to finish before the process can continue. The developers involved in those activities, knowing that they are the key components at that moment of the project, will work as if the project were a personal target they must achieve.

Also, consider the overall project at a macro level. Developers with this kind of information would not only adopt the sub-tasks as personal targets, but would also know exactly to what extent their contribution will be useful in the end. This gives them a sense of belonging that further stimulates them to work towards the final goal. It is only by doing the small tasks flawlessly that the final result can be optimal.

6. Conclusion

In conclusion, although project management is primarily addressed to managers, helping them perform better and make the project optimal in terms of time and costs, it can also benefit the employees, who would then contribute even more to that optimality by stimulating the human factor in the process. This can be achieved only through a proper communication channel between coordinators and executors.

Hack and Slash

My recent experience with a horribly overdue project, which exhibited probably every software planning fallacy possible, gave me an interesting insight into getting a substantial part of it back on track. Some statements made in this blog post are specific to creating web applications; nevertheless, I would like to present it, since the way it relates to the grand scheme of designing and maintaining large-scale software is particularly interesting, especially when designing components.

The Problem

Vast numbers of software projects still fail (as in, go over time or budget), even though various software methodologies aim to address the problem of intangibility in software projects and increase the chance of success. Furthermore, even when applying the currently most hip methodology or the latest anything-driven development, we cannot estimate whether that improves our chances of delivering the project on time and within budget; there is no single silver bullet.

Just Hack (and Slash)!

The decision which allowed the mentioned project to get back on track was (surprisingly) to remove people associated with the project. The Mythical Man-Month stated that increasing the number of people increases the time and cost associated with a project, but claiming the opposite would surely seem preposterous.

Yet, after slashing (figuratively!) around half of the team, the project was re-evaluated. While keeping the same functional requirements, the code base was reduced. General, heavyweight components were replaced by smaller ones, allowing the behaviour of the application to be controlled not by tweaking and maintaining big components, but by connecting smaller components in a controlled way. Even though from the outside there is little difference, the effect on the source code was astonishing: the code base shrank to 20% of its original size.

Large, because it was… big?

Finally, the team started to get things done faster, even though it consisted of fewer people. The code base was smaller, meaning less maintenance was required; it also meant there were fewer things to test, so there was an additional increase in productivity, because implementing new, fully tested features required writing even less additional code and allowed focusing more on productive programming (fewer tests -> fewer new tests -> less code). In the aftermath, it turned out the original project had been suffering under its own size, which was caused by the use of large components, which in turn were required to allow a larger number of people to work on the project.


It is repeatedly stated that a high percentage of projects fail to be delivered on time and within budget, even though we, developers, have gained some insight into planning and executing such projects. We realise that certain decisions, like throwing people and money at a project in its later stages, are clearly pointless. However, maybe those relations hold in the opposite direction as well. One way to ensure that a large project will not fail would be to limit the size of the project in the first place; agile methodologies seem to address the problem in a similar way.

Finally, the idea of preventing a large project from failing by reducing its size seems fairly straightforward and not exactly innovative; so why did the result of doing so feel so surprising?

Behaviour Driven Development: Bridging the gap between stakeholders and designers


It is obvious that when a large system is being developed, no matter the domain it functions within, many things can go wrong. Different applications will focus on not getting things wrong in different aspects of the system. But the most common reason behind mistakes is the failure to communicate. On a smaller scale, this can occur within the developer team or between developers and testers. But the larger the project, the more people are involved and the more aspects of successful delivery they need to address. The tree swing metaphor is a very popular and elegant (and perhaps a bit exaggerated) way to demonstrate this. It might be impossible to ever achieve a harmonious communication system across all teams and stakeholders involved, but there is a way to somewhat bridge the gap between the two main actors in a system’s development: the client and the engineering team.

The classic tree swing metaphor

The Tester’s Problem: what do I test?

In an ideal world, the process of delivering requirements seems fairly simple and straightforward. Using a generalized Agile framework as an example to simplify the discussion, the process is supposed to follow these steps. The client dedicates a team to creating user stories that should cover the product’s requirements and look a little something like this: “As an X, given that Y, I want to be able to do Z”. The program manager of the development team receives the user stories and creates more specific requirements (use cases) for each of the stories, such as “Enable user authentication” or “Implement functionality Z”. The program manager then gets together with the developers and testers of the team at the beginning of the sprint and breaks down the use cases into very specific tasks like “Implement client-side authorization cookie”, “Write fuzz tests for user authentication”, etc. Each task is assigned to a developer or tester appropriately, and then everyone gets down to work. During the sprint the devs and testers mark their tasks as done, and at the beginning of the next sprint the process is repeated, depending on the work completed so far.

It all sounds nice and easy, right? Well, no. One of the first problems the engineering team will face is implementing the client’s user stories. Even though large corporate clients will usually have a team dedicated to defining the user stories appropriately, it is extremely common that some aspects are left out.

The problem is that clients usually think in plain English, just like most users will when using the service. Engineers and testers, on the other hand, think in code (hopefully). Using a streaming service as a more specific example, what happens when the client writes the user story “as a service subscriber I want to click on the poster for movie X and have it launch in the same window”? When the engineering team gets down to creating tasks, the following questions can be raised when it comes to testing the story:

  • What if the user has another window open playing movie Y? Should they be able to stream multiple movies at once?
  • What if the user’s subscription does not allow them to watch the movie? Should they be simply shown a “we’re sorry” message? Or should they be guided to a page that will allow them to extend their subscription?
  • If the user clicks “open in new tab” should the browser just launch the video player or the same page with the player embedded to allow deep linking?
  • If the user had started watching the movie earlier and closed the browser in the middle, should the movie start over or continue from where it was left off?

Questions like these might of course be covered by different user stories. But very often they’re not, and unanswered questions will block development while the program manager gets back to the client and waits for an answer. Besides the time cost, this can also cause frustration on both sides and harm the relationship with the client. The problem can also go the other way, when the engineering team attempts to explain their testing scenarios and the client does not understand them or does not find them useful.

Wouldn’t it be grand if communication of this type could be simplified?

Enter BDD: a middle-ground framework

During my year as a Software Development Engineer in Test at a large development team, we worked on a v1.0 streaming service that was partly tailored to the first client’s requirements. Problems like the one described above were more than common. This was the first time the team was working with an agile methodology and releasing a system that would follow the brave new world of continuous delivery instead of the conventional develop -> ship -> support timeline. So when trouble like this appeared, it really set the schedule back. I would often receive a testing task, write the automation code and then publish it for review. A developer would then tell me I should also test case X, which I had left out. I would publish the second version of my review, and a different developer would come along and comment that this is not a requirement and might change soon, so I should remove the test. That would lead to a lengthy discussion that started in the comment thread of the review, continued offline in the office until it reached a deadlock, and finally had to be raised to the PM. The rest is history. Until one day, a senior tester proposed adding BDD to our testing framework.

BDD is really just a fancier TDD framework, built to accommodate both the engineering and the client side. The difference is that the automated test methods are wrapped in a simple natural-language parser that can read test scenarios written in plain English. In the streaming service example, the client would be asked to write the story in the following format:

Given I am logged in to the service
When I click on a movie
Then the movie will open within the window

The engineering team then builds a framework where the code looks for the Given – When – Then keywords which trigger methods that setup the testing environment as described in Given, run the steps described in When and make assertions that are given in Then.

This description of course doesn’t solve anything on its own, since it has the exact same level of ambiguity as the original story. But when the engineering team spots a constraint that should be added to the testing scenarios, they can extend the language definition and give it back to the client. A more specific and correct example of a test scenario would be:

Given I am logged in to the service
and I have access to “all content”
and I am “not” currently streaming another movie
and I had started watching that movie earlier
When I “left-click” on “movie X”
Then the movie will open “within the window”
and the movie will play “from where I had left it off”

The and clauses simply run more setup methods before the test is run, and eventually make more assertions once the automation has finished. The code behind it will look something like this:

public class StreamingClientSteps
{
    private readonly ClientPage clientPage;

    [Given(@"I am logged in to the service")]
    public void GivenLoggedInUser()
    {
        // log a test user in to the service
    }

    [Given(@"I have access to '(.*)'")]
    public void GivenAvailableContent(string content)
    {
        // set the test user's subscription level
    }

    [Given(@"I am '(.*)' currently streaming another movie")]
    public void GivenCurrentlyStreaming(string streaming)
    {
        // start, or don't start, a second stream
    }

    [When(@"I '(.*)' on '(.*)'")]
    public void WhenClickOnMovie(string clickType, string movie)
    {
        clientPage.ClickMovie(clickType, movie);
    }

    [Then(@"the movie will open '(.*)'")]
    public void AssertMovieOpened(string requestedWindow)
    {
        // assert that the player opened in the requested window
    }

    [Then(@"the movie will play '(.*)'")]
    public void AssertMoviePlayedFrom(string time)
    {
        // assert the playback start position
    }
}
This is just a simplified example, and the definitions in the text might need to be more specific, but it shows how sentences can be reused to test different conditions.
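For intuition, the keyword-and-regex dispatch that such a framework performs can be sketched in a few lines. This is a toy illustration in Python, not the team’s actual C# framework; all names and patterns here are made up:

```python
import re

# Minimal sketch of step matching: patterns are registered against
# step sentences, with quoted parts captured as parameters.
steps = []

def step(pattern):
    def register(fn):
        steps.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'I am logged in to the service')
def logged_in(ctx):
    ctx["logged_in"] = True

@step(r'I "(left|right)-click" on "(.+)"')
def click(ctx, click_type, movie):
    # record which movie was opened (click_type is unused in this toy)
    ctx["opened"] = movie

def run_line(ctx, line):
    # Strip the Given/When/Then/and keyword, then dispatch on the pattern.
    text = line.split(maxsplit=1)[1]
    for pattern, fn in steps:
        match = pattern.fullmatch(text)
        if match:
            fn(ctx, *match.groups())
            return
    raise LookupError(f"no step defined for: {text}")

ctx = {}
run_line(ctx, 'Given I am logged in to the service')
run_line(ctx, 'When I "left-click" on "movie X"')
print(ctx)  # {'logged_in': True, 'opened': 'movie X'}
```

Real implementations such as Cucumber or SpecFlow work on the same principle, with proper scenario parsing, reporting and lifecycle hooks on top.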

Conclusion: Why it works

“So we spent a massive overhead to build this framework in order to run test methods that we would run anyway and the only difference is that they’re written in plain English. Why the hell did we do that?” a skeptic tester might ask. Since introducing the framework is a lengthy task, it needs to become a user story of its own and take some time from testers or developers until it’s implemented. In the long term though, here are some benefits that this framework will introduce.

  • It reduces the testers’ need for initiative. The testers simply implement methods every time a new condition, automation or assertion is introduced, without wondering what the test will actually do. When an untested case appears, they simply suggest an implementation and continue with it if accepted.
  • It helps the client identify potential problems. Once they start receiving untested conditions, the clients get accustomed to writing acceptance tests that cover every possible aspect of a situation, along with the assertions that need to be made.
  • It pushes for better code standards. The conversion from plain English forces the testers to write simple methods that only do one thing; that’s one of the basic criteria of “good code”, which is often not followed by testers since their code is not included in the product code.
  • The need for new code is gradually reduced. New conditions will of course keep arising, but once the main functionality testing has been implemented, hundreds of tests can be quickly created by simply changing values in the Given – When – Then clauses.

And last but not least:

  • It solves the communication problem. Not completely; there will always be some friction between engineers and stakeholders, originating from many stages of the development cycle. But when it comes to acceptance testing, using a common language removes the need for translation by the engineers, and vice versa. It makes clear to both sides what needs to be done.


  • http://dannorth.net/introducing-bdd/
  • http://cukes.info/
  • https://github.com/cucumber/gherkin/wiki
  • http://www.specflow.org/

Design Patterns from a Junior Developer perspective


Almost every software developer will be challenged with designing a software system at some point in their professional career. After designing many of them, it becomes obvious that, despite different problem domains, many of these systems have something in common: they have similar structures and objectives, and thus can be designed in a similar way.

Suppose you find yourself in a position of leading the implementation of a large project. How would you deal with designing it? Given that there have been countless systems implemented in the past, it is almost certain that somebody has already solved this problem for you (at least partially).

The first time I heard about design patterns was at a workshop organized by Google. During my year in industry at a financial company, I was encouraged by senior developers to practise design patterns while designing a small-scale system. So I bought myself the “Design Patterns Essentials” book and, with huge excitement and a “now I am going to be professional” attitude, tried to fit the patterns into every possible design problem I encountered. To my disappointment, after some time I realized that some of the functionality was so abstracted that it was no longer easy to understand, contradicting the widely known “make it simple” principle and leaving me baffled, thinking “I must have done something wrong here…”.

In this post I would like to use the opportunity to share my experience with design patterns and discuss the way a junior developer can learn how to use them appropriately.


What are Design Patterns?


Design Patterns are generic, reusable solutions to frequently occurring software design problems. They are usually categorized into groups such as creational, structural or behavioural patterns, making it easier to choose the appropriate one for a particular problem. The simplest example is the singleton pattern, which ensures the creation of only a single object of a class and provides a single point of access to it.

Suppose you have to implement a message-based, sequential communication system between two nodes. In order to follow the order of messages, each of them needs a unique sequence number, generated by adding 1 to the previous one. Assuming there is a sequence generator class, it is essential to make sure there is only one object of that class; otherwise there is a risk of having multiple numbering sequences. To prevent this, the constructor of the sequence generator class can be made private, and the instantiation of the single object can be done within a static method of that class, which manages the creation of, and access to, that object.
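A minimal sketch of that idea (the class and method names are my own; Python has no truly private constructors, so the guard in `__init__` is a convention rather than an enforcement):

```python
# Singleton sequence generator: one shared instance, one numbering sequence.
class SequenceGenerator:
    _instance = None

    def __init__(self):
        # Guard against direct construction once the singleton exists.
        if SequenceGenerator._instance is not None:
            raise RuntimeError("use SequenceGenerator.instance()")
        self._current = 0

    @classmethod
    def instance(cls):
        # Single static access point; creates the object lazily, once.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def next_number(self):
        self._current += 1
        return self._current

# Every caller shares the same numbering sequence.
a = SequenceGenerator.instance()
b = SequenceGenerator.instance()
print(a.next_number(), b.next_number(), a is b)  # 1 2 True
```

Because both nodes obtain numbers through the same instance, the sequence can never fork into two parallel numberings.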


When to use design patterns?


I think that, apart from knowing when a particular design pattern can be used, it is crucial to be able to tell whether it should be used at all. It is very hard for junior developers to resist the temptation of using patterns in every possible situation, ending up with a bunch of complex pattern classes instead of simple, straightforward code. This is not just messy code, but chaos at the structural level, which is much harder to correct. Even design pattern books, despite their aim to encourage people to use patterns, state that developers should strive to keep the code simple and not try to find a pattern for a design problem when a clearer, simpler solution can be used. The “You aren’t going to need it” principle of Extreme Programming can be applied to design patterns in the sense that an additional feature should not be added just for the sake of applying a design pattern. Design patterns are just another indispensable tool, and it’s the developer’s responsibility to use them appropriately.

A perfect example of when the use of a pattern might be beneficial is refactoring. It provides an opportunity to analyse the existing code and introduce a pattern that improves its structure.

How to learn when to use (appropriate) design patterns?


Most junior developers will probably go for the “code, code, code, make a mess, break something, feel the pain, fix it” approach. Learning from your own mistakes is great; however, it is not very applicable to large-scale, long-term projects, as the misuse of patterns might cause problems not only for the developer who used the pattern inappropriately, but for other team members as well. While working on a large-scale project, it is best to ask a more senior developer for advice before applying a pattern.




In this post I presented my experience with design patterns and described the general problems that might arise when starting to work with them. I think it is very important for junior developers (like myself) to remember that being a professional developer does not mean using fancy tools whenever possible, but rather choosing the best tool for the problem, which in most cases is called ‘simplicity’.





Tony Bevis, “Java Design Pattern Essentials”, Abilityfirst, 2010

Elisabeth Robson, Bert Bates, Kathy Sierra, “Head First Design Patterns”, O’Reilly Media, 2004

Agile Software Development in China

What is Agile Software Development?

Agile Software Development (ASD) is a set of software development methods based on iterative and incremental development [1]. It is designed to address the ever-changing requirements of users. Compared with non-agile methods, ASD emphasizes cooperation and communication between the programming team and domain specialists, frequent releases of new software versions, small self-organized teams, and coding and organizing methods that adapt to changing requirements.

The main values of ASD, announced in the Agile Manifesto, play a vital role in agile development processes; they are as follows:

  •   Individuals and interactions over Processes and tools
  •   Working software over Comprehensive documentation
  •   Customer collaboration over Contract negotiation
  •   Responding to change over Following a plan

That is, while there is value in the items on the right, we value the items on the left more.[2]


Figure 1. Process of Agile Software Development

Why do we use Agile Software Development?

Meeting the ever-changing requirements of users is a challenge for software development. The classical waterfall model performs acceptably within a single iteration cycle, but when requirements are amended it reveals its weakness. ASD uses iterations to meet these demands: the purpose of development during every iteration cycle is to deliver a usable, deployable system. Users test this system and generate as much feedback as possible. In the next iteration cycle, the advice on the existing version and the new requirements are implemented and integrated. In order to handle users’ new requirements and feedback, the iteration cycle is kept as short as possible [3].


Agile Software Development has the following advantages [4]:


1. Adapting to Change

The waterfall model generally plans a route from the start of production through to the requirements. With the passage of time and other external factors, users may find, once the destination is reached, that it is not what they want. The agile model, by contrast, turns the long-term run into many small sprints, which means the destination and the methodologies are modified as demands change.


2. Assured Quality

Because of the heavy workload caused by many iterations, the quality of ASD is sometimes doubted. Agile programming requires high quality in every iteration. Some agile methods, such as extreme programming, use test-driven development, which means writing test code before functional code, to ensure the quality of development.


3. Faster Development

ASD teams focus only on the most necessary and valuable parts at any moment, so individuals can concentrate on development sooner and the speed of the process increases. Additionally, owing to the short and frequent iteration cycles, team members can immerse themselves in a working state rapidly.

4. Higher Investment Return

During ASD, the most valuable parts are implemented and developed first, so clients gain a higher return on investment.

5. Efficient Self-organized Team

Every team member should be proactive and self-organized. Working in this kind of team enriches the experience of individuals; it also enhances technical, communication, social, expression and leadership skills.


Agile Software Development in China

Chinese software development groups have started using ASD and its methods only in recent years. Many companies and organizations do not use ASD, while many multinational companies, such as IBM and Sun, have adopted agile development. Meanwhile, some cutting-edge companies, especially Internet companies like Tencent, are spreading ASD. Overall, agile software development methods still remain at the early-adopter stage in China, for a variety of reasons.


Firstly, there are many software companies in China, but few of them are large-scale. The smaller companies do not implement standard development methods; instead, they arrange development processes and deal with problems according to the personal style of the leader. This pragmatism has no management cost and offers more freedom for team members, but its disadvantages are also apparent. First, the quality of the product and the efficiency of the process are unstable. Second, it is tough to build a cohesive, self-organized team; the organizing capacity of the company relies on the core person or leader remaining in place, otherwise the whole team needs to adapt to another development style. Third, pragmatic methods lack a mechanism for self-improvement. Nevertheless, such teams face fewer underlying hindrances than others precisely because they have no entrenched model: adopting ASD improves their efficiency and quality stability without heavy changes.


Secondly, after the financial crisis, many companies re-evaluated their daily expenses and running efficiency. Limited human resources have become the bottleneck of companies’ development, so it is essential for them to improve the utilization of existing resources. Moreover, China has seen a rapid expansion of infrastructure and facilities in recent years, while the corresponding management and services have not kept up with the hardware. Agile Software Development provides improvements in the quality and efficiency of software and its services, which are vital for Chinese software companies.


Thirdly, in terms of Chinese culture, some elements suit ASD well. The Agile Manifesto puts the cooperation of individuals before processes and tools, which is similar to a general principle of Chinese culture: individuals play an essential role in organizations, rather than processes and tools, especially in companies and government agencies. The positivity and creativity of a team will be stimulated if this element is discovered and applied [5]. In addition, ASD requires a concise, service-based management layer; project and company management in China is often not very professionalized, so it is simpler for a Chinese development team to transfer to ASD. Finally, some Chinese development teams prefer “cowboy coding”, a term for a go-as-you-please, unplanned coding style, which is closer to ASD than to the waterfall model or other formal coding styles. Overall, these special elements of Chinese culture and the features of its software industry provide natural conditions for the development and adoption of ASD.


In conclusion, although ASD is not yet widespread in China, it will drive improvements in the quality and efficiency of software development if Chinese companies concentrate on its implementation, localization, and further development. We have reason to believe that ASD will progress quickly.



[1] Beck, K., Beedle, M., Van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for agile software development.

[2] Beck, K., Beedle, M., Van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., & Thomas, D. (2001). Retrieved Feb 13th, 2014, from http://agilemanifesto.org/principles.html

[3] Aniyo Zhang (2009). 敏捷开发有什么好处 [What are the benefits of agile development?]. Retrieved Feb 13, 2014, from http://aniyo.iteye.com/blog/1567668

[4] Shore, J. (2007). The Art of Agile Development. O'Reilly Media, Inc.

[5] Ceschi, M., Sillitti, A., Succi, G., & De Panfilis, S. (2005). Project management in plan-based and agile companies. Software, IEEE, 22(3), 21-27.

The Growing Divide in the Patterns World: A Response

The article [1] was published in the July/August 2007 issue of IEEE Software. It discusses the implications of a public survey conducted by Microsoft's Patterns & Practices group in 2006, and pertains to the use of software patterns by software developers and architects outside the community of pattern experts.

At the core of the article is a perceived widening gap between the general and expert patterns communities, a perception that stems from the results of the survey. The authors were surprised by this, and worry that the gap may become unbridgeable.

But is the gap really widening, and if so, is this really so surprising?

The authors argue that in recent years, software patterns have increasingly been found for, and applied to, diverse domains (examples are telecommunications systems, and agile management), but that general practitioners equate patterns with a particular type: design patterns.

My first issue with the article is that the results they publish do not really seem to me to indicate a WIDENING gap. By their own statistics, pattern use has INCREASED in the ‘casual user’ community in the five years previous to the survey. Even if they restrict themselves to design patterns, this appears to still indicate a narrowing rather than a widening of the gap.

Arguing that a critical mass must be reached before the communication benefit of patterns is really seen within an organization, they point out that 68% of respondents believed no more than half of their colleagues used software patterns. The figure they use to illustrate this, however, suggests that around 20% of them thought that ‘about half’ used them. So you could equally make the point that around 55% of respondents estimated that around half OR MORE used them. Furthermore, they don’t say what the critical mass is, and so it is not clear that it has not, in fact, already been reached.



They find that, typically, a pattern user (not a patterns expert) only uses a tiny fragment of the patterns that are available, and assert that patterns not contained in the seminal 'Gang of Four' Design Patterns book are neither well known nor in widespread use. Again, this is precisely what I would expect. The book covered many Object-Oriented Design (OOD) patterns, and with the high prevalence, if not dominance, of OOD since its publication, it seems reasonable that these are precisely the patterns in frequent use. There is a reason the book is considered seminal, after all, as well as a reason for the original choice of these patterns.

The authors list an increase in the number of people writing their own patterns but then emphasize that fewer than half of respondents did so. Yet only five years ago, they say, ‘only a small fragment’ were doing so. Additionally, 70% of them thought that they would be writing their own patterns in future. They go on to argue that there is low adoption, but I don’t think the results that they publish support this stance. They do however make a case for low adoption of patterns that are not contained in Design Patterns.


 A further argument that the authors make is that their own initial thrill in seeing that people are finding and writing their own patterns was misguided. They state that people who do so get a return from it. But then they state that they believe, as a result of this survey – and the evident confusion of the survey group about patterns – that it is not really patterns, but one-off solutions that they are making, but in a pattern-like manner. Surely, if that were the case though, there would be no benefit and no return on the investment.

One issue that I do find well discussed in the article is that a large percentage of non-expert pattern users see patterns as templates for code generation. 61% of respondents took this view. The view of patterns as a means of communication or for visualization is far less common. Indeed, they find that 58% of this group believe that future growth in the area will be directed towards development tools, and not towards written publication of new patterns.

This is an issue I have recently come across when learning iOS programming. Building starter applications from a book in order to learn basic principles, I was repeatedly instructed to build 'empty' applications, or to remove 'boiler-plate code'; iOS makes heavy use of the Model-View-Controller pattern (in its modern variant, as discussed in lectures). The book's authors took care to point out that in order to learn the pattern, as well as its applicability, it was important not to simply let the development environment do the work. I am surprised by the survey's finding, as I had assumed that people did indeed use patterns as a tool for understanding, rather than merely for saving typing.

I understand that the authors have concerns about the expert community's capability to disseminate information about available patterns. There clearly are difficulties here: as they point out, there is no authoritative site that can be consulted to learn about the multitude of patterns that exist, and it is not easy to search for a pattern (or pattern sequence) and find something that will be helpful for any particular situation. It is not that I disagree with what they are saying, just that I feel they are painting an overly negative picture, given the statistics they have presented.

I also understand that it is a call to arms, to make it simpler and easier for non-experts to access information that will allow a wider community to use a more diverse range of patterns. I just wish their argument was more convincing!

It is clear from the article that there is a wide gap. They ask what is the reason behind this dichotomy between the patterns expert community, and general patterns user community. This to me seems obvious. It is the simple fact that one is a group of specialists, who would be expected to have knowledge of, and utilize, a wide range of patterns. The other is a group of people who cannot devote the same amount of time to the area, but apparently understand the importance of patterns. The fact that there is ‘significant’ adoption of patterns illustrates this.


[1] Manolescu, Dragos, Wojtek Kozaczynski, Ade Miller, and Jason Hogg. “The growing divide in the patterns world.” Software, IEEE 24, no. 4 (2007): 61-67.

[2] Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. Design patterns: elements of reusable object-oriented software. Pearson Education, 1994.




The Long Road to Continuous Delivery


All software project managers and their pets will tell you that the future of software development lies in Continuous Delivery. However, uptake of Continuous Delivery and the accompanying 'DevOps' culture is still far from universal in large-scale firms (a recent survey by Perforce found that only 28% of US and UK firms regularly use automated delivery systems across all their projects) [5]. Why is such a useful concept not already commonplace? By focusing on the prize at the end of the road, are we ignoring some of the barriers along the way?

Where are we going?

Continuous Delivery (CD) is often confused with Continuous Integration, Continuous Deployment, and 'DevOps', and is definitely in need of some clarification. Continuous Delivery is a software development methodology that attempts to automate the build, test, and deploy stages of the production pipeline. Unlike Continuous Integration alone, CD also incorporates automated testing, especially acceptance tests that exercise business logic. Continuous Deployment, in turn, is an extension of CD that takes the software and automatically releases it directly to users, which may not always be practical for enterprises [1].

‘DevOps’, on the other hand, is not a methodology or process. Rather, it is a philosophy forged in the interaction between development and operations teams. It spawned from a yearning to avoid long delays in delivery and software maintenance through more collaboration and knowledge-sharing between the two departments. Although only recently gaining popularity, the essential principles of DevOps have existed internally in many organizations, simply as a response to business demand and competitive pressure. In this regard, CD embodies the DevOps philosophy, and in turn DevOps is an impetus to adopt CD.

The benefits of Continuous Delivery should be clear to most readers. By releasing features and bug-fixes to a production-like staging environment rapidly and reliably, developers get immediate feedback on the readiness of a release candidate. Automated integration also keeps change sets relatively small, so when problems do occur, they tend to be less complex and easier to troubleshoot. At the business level, CD ensures that each updated version of the program is a potential new release. However, despite its myriad benefits, CD still faces numerous barriers to implementation.

It’s in the way you walk

One of the main barriers to CD is the culture of an organization. This is highlighted in the misalignment of incentives between development and operations teams.

Developers are measured against the quality and quantity of the software features they provide to users. Operations teams, on the other hand, are concerned with the stability of the system after it has been delivered. This creates a tug-of-war: the development team has little incentive to ensure stability or even to ease the workload for operations (lack of clear documentation is one example), and likewise the operations team cares little about the frequency of new releases. I believe this disparity stems from the level of business-IT interaction (42% of business leaders view their IT department as order-takers rather than partners in innovation [5]) and filters down through specialized requirements enforced by reporting structures and hierarchy within an organization. The solution is to view the software project holistically, as employing a single 'delivery team' rather than a string of component teams. This encourages team members to care about the effects of their work on the final release candidate, even if that was traditionally outside their 'job description'.

Duvall [3] describes some effective methods, such as training multi-skilled team members (e.g. with experience of infrastructure and databases) and building cross-functional teams. Amazon, for example, took this idea to heart in its "you build it, you run it" principle, trusting and encouraging developers to take the software all the way to production [9]. Then again, Rob England's damning review [4] of 'DevOps' claims that splitting developers by expertise is backed by the sound principle of economies of scale, and that 'DevOps' implicitly assumes an unrealistic level of proficiency and enthusiasm in all employees to collaborate without supervision.

Two feet, two different shoes

The second factor slowing progress toward continuous delivery is the diversity of tools and processes spawned across the pipeline, the lack of integration between these tools, and the lack of maturity in their usage.

Take the typical tool set used to implement a software pipeline: Jenkins is used to build and deploy against an environment, with Maven performing the build stage and Capistrano performing the deployment, so each environment is individually configured by different developers. When the software is delivered to operations, weeks later, AnthillPro is used to run the deployment requiring further manual configuration changes [3]. These overlapping configurations often lead to problems in one environment that cannot be mimicked in the other, late discovery of defects and horrendous confusion in finding the cause of the problem.

Businesses quoted two of the top five challenges to implementing CD as ‘integrating automation technologies’ and not having the skilled people and correct platforms in place [2]. 20% of development teams lacked maturity even in Source Code Management [5] and developers stressed that “tools were sometimes not mature enough for the task required of them”[6]. It seems that in some cases, adopting an automated system simply replaces frustration over bug-fixes with frustration over getting the modules of the automation system to work together.

To overcome part of this problem, organizations need a defined platform strategy and architecture. They will need to make hard decisions about which tools to master and which to abandon, though this is sure to spark dispute among teams in the company.
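One way to make such a strategy concrete, sketched below with placeholder stage commands (the actual build, test, and deploy commands would come from the organization's chosen tools), is to keep a single version-controlled pipeline definition that every environment executes in the same order, so that dev and ops configurations cannot drift apart:

```python
import subprocess

# One declarative pipeline shared by every environment. Because dev and
# ops both run exactly these stages in this order, there is no separate,
# hand-maintained deployment configuration to fall out of sync.
# The commands here are placeholders (echo), not real tool invocations.
PIPELINE = [
    ("build",  ["echo", "mvn package"]),
    ("test",   ["echo", "mvn verify"]),
    ("deploy", ["echo", "cap production deploy"]),
]

def run_pipeline(stages=PIPELINE):
    """Run each stage in order; fail fast on the first broken stage."""
    completed_stages = []
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"stage '{name}' failed")
        completed_stages.append(name)
    return completed_stages
```

The point of the sketch is not the tooling but the discipline: the pipeline definition lives in source control next to the code, so a change to the process is reviewed like any other change.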

The brick wall

For enterprises attempting to sprint down the road of CD, most run immediately into a wall: a large, monolithic enterprise system that is tightly coupled and heavily customized. Defining an automated process that copes with this complexity is a serious challenge, one most managers shy away from.

This is particularly evident when relational databases need to be incorporated into the CD process. Once the first version is in use, the coded assets (e.g. stored procedures), the domain data (seed data that supports an application), and the transactional data (business records) all become integral to the system. The database schema is then difficult to change without risking the loss of this valuable data. Although keeping SQL scripts under source control aids in automating database configuration, it does not address migrating data from existing systems and keeping it consistent. At the same time, CD demands production-like environments for testing, which results in each developer running a personal, isolated database. Despite the existence of tools to handle this problem (e.g. Red Gate's SQL Compare), none has become mature enough for widespread popularity, and keeping track of all these databases becomes a steep task [11].
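A common approach to the schema side of this problem, sketched here in simplified form (the function and data shapes are illustrative, not any specific migration tool's API), is to keep an ordered list of versioned migration scripts under source control and apply only those newer than the deployed schema:

```python
def apply_migrations(current_version, migrations):
    """Apply, in order, every migration newer than the schema's current
    version. Each migration is a (version, sql) pair kept in source
    control alongside the application code, so every environment can be
    brought to the same schema automatically."""
    applied = []
    for version, sql in sorted(migrations):
        if version > current_version:
            applied.append(version)   # in practice: execute sql against the DB
            current_version = version
    return current_version, applied
```

Because each environment records which version it is at, the same script set upgrades a fresh test database and a long-lived production database alike; what this sketch deliberately omits is the harder problem the article raises, migrating existing data.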

Waiting for the Green Light

Continuous Delivery and the 'DevOps' movement have also been heavily criticized for their apparent inability to scale with the number of developers and the size of the project. The main problems surround continuous integration and testing.

As the dimensions of the project increase, the code-base expands and commit frequency rises. As individual versions of the project take longer to compile, test, deploy, and deliver feedback, a 'bottleneck' forms in the pipeline. Because the team is forced to wait for broken-build fixes before committing their own changes, the skill of the individual developer impacts the performance of the team as a whole [8]. Once the build takes more than ten or fifteen minutes, developers stop paying attention to feedback and may be incentivized to branch and merge later, which undermines the very principle of continuous delivery.

As Wheeler [12] suggests, modularization of the code-base could be a solution to the problem, whereby different teams commit to different mainlines of independent components. Unfortunately, the extent to which you can modularize a project without overlaps depends on the project itself. Moreover, modularization brings its own set of challenges such as building interfaces and coordinating between teams, and with different modules advancing at a different pace, there tends to be a heavy reliance on integration testing.

This brings us to another pain point in CD: managing the speed of automated testing against its coverage. Automated tests are only as good as the test cases that underlie them, and may give a false sense of security about build quality. Getting wider coverage from unit and integration tests is good advice, but as the number of tests multiplies, so does the delivery time. For example, integration and acceptance tests that touch the database or require communication between modules are vital to ensure the system works as a whole and delivers business value, but they can take hours to complete in a full testing suite [11].
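One common mitigation is to run only a fast subset of the suite on every commit and defer the slow integration tests to a later pipeline stage. A minimal sketch of that idea, assuming per-test durations have been recorded from previous runs (the test names and budget are made up for illustration):

```python
def partition_tests(tests, budget_seconds):
    """Split a test suite into a fast set run on every commit and a slow
    set deferred to a later pipeline stage.

    `tests` is a list of (name, recorded_duration_seconds) pairs. Cheapest
    tests are packed into the commit-stage budget first; everything that
    does not fit runs later, trading immediacy of feedback for coverage."""
    fast, slow = [], []
    elapsed = 0.0
    for name, duration in sorted(tests, key=lambda t: t[1]):
        if elapsed + duration <= budget_seconds:
            fast.append(name)
            elapsed += duration
        else:
            slow.append(name)
    return fast, slow
```

The design choice this encodes is exactly the trade-off in the paragraph above: the commit stage stays under the ten-to-fifteen-minute attention threshold, while the expensive database-touching tests still run, just not on every commit.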

This issue can be handled somewhat with an architectural solution, using parallelization and throwing more resources into the mix (e.g. a dedicated machine for testing), but again this takes a larger initial investment of time and money. Other suggestions, such as load partitioning and parallel builds, are better solutions, but they too take time and expertise to develop, and a whole host of additional tools to juggle.


Despite the potential benefits of moving the pipeline process towards Continuous Delivery, there are certainly many painful barriers to overcome. Organizational issues and culture are the first to surmount, through better management practices, a change in philosophy, and a clear vision of what to achieve. Issues with tool integration and with efficient code-integration and testing suites should be approached at both the management and technical levels. However, there are signs that as more software-as-a-service providers develop under pressure from the CD movement, there will be less friction from the tools and infrastructure that support Continuous Delivery.

Perhaps it is incorrect to view Continuous Delivery as the end goal of a long road; rather, what is important is the journey toward CD. As Jez Humble [7] puts it: "Given your current situation, where does it hurt the most? Fix this problem guided by the principles of continuous delivery. Lather, rinse, repeat. If you have the bigger picture in mind, every step you take towards this goal yields significant benefits."




[1] Caum C. (2013). Continuous Delivery vs Continuous Deployment: What’s the Diff. Accessed on 11/2/2014 at  < http://puppetlabs.com/blog/ >

[2] DevOpsGuys (2013). Continuous Delivery Adoption Barriers. Accessed on 3/2/2014 at <http://blog.devopsguys.com/>

[3] Duvall P. (2012). Breaking down Barriers and reducing cycle times with devops and continuous delivery. Gigaom Pro. Accessed on 11/2/2014 at < www.stelligent.com/blog >

[4] England R. (2011). Why DevOps won’t change the world any time soon. Accessed on 10/2/2014 at < http://www.itskeptic.org >

[5] Evans Research Survey of Software Development Professionals (2014), “Continuous Delivery: The new normal for software development“, Perforce, accessed on 7/2/2014 < http://www.perforce.com/continuous-delivery-report>

[6] Forrester Consulting (2013). Continuous Delivery: A maturity assessment model. Thoughtworks

[7] Humble, J. (2013). Continuous Delivery. Accessed on 11/2/2014 (video) at <http://www.youtube.com/watch?v=skLJuksCRTw>

[8] Magennis T. (2007). Continuous Integration and Automated Builds at Enterprise Scale. Accessed on 9/2/2014 at< http://blog.aspiring-technology.com/ >

[9] Mccarty B. (2011). Amazon is a technology company. We just happen to do retail. Accessed on 10/2/2014 at <http://thenextweb.com/>

[10] Pais M. (2012). Is the Enterprise ready for DevOps. Accessed on 9/2/2014 at <http://www.infoq.com/articles/>

[11] Viewtier Systems (2012). Addressing performance problems of continuous integration. Accessed on 13/2/2014 at <http://www.viewtier.com>

[12] Wheeler W. (2012). Large-Scale continuous integration requires code modularity. Accessed on 5/2/2014 at < http://zkybase.org/blog >

A World Full Of Patterns


We live in a world that is full of patterns. And what is a pattern anyway? In general, a pattern can be described as an arrangement of repeated or corresponding parts. If we are talking about a design pattern in software, then it is a reusable solution to a commonly occurring problem [1], a template of sorts. So why do people try to find patterns at all? How are they useful? In the most general case, we strive to find patterns in order to tame the chaos that surrounds us, or perhaps because seeing a pattern helps us understand the inner workings of the universe. Who knows? Whatever the reason we look for them, patterns are a good way of understanding how things are constructed. If you think about it, we ourselves are an example of nature's patterns: every person is different, yet our species has key features, such as limbs and organs, that categorize us as human. And the main reason we want to see patterns is this: once you notice that something is not random, that some structure keeps recurring within it, you understand it better, and you can use that understanding to recreate it, or as a basis for creating something even better.

In this article I will not talk about patterns in that general sense though. Instead I will try to explain their importance in the context of software projects and how they can be helpful for a successful project.

Algorithmic Patterns:

First I will talk about algorithmic patterns. We have all come across an algorithm or two in our time, either programming one or applying it in a real-world scenario. An algorithm is basically a step-by-step execution of simpler tasks that together achieve a harder goal. If the algorithm is well known to your colleagues, there is no need to explain it to them in detail, and one of the most time-consuming tasks when working in a team is explaining why and how your code works. If it is an algorithm known by most, then that section of the project can be understood and maintained by others, and in a large-scale system, the more people who understand how everything works, the better.
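As a concrete illustration, a universally known algorithm such as binary search usually needs no accompanying explanation: naming it tells a colleague the precondition (a sorted input) and the behaviour. A minimal Python sketch:

```python
def binary_search(items, target):
    """Classic binary search over a sorted list.

    Returns the index of `target`, or -1 if it is absent. Because the
    algorithm is so widely known, the name alone is usually enough
    documentation for teammates maintaining this code."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # halve the search range each step
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Anyone reading a call to `binary_search` immediately knows both its cost (logarithmic) and its contract, which is exactly the communication saving described above.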

Programming Patterns:

It has been a long time since the first computers were invented, and since then many things that were previously done by hand have become automatic. And why not? The whole idea of technology is to make our lives easier. The more often we come across the same problem, the more familiar we are with its solution (if we have found one). So the idea is this: if you find something recurring more than twice, why not implement it once as a function or algorithm and reuse it when you come across it again, rather than solving it all over again? In small and large-scale systems alike, avoiding repetition makes the resulting system easier to understand, and hence easier to maintain, extend, and manage.
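As a small illustrative sketch (the function names here are hypothetical), extracting repeated logic into a single helper is the simplest form of this idea: the cleanup rule lives in one place instead of being duplicated at every call site.

```python
def normalize(name):
    """Shared helper: the trimming/casing rule lives in one place.
    Changing the rule later means changing exactly one function."""
    return name.strip().lower()

# Both callers reuse the single helper instead of repeating the cleanup.
def register_user(name, registry):
    registry.add(normalize(name))

def is_registered(name, registry):
    return normalize(name) in registry
```

If the normalization rule were copy-pasted into both functions, a later change (say, also collapsing internal whitespace) would have to be made, and tested, in every copy.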

Patterns in programming can describe the whole approach to solving a problem. These are design patterns, and they are essentially a way for several programmers to communicate in a shared language. Since everybody has their own way of doing things, it is often hard to write code in the same style; this comes down to different backgrounds or ways of thinking in general. In large-scale systems, where the work cannot be done by a single person and most parts are connected to each other, there is the issue of connecting those parts properly, which requires a lot of communication and effort. One way to reduce that effort is to implement the different parts of the system in a way that is easy for most people involved to understand, and one way to do that is to use design patterns well known to the different teams. In other words, design patterns become a kind of shared vocabulary. Such patterns can either be developed or agreed upon by all of the connected teams, or be so famous that everybody already knows how to use them. In both cases, using design patterns makes the documentation of the system and the communication between teams and team members much easier.
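To see how a pattern name works as shared vocabulary, here is a minimal sketch of the classic Observer (publish-subscribe) pattern in Python; saying "this class is the subject, those are its observers" tells any colleague familiar with the pattern how the pieces interact, without reading the implementation:

```python
class Publisher:
    """Minimal Observer pattern: the subject keeps a list of subscriber
    callbacks and notifies each of them when an event is published."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register an observer; it will be called with each event."""
        self._subscribers.append(callback)

    def publish(self, event):
        """Notify every registered observer of the event, in order."""
        for callback in self._subscribers:
            callback(event)
```

Two teams that both know this vocabulary can agree "the build service publishes, the dashboard subscribes" in one sentence, which is precisely the communication saving the paragraph above describes.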

Architectural Patterns:

Architectural patterns occur at a higher level than design patterns: they describe how a large system can be organized, for instance by being divided into smaller chunks or structured in an adaptive way. There are many examples, such as Layered Abstractions, Pipes and Filters, Blackboard, Model-View-Controller, Presentation-Abstraction-Control, Microkernel, and SOA. I will not go into detail on each of them; instead I will discuss the advantages and disadvantages of applying such architectural patterns in a large-scale system. The decision a software architect (or indeed a building architect) makes at the beginning about how the whole system or building will be structured has a huge impact on the project. Every choice has consequences, and every structure has advantages and disadvantages; the key is to choose the one that best fits the need. Using an already existing pattern as a foundation for your project can help you see issues you had not considered. Since the trade-offs of most of these patterns are known, if you implement one you know what to expect, and if its disadvantages do not matter in your case, then the architecture fits your problem well. Of course, the ideal is an architecture tailored to your specific problem, but if similar solutions and architectures exist, why not use them? There is no need to reinvent the wheel, and in most cases it is much easier to extend and modify an existing architecture than to start from scratch. Another advantage is that documentation becomes much easier, since the pattern already has documentation of its own. Most people leave documentation to a later stage, by which point some of the choices made may have been forgotten [9], leading to incomplete documentation. Documentation is well known to be very important in large-scale systems: it can serve as a manual for users, or as a means of communication between different teams within the project. Whatever its purpose, it is far more useful complete than incomplete with key things omitted. So one way to tackle the documentation problem is to use an existing architectural pattern and thus reduce the chance of omitting to record some decision. The disadvantages of architectural patterns differ from pattern to pattern, but as long as an architecture can be tailored to mitigate them, it is worth using as a base for your own.
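To make one of the named patterns concrete, here is a deliberately tiny sketch of Model-View-Controller in Python (the counter domain is invented purely for illustration): the model holds state, the view renders it, and the controller mediates between them, so each part can be changed or documented independently.

```python
class Model:
    """Holds application state; knows nothing about presentation."""
    def __init__(self):
        self.count = 0

class View:
    """Renders the model; knows nothing about how state changes."""
    def render(self, model):
        return f"Count: {model.count}"

class Controller:
    """Mediates: translates user actions into model updates,
    then asks the view to re-render."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def increment(self):
        self.model.count += 1
        return self.view.render(self.model)
```

The separation is the documented part of the pattern: a new team member told "this subsystem is MVC" already knows where state, rendering, and input handling live.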


Patterns exist everywhere, in the real world and in software alike. They provide a common ground that can be used for communication between different people, or to gain a better understanding of different problems. Either way, they provide great aid in developing, maintaining, and managing large-scale projects, which involve the need for communication, documentation, and overall good management.



[1] Wikipedia entry on Design Patterns

[2] Avoiding Repetition M. Fowler. IEEE Software, 18(1), 2001.

[3] Is This a Pattern? T. Winn, P. Calder. IEEE Software, 19(1):59-66, January/February, 2002.

[4] Past, Present, and Future Trends in Software Patterns. F. Buschmann, K. Henney, D.C. Schmidt. IEEE Software, 24(4):31-37, July, 2007.

[5] Design Patterns: Abstraction and Reuse of Object-Oriented Design. Erich Gamma, Richard Helm, Ralph E. Johnson, and John M. Vlissides. 1993. In Proceedings of the 7th European Conference on Object-Oriented Programming (ECOOP ’93), Oscar Nierstrasz (Ed.). Springer-Verlag, London, UK, UK, 406-431.

[6] The Growing Divide in the Patterns World. D. Manolescu, W. Kozaczynski, A. Miller, J. Hogg. IEEE Software, 24(4):61-67, July 2007.

[7] The pros and cons of adopting and applying design patterns in the real world. M.P. Cline. Communications of the ACM, 39(10):47-49, October, 1996.

[8] Design Patterns: Elements of Reusable Object-Oriented Software Gamma, Helm, Johnson, and Vlissides 1995, Addison-Wesley, ISBN 020163361X.

[9] Using Patterns to Capture Architectural Decisions. N.B. Harrison, P. Avgeriou, U. Zdun. IEEE Software, July/August 2007.

[10] Using Architectural Patterns and Blueprints for Service-Oriented Architecture. M. Stal. IEEE Software, March/April 2006.

Pair Programming: Will It Make You Hate Your Colleagues?



I recently interviewed for a company who believed in pair programming one hundred percent of the time on all projects. It got me thinking about how pair programming, on this scale, would impact me as a developer and whether it would actually be effective and useful. This blog post will explore and analyse the benefits and costs of pair programming one hundred percent of the time as a software development technique.

The Concept

Pair programming is an agile software development technique in which two developers work together at the same computer. The developer who actively implements the program is known as the driver, while the other is the observer. The observer's role is to continuously monitor the driver's work to identify syntactic errors, spelling mistakes, and so on. The observer is also responsible for steering the design of the project in the right direction. The two developers switch roles frequently.

There are three possible pairing permutations in pair programming: novice-novice, novice-expert, and expert-expert.

The novice-novice pairing variation has shown an increase in productivity in comparison to developers working on their own [3]. The novice-expert variation makes for a great training tool to introduce a new hire to a development language or framework that they are unfamiliar with [5]. The expert-expert is a powerful variation and can lead to a boost in productivity.

However, with each of these variations, studies and developers [5] have shown that pair programming is best for solving new, previously unseen problems. This suggests that pair programming one hundred percent of the time may not be appropriate.

Reaping the Benefits

The aim of pair programming is to improve code quality and efficiency while simultaneously providing an outlet for developers to learn from each other.

Studies have shown that even pair programming by two novices is more productive than a solo developer coding alone [3]. There are also definitive results clearly showing that code quality improves when pair programming is used; it has been shown to reduce defects by up to 15% [1]. Having the observer constantly review the code the driver is writing not only means more errors are caught immediately; constant review is also more efficient than reviewing only upon completion.

One of the nicest benefits of pair programming is team building. It encourages better communication and collaboration. Programming is often depicted as a solitary task and developers as shy, quiet people. Pair programming, however, forces developers to actively work together, which can lead to more productive teams, not just individuals. Most of the studies referenced below point this out [1][2][3].

However, one key factor evident from all these studies is that they do not seem to have considered what pair programming is like when carried out one hundred percent of the time, all day, every day. While it is true that developers do not code all day, there are a number of scenarios where pairing one hundred percent of the time may not be the best course of action.

100% Pair Programming: Yea or Nay

While empirical studies proclaim pair programming one of the best things to happen in software development, I am not convinced by them. The one factor they do not seem to take into account a great deal is the 'human' factor.

“In the name of (short term) productivity” – Mark Needham [5]

The implementation phase of a software project is not all about overcoming challenges. All projects have mundane and often trivial code that needs to be written to carry out the most basic of tasks. The problem is that the observer has to watch this dull, trivial process without being able to contribute anything useful. Their skills could have been put to better use implementing a critical piece of code, or another piece of trivial code that had yet to be written. Pairing one hundred percent of the time can therefore waste a developer’s time and talent, and can slow down development when there are many other pieces of code the observer could have been implementing instead. This argument still holds when the developers switch roles.
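To make the point concrete, consider the kind of routine code meant here. The class below is a hypothetical sketch (the names are invented purely for illustration): a plain record type whose accessors all follow the same one-line pattern, leaving an observer nothing meaningful to review.

```python
# Hypothetical example of the mundane, pattern-following code discussed
# above: a plain record class with repetitive accessors. Watching someone
# type a dozen of these offers an observer little to catch or learn.

class Customer:
    def __init__(self, name, email, phone):
        self._name = name
        self._email = email
        self._phone = phone

    # Each accessor is identical in shape; there is no design decision
    # here that benefits from a second pair of eyes.
    def get_name(self):
        return self._name

    def get_email(self):
        return self._email

    def get_phone(self):
        return self._phone
```

Code like this is necessary but mechanical; it is exactly the sort of task that could be handed to one half of the pair while the other implements something that genuinely warrants discussion.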

Pairing all the time is also a poor way for a developer to learn to use development tools. Watching the driver use various tools, or watching them figure out how to use them, is not educational. Some tasks need to be completed by the learner themselves to be fully understood. This is similar to a person claiming they can play basketball because they have watched a few games on TV.

In addition to this, there is a time for individual work, free from the overhead of pairing, particularly when carrying out mundane tasks.

“You don’t want to waste their time. You don’t want to argue (unless the other person wants to as well). You give in more often than if you were working alone.” – Mark Wilden [6]

Pair programming depends on the ability of two developers to successfully collaborate and work together. The studies below all assume that people will get along with each other the entire time. Perhaps this should be the case in a work environment, as professionals are expected to put aside their feelings, but realistically this is simply not the case. This is not to imply that they will resent each other (they might), but they may disagree a lot when it comes to minor design decisions. This can slow down the pace of implementation and, in some cases, lead to one developer believing that their opinion is not correct or worth taking into account. This can be quite demoralising, leading to feelings of resentment. It could be prevented if pair programming were not carried out constantly and only major design decisions were discussed, leaving the minor decisions to the coder’s discretion.

“Pair programming doesn’t encourage quiet reflection and exploration.” – Mark Wilden [6]

As a developer, getting a feel for the language and codebase is quite important when it comes to implementing designs. Just as children learn by trial and error, developers learn by exploring the code. Many innovative ideas can also stem from exploring the codebase: refactoring certain classes, for example. However, if a developer never gets enough time to work their way through the code or language, they can never truly get comfortable with it.

The Bottom Line

Like Wilden and Needham, I think pair programming should be used in moderation. In my own limited experience as a developer, I have faced days when my productivity was low. It could be argued that pairing all day would have helped on those days, but I don’t think it would have. Almost everyone with some work experience can relate to days when your brain does not seem to want to function properly: the mind doesn’t think straight, logic that was always second nature is baffling, and even words escape you. On such days a developer just wants to get through the day, and these unproductive days have always been compensated for with extremely productive, slightly longer working days. Had I been forced to pair program all day on such days, it most likely would not have resulted in feelings of hatred, but it may have led me to associate negative feelings with programming alongside my pairing partner. This is not a failing of the concept of pair programming alone (in fact, pairing in small increments may have helped) but of human nature.

Pair programming is incredibly beneficial when used in moderation, especially as a way to exchange knowledge. It can make a great coaching tool, and when used to solve a new, unseen problem it can help teams work together to design innovative solutions. However, pairing one hundred percent of the time should be approached with caution, to ensure that colleagues do not develop mutual feelings of annoyance towards one another.


[1] D. Winkler, M. Kitzler, C. Steindk, S. Biffl (2013) ‘Investigating the Impact of Experience and Solo/Pair Programming on Coding Efficiency: Results and Experiences from Coding Contests’, Agile Processes in Software Engineering and Extreme Programming [Online]. Available at: http://download.springer.com/static/pdf/742/chp%253A10.1007%252F978-3-642-38314-4_8.pdf?auth66=1392496570_5aef1fdad11e1c3331f4cd4351cbf951&ext=.pdf

[2] E. Arisholm, H. Gallis, T. Dybå, D.I.K. Sjøberg (2007) ‘Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise’, IEEE Transactions on Software Engineering, 33(2) [Online]. Available at: http://simula.no/research/se/publications/Arisholm.2006.2/simula_pdf_file

[3] K.M. Lui, K.C.C. Chan (2006) ‘Pair programming productivity: Novice–novice vs. expert–expert’ [Online]. Available at: http://www.cs.utexas.edu/users/mckinley/305j/pair-hcs-2006.pdf

[4] A. Cockburn, L. Williams (n.d.) ‘The Costs and Benefits of Pair Programming’ [Online]. Available at: http://collaboration.csc.ncsu.edu/laurie/Papers/XPSardinia.PDF

[5] M. Needham (2011) ‘Pair Programming: The disadvantages of 100% pairing’ [Online]. Available at: http://www.markhneedham.com/blog/2011/09/06/pair-programming-the-disadvantages-of-100-pairing/

[6] M. Wilden (2009) ‘Why I Don’t Like Pair Programming (and Why I Left Pivotal)’ [Online]. Available at: http://mwilden.blogspot.co.uk/2009/11/why-i-dont-like-pair-programming-and.html


Accepting Agile: Too Many Variables, No Single Formula Exists

When taught, we are given the impression that agile and plan-driven software development methodologies contradict each other completely. To my mind, however, this is not true, and the solution is to find a healthy balance of the two. This idea is conveyed in the article “Management Challenges to Implementing Agile Processes in Traditional Development Organizations” [1] by B. Boehm and R. Turner, which argues that it is difficult to integrate agile methodologies within companies with large legacy systems; the authors present many reasons and suggest possible solutions. While agreeing with the article’s claim, I was surprised by the wide range of challenges that need to be overcome, as you are forced to look at the big picture of the whole company. I would like to add my own observations from industry and highlight some of the reasons why I think there is no one clear approach to integrating agile methodologies within traditional companies, as there are too many different possibilities.

Restructuring: Starting from Scratch Not an Option

Agile software development principles are relatively recent and innovative compared to other methodologies, such as the textbook example of Waterfall. Hence, many significant software projects of successful firms have foundations that are directly incompatible with agile development: they are difficult to refactor and unfriendly to the short development cycle of an iteration. This, however, is an obvious incompatibility that needs to be handled, since the two software development models are simply different.

A less expected factor that influences the use of agile is the size of the team and the scale of the project. Agile is known to work on small projects, with a team of a few people working to solve problems, whereas legacy systems tend to be large, with a larger set of people with different responsibilities working on them. Agile requires close collaboration between team members, hence the tradition of scrum meetings. This may be difficult with large teams and may take more time than expected. It may also cause various logistical issues within a corporation if the teams are split across countries [2]. Hence, some sort of reorganization is needed, tailored to the company’s needs.

I worked in a team at a larger company that was trying to take up agile with a smaller spin-off project, yet the size of the team was still quite large. The team had representatives in three different time zones; however, after adjusting working hours, scrum meetings could be organized daily for the whole team to attend. Having said this, not every company can afford international calls or adjusted working hours, which once again means there is no single clear solution.

Planning: Always Expect The Unexpected

Plan-driven development is based on the idea that the original estimations and requirements will not undergo great changes. It is favoured in larger companies because it makes the business model easier to plan. Unfortunately, change is often unavoidable. Agile principles try to accommodate these changes by developing in short iterations, without any large future vision, in order to adapt to changes at short notice. This is a controversial topic: agile principles may lead to the success of the project by catering to the latest requirements, but to do so in larger firms a plan still has to be made in order to estimate scope, cost and length [3]. Also, multiple teams work on related products, and constant change would slow down the development cycle. A possible approach would be to plan ahead, make estimations and set a goal, and then organize sprint planning and development iterations to realise the original aim, making adjustments as needed.

Referring back to my experience in a larger firm, one simply cannot anticipate everything from the beginning. For instance, as I was working on my project, for which I had set myself clear goals on a strict timeline, I found out that there was a security issue with the original approach. This happened because the systems are large and different parts have different specifics. Obviously such an issue had to be addressed straight away, and supplementary modules needed to be built, which took time and changed the timeline.

Unfortunately, facilitating the acceptance of constant change is difficult, because it is hard to weigh whether a change is reasonable and timely against the time and cost needed to implement the alternative solution. Hence, the call has to be made on a per-project, or even per-change, basis.

Developers: Old Habits Die Hard

An astonishing discovery was the difficulty of making sure that software engineers understand and are willing to undergo changes. Traditionalist developers favour a more plan-oriented approach, believing it to be more predictable and safer, while younger developers are willing to avoid the “crushing weight of rushing bureaucracy” [3] and accept the fast pace of change in technology. It is evident that there is a difference between generations of developers. This matters because software is built by programmers, and their valuable experience with other methodologies cannot be ignored; people may find it difficult to adapt, or may be stubborn and unwilling to accept the changes.

Agile also requires a change in team dynamics and management. In agile teams, engineers tend to be “multi-taskers” rather than having a fixed role, which is what many are accustomed to, and merging the two ways of thinking may be difficult. The project manager has to be willing to accept that, within a more agile team, developers are more flexible and share tasks with respect to urgency and availability. It is their responsibility to face Human Resources Management when it comes to assigning “roles” [1].

In the team I worked with, everyone was willing to pursue the changes, but the concept of “roles” was still evident. To minimize this, regular deep-dive sessions were organized where employees shared their knowledge. It could be seen that every step was taken to accommodate agile within the traditional methods. However, this is only one solution, as there are too many variables to guarantee the success of such an approach in different environments.

Stakeholders: Raising the Bar

The relationship with stakeholders also needs to be taken into account, though it may seem less significant at first. In larger firms that are used to plan-driven development, the requirements are agreed with stakeholders at the beginning. Agile, however, raises the bar: stakeholders have to be much more involved and willing to collaborate. This is challenging for both sides, as stakeholders have to make themselves available even when they have other priorities, while developers have to be ready to receive feedback and may need to implement changes at short notice [4].

This is handled differently by corporations, as the stakeholders differ. I was lucky to work on an internal product, for which getting feedback from a stakeholder was as simple as picking up the phone and making a call. However, once again, this may not always be possible, and other arrangements have to be made.

To sum up…

At university we are taught about applying method A and applying method B on separate occasions. The realisation I have come to is that the key to success is learning to apply the best of both (A+B) with respect to the given situation. However, this is not easy, as there is a shockingly large number of things to consider when attempting to get the best out of two contradicting solutions. In this particular case one has to take into account the company’s existing software systems, the developers, the stakeholders, the business model, the team size, and the available budget. All the variables have to be weighed against the risks, and the best approach reached for each situation.

[1] Management Challenges to Implementing Agile Processes in Traditional Development Organizations. B. Boehm, R. Turner. IEEE Software, September/October 2005.

[2] Disadvantages of Agile Development. Kelly Waters, 2007. http://www.allaboutagile.com/disadvantages-of-agile-development/

[3] Get Ready for Agile Methods, with Care. B. Boehm. IEEE Computer 35(1):64–69, 2002.

[4] Agile Modelling: Overcoming Requirements Modelling Challenges. http://www.agilemodeling.com/essays/requirementsChallenges.htm