The right way to develop software in foreign countries

Motivation behind developing software abroad

As more countries open to foreign trade, firms gain access to a wide pool of opportunities to seek resources abroad. They can export products to new markets, establish relationships with foreign firms or move production to new locations. The trend is reflected in the industry: 50% of American Fortune 500 companies use offshore software development in their business[1]. Countries offer resources at different prices; for instance, developing the same software costs 50% less in India than in the US[2]. The savings come mainly from lower labour costs. Moving software production to another country can also give the firm access to a wider labour pool, and therefore to additional skills and expertise.

Software development business models

Software development can be either onshore, meaning it stays in the same country as the firm, or offshore, meaning it is carried out abroad. It can also be insourced, meaning the company that needs the software develops it itself, or outsourced, meaning the client contracts a vendor to do the development. Software development business models therefore fall into four categories: onshore insourcing, onshore outsourcing, offshore insourcing and offshore outsourcing. The complexity of conducting these business models is displayed in Figure 1.


Figure 1: Process complexity of different software development business models

Source: Prikladnicki, R. et al., 2007. “Distributed Software Development: Practices and challenges in different business strategies of offshoring and onshoring”. International Conference on Global Software Engineering (ICGSE 2007), pp. 262–274[3]

Challenges of offshore development

Companies that engage in offshore software outsourcing face challenges related to the cultures of the people involved. Because two nationalities with different organisational cultures must work together, there is a high chance of a clash where their work practices do not match. For instance, a German company that sends its development to India may find itself adjusting to a more relaxed attitude towards punctuality. If the company offshores to a country where language is an issue, it may suffer miscommunication problems, such as a requirements specification that the developer misunderstands. Also, because the company is entering a new partnership, it may initially face trust issues: the objectives of the developer may not be aligned with those of the client. The developer, for instance, may want to simply implement the specification and cash in, without regard for long-term code quality.

Offshoring development may lower the overall cost of production thanks to lower labour costs. However, there are many transaction costs associated with it, such as the cost of additional coordination, management and supervision of the offshore developer. The country’s infrastructure also has to be taken into account: if the educational institutions in a given country are weak, then the labour pool, even though cheaper, may not be suitable for employment or may require more investment in initial training. There is also the risk of miscommunication between the companies involved, which may mean the wrong product ends up being developed, leading to a potentially costly project failure. In a report, Gartner states that 50% of offshore software development projects fail because of the challenges they face[4].

Deciding on the right business model

Because of all the hurdles of global software outsourcing, it is best to relocate only highly structured work. The creative part, such as problem understanding, requirements engineering and specification writing, is best kept in house, whereas the “manual work”, such as highly structured programming, can be relocated abroad. This ensures that savings are achieved through workload relocation without negatively affecting product quality, as the client company retains control of the software specification and testing.

In his blog post “In Defense of Not-Invented-Here Syndrome”[5], Joel Spolsky arrives at a similar conclusion. He argues that the core of the business, where its competencies shine, has to be developed in house, whereas non-core, substitutable functions may be relocated elsewhere. Companies should not let go of their unique capabilities, but should not be afraid to relocate substitutable activities. For instance, if a company is great at requirements engineering but mediocre at programming, it should do the former in house and not be afraid to substitute the latter with more beneficial alternatives such as outsourcing. However, if the company is exceptional at programming, it should recognise this as its unique capability and not outsource it. This situation is illustrated in Figure 2 below.


Figure 2: Parties in Outsourced Software Development

Source: Batra, D. 2009. “Modified Agile Practices for Outsourced Software Projects” Communications of the ACM, vol. 52, no. 9, pp. 143–148[6]

Having interned at a web development vendor in India, I have seen many of our US and UK clients approach outsourcing incorrectly. They contracted my office to develop their core product, for instance the main web portal that would generate the client’s revenue. The specifications we received for these projects were not specific enough, which led us to make many design mistakes during development. For instance, we had to invent the database structure ourselves, only to later realise that we had not fully understood the client’s needs and had to redesign it. This was one of several instances where the client outsourced not only the programming but also parts of the requirements engineering, involving us in decision-making rather than just coding.

In my opinion, the decision whether to develop software abroad is context specific and depends on the nature and size of the project and on the capabilities of both the client and the vendor. If a company decides to develop abroad to gain cost advantages, it should first weigh the costs and benefits of both offshore insourcing and offshore outsourcing. With the first model, it will be able to control its production, but it has to understand the culture and infrastructure of the country it relocates to in order to avoid project failures related to international expansion risk. With offshore outsourcing, it can achieve higher cost savings, as it does not have to run a firm in a foreign environment, but at the expense of less control over product quality.

In my opinion, offshore insourcing is the better business model, as the company retains more control over product quality, which in software development is the most crucial competitive advantage a company can have. Offshore outsourcing may be more lucrative in terms of savings, but it is also much riskier, as the outcome of such a partnership is harder for the client company to control. In either scenario, before investing heavily in outsourcing, the company should first gain experience in handling it. To minimise risk it may want to start small, possibly with onshore outsourcing, see how the process develops, and then expand to offshore opportunities.


On the surface, seeking cheaper labour in foreign countries may seem like an appealing idea. However, looked at holistically, offshoring software development may introduce many problems and hidden costs that can turn the initiative into a loss. The decision to outsource is therefore context specific and depends on the project and the capabilities of the client, the vendor and the countries involved.


[1] Carmel, E and Agarwal, R. 2002. “The maturation of offshore sourcing of information technology” MIS Quarterly Executive, vol. 1, no. 2, pp. 65-78

[2] Carmel, E. 2003b. “The new software exporting nations: success factors” The Electronic Journal on Information Systems in Developing Countries, vol. 13, no. 4, pp. 1-12

[3] Prikladnicki, R. et al., 2007. “Distributed Software Development: Practices and challenges in different business strategies of offshoring and onshoring”. International Conference on Global Software Engineering (ICGSE 2007), pp. 262–274

[4] “Gartner Says Half Of Outsourcing Projects Doomed To Failure”. URL: Date Accessed: 10/03/2014

[5] “In Defense of Not-Invented-Here Syndrome” by Joel Spolsky. URL: Date Accessed: 10/03/2014

[6] Batra, D. 2009. “Modified Agile Practices for Outsourced Software Projects” Communications of the ACM, vol. 52, no. 9, pp. 143–148

May the source be with you

Open source software is a hot topic at the moment. More and more businesses and individuals are choosing open source products over more traditional proprietary ones. The advantages and disadvantages of open source have been well discussed and documented. This article will analyse some of the most frequently cited reasons for avoiding open source products from three points of view: the general public, businesses and the experts (aka computer programmers). Please understand that I am not claiming to be an expert myself, nor am I stating that all programmers should be considered experts. I am simply stating that the specific technical knowledge and overall computing skills acquired from working on software development projects give developers a unique standing in the debate.

First, some clarifications

Before discussing further, I would like to clarify exactly what is meant by the phrases “proprietary software” and “open source software”. Proprietary software is typically distributed to users for a fee under a licensing agreement which gives them the right to use the software if and only if they uphold a set of rather restrictive conditions. These conditions are there to prevent the user from committing heinous crimes such as modifying, sharing, studying, redistributing or reverse engineering the software. In addition, the source code is not made available. Basically, the people who distribute proprietary software want their product to be used exactly as it was intended, or else! [1]

In contrast, the source code for open source software is always made available. The software and its source code are provided to the user under a license agreement which gives them the right to modify, distribute and use the software for any purpose they want. It should be noted that open source software and free software are not exactly the same thing; however, for the purposes of this article we will assume that open source means free [2].

Open source can be difficult to use

One argument often brought up in the open source debate is that open source software can be difficult to work with, because it potentially requires a certain amount of technical expertise to operate [3].

First off, I would like to suggest that this argument is flawed. The technical knowledge required to work with a piece of software is influenced more by the nature of that particular software than by whether or not it is open source. In other words, it really depends on who the intended end user is. For example, a word processor is a piece of software that practically all computer users require; everybody will have used one at some point. As such, installing and operating a word processor requires minimal technical skill, even if you choose an open source one like OpenOffice Writer. In fact, from my own personal experience, open source word processors have been far less frustrating to work with than our old friend Microsoft Word. On the other hand, software aimed at a very niche market, such as command line tools for data analysis, will often require a good technical understanding to use, due to the technical nature of the service it provides.

I do accept that there will be some cases in which proprietary software will be more user friendly than its open source alternatives.

I need support!!

Another point commonly raised against the use of open source is that open source tools often lack a proper support network for their users [3]. This is true; however, just because there isn’t a traditional infrastructure for providing support does not mean that users are abandoned to figure things out on their own.
At the other end of a quick Google search there is a whole community of users and developers available to help with your queries. Often this support comes in the form of a forum. The advantage of this over a more traditional helpline or user manual is that it provides a channel for you to ask the people who actually wrote the software. Furthermore, these people will often be quick to respond, as open source developers typically take pride in the software they produce. Perhaps this pride stems from the fact that open source products are typically developed not for financial gain but simply to provide a solution to a problem. Surely this approach to support is preferable to waiting on hold for 15 minutes to speak to someone who most probably had nothing to do with the development of the software.

Well which should I choose then?

It depends on who you are and what you need.

If the end user in question is a computer-literate member of the public, then there is little reason to avoid open source. As long as the open source option in question is not totally impossible to work with, it will not take long for one person to get used to it. Additionally, the open source option gives them free access to a piece of software which is continually being improved by members of the community, rather than paying for software which will only be maintained until the next version is released, at which point you must pay yet more money for a new license if you wish to see continued improvements.

Alternatively, if our end user is a business, the decision becomes more complicated. Rolling out a piece of software which is potentially frustrating to use could have negative effects on productivity. Management must be sure that the savings achieved from choosing the open source version outweigh any potential losses from a drop in productivity. However, since the open source version is free, it would be easy to run a trial and see how well people adjust to it. If there is a general feeling that the open source version is good enough, then go with it.

What about these aforementioned ‘experts’? In my humble opinion, software developers have no excuse for neglecting to use open source software, especially if the reason is that the software is not very user friendly. All software developers will at some point have been through the excruciating task of trying to fix something that is not working for mysterious, incomprehensible reasons. My point is that overcoming the challenge of actually writing a piece of software is far more difficult than spending a short period of time learning how to use one. Furthermore, the beauty of choosing open source as a software developer is that you can take full advantage of the clause in the license agreement which gives you the right to modify your copy of the program. You can actually own software which does exactly what you need it to do. Surely this is better than the alternative: paying for software which does what someone else thinks you need it to do.


In conclusion, the decision to use open source software over proprietary software should not be made based on some bias towards a particular approach. Instead, it should be an informed decision made on a case by case basis. Open source is a great idea: it saves the user money and gives them greater flexibility, but this is not reason enough to disregard proprietary software every time there is an open source alternative. It is, however, reason enough to make sure you always at least consider open source as an option.



Happiness Optimization


The hiring process for a new employee costs about $4,588 in the US [1] or £5,311 in the UK [2]. For software developer roles the cost may be even higher. Moreover, when we add additional costs like salary, taxes, benefits, training and equipment, we end up with quite a large sum spent on a single employee.

Big companies and corporations try to lure the best candidates with a wide range of benefits and competitive salaries. In addition, each candidate is sifted through a complex and involved interview process so that only the best employees are chosen [4].

The only thing companies require in return is a productive employee generating profit…

Unfortunately they tend to forget the most important and obvious fact:

A happy developer is a productive developer

Or perhaps, of more relevance, is the negation of this sentence:

An unhappy developer is an unproductive developer

This article will discuss some practices which may lead to the increase of happiness and (effectively) productivity of software developers in a company. It is impossible to cover all of them in only one blog post, but I believe that these are the most important ones.


Build projects around motivated individuals.

Give them the environment and support they need,

and trust them to get the job done.

Agile Manifesto

The most important thing is to trust each and every new software developer in the company. We should think of them as artists or craftsmen doing creative work, and we should always give them room for problem solving.

Trust in your employees sometimes goes further than that. When the level of trust is high, programmers start to feel unique and energised. With many challenges to work on, we create space for growth and self-development. In addition, by giving them time to simply play, hack and work out potential solutions, we are more likely to get them into the most productive state of mind: flow [6].

The most interesting example of trust-building I have seen is the handbook Nordstrom gives its employees on their first day:

Welcome to Nordstrom

We’re glad to have you with our Company. Our number one goal is to provide outstanding customer service. Set both your personal and professional goals high. We have great confidence in your ability to achieve them. So our employee handbook is very simple.

 We have only one rule: Use good judgement in all situations.

Please feel free to ask your department manager, store manager, or division general manager any question at any time.

Tools and a Process

In addition to supporting the state of flow and hacking, we should always provide programmers with the best possible tools and adjust the whole software development process to their needs.

According to this discussion, we can split programmers into two groups.

The first group loves playing with their toys. They view creating software like playing in a sandbox, so providing them with new tools and equipment, and giving them the possibility to create “cool” things, means a lot to them. (If you doubt this claim, consider why programmers have spent hundreds of hours arguing over which text editor is best [7].)

Moreover, when a team is provided with inadequate tools, forced to use specific tools, or given only a limited number of licences, programmers start to feel like replaceable components. For example, in many financial organisations and bigger corporations you cannot install your own software without permission from management.

The second group loves delivering working solutions and is driven by a sense of accomplishment. They focus much more on the bigger picture and seek reward in the form of appreciation, or self-fulfilment from their achievements.

This is discussed very well in a RailsConf video by Chad Dickerson, CEO of Etsy [9]. He discussed the performance of factory workers and applied some of the ideas to software development:

“The traditional assembly line deprives the worker of satisfaction… by the confinement of the worker to one manipulation repeatedly and endlessly which denies the satisfaction of finishing a job.” [9]

He told an interesting story taken from a book [10], in which workers assembling aeroplane parts (modules like engines and wings) were dissatisfied and productivity was falling. The problem was resolved by organising meetings between the workers and pilots, with discussions about the parts the workers were building. Afterwards productivity rose, because the workers had been given a purpose and a sense of meaning.

“If companies really want their workers to produce, they should try to impart a sense of meaning – not just through vision statements but by allowing employees to feel sense of completion and ensuring that a job well done is acknowledged.” [9]

Lastly, I would like to point out that, no matter what, every programmer loves working with a certain type of equipment. They view it rather like a lucky pen, or a routine that is simply necessary. There are plenty of examples on programmers’ blogs discussing the equipment and tools they use at a new company [11].

Working Hours

One of the most obvious practices, yet one often forgotten by employers, is allowing programmers to work no more than 40 hours a week. There are three important reasons for this.

First of all, programmers often work on creative problems. To solve them they need the full firepower of their brains and a different way of seeing things, and a perfect design rarely emerges in a single day, no matter how many hours are spent on it.

Secondly, when you work for too long, you start to make more mistakes. Instead of producing good code, you may be injecting bugs which are difficult to find and not easy to repair.

Lastly, it is about knowing what a team can produce at a sustainable pace without depleting its energy. Instead of working 80 hours a week for a month before a release and crashing afterwards, it is better to work 40 hours a week for two months. This approach leads to much better results and higher team morale [12].

Other sources also support the 40h/week regime:

From The Art of Agile Development [15]:

“When you make more mistakes than progress, it’s time to take a break. If you’re like me, that’s the hardest time to stop. I feel like the solution is just around the corner—even if it’s been just around the corner for the last 45 minutes—and I don’t want to stop until I find it. That’s why it’s helpful for someone else to remind me to stop. After a break or a good night’s sleep, I usually see my mistake right away.”

 From [13]:

“1. Improve focus.  How many people can claim they have 100% focus every day, 40 hours a week?  Focus is absolutely critical in building quality software and it’s something we should absolutely optimize for.

2. By giving your employees a little more time for themselves they’re able to take care of personal errands during off hours.  And that’s exactly how it should be anyway.”

Eat Together

Something about sharing meals breaks down barriers and fosters team cohesiveness. Try providing a free meal once per week. If you have the meal brought into the office, set a table and serve the food family-style to prevent people from taking the food back to their desks. If you go to a restaurant, ask for a single long table rather than separate tables.

The Art of Agile Development, Supporting Energized Work p. 79-80 [15]

There are plenty of articles on the web about the benefits of eating dinner with your family [14]. Some of them apply to team members too. I would personally say that there is nothing in the world that boosts team morale, cohesion and spirit like eating lunch together.

Let’s look at this from another angle: good communication and integration within a team are extremely important; after all, the team is developing software together. Rather than letting most “socialisation” take place during mandatory meetings over a conference table, it is much better to build relationships over the lunch table.

However, this is about more than eating together. Giving your employees a nice, comfortable place to eat and rest is extremely important. Every human being needs a break from time to time, so having a space designed especially for that is beneficial, and this is probably why we see the colourful offices of Google, Facebook and Amazon.

I would say we should go even further than that. Providing employees with free lunches and on-site canteens can tighten bonds even more: there would be a place where your employees can eat, chat and have fun together. In addition, free lunches save the time spent going out for takeaways [11].


There are many examples of good techniques and practices which can boost productivity. Some of them are obvious (yet a great majority of employers still do not follow them); others are vague, and it is difficult to see an explicit connection to productivity.

The aforementioned practices may have a much bigger impact on programmers’ performance than increasing salaries, which does not always boost productivity, and even when it does, the effect is usually short-lived. Creating an energised atmosphere and attending to developers’ needs will help to improve their productivity, design and code quality.

This may be why most programmers would rather work for Google than for banks, even if the salary at the former were lower.


[1] Hiring an Employee: How Much Does It Cost?  –


[3] How Much Does An Employee Cost? –

[4] Cracking the Coding Interview: 150 Programming Questions and Solutions

[5] On Developer Happiness and Productivity –

[6] The Flow – Programming in ecstasy

[7] Editor War –

[8] Respect People: Trust Them to Use good Judgement –

[9] Optimize for Developer Happiness at Etsy –

[10] Concept of the Corporation, Peter F. Drucker

[11] Price of Developer Happiness –

[12] What Silicon Valley Developers Can Teach Us About Happiness At Work –

[13] Optimizing for Happiness in Software Development –

[14] The Benefits of Eating Together –

[15] Shore, James. The art of agile development. ” O’Reilly Media, Inc.”, 2007.

Patterning with Care – The death of the Singleton

Very often patterns become so ingrained in a programmer’s thinking that they fail to question why the patterns exist in the first place. Like a language feature that gets slowly phased out as people realize the problems inherent in its design, patterns are simply conventions of thought that should be continually criticized. Adderio et al. [1] describe this best by pointing out that “Patterns are only a reduced, abstracted, subjectively filtered version of someone else’s knowledge and experience”. Often, without a critical filter, students gratuitously apply common patterns, believing they are best practices in all cases.

The idea for this article came from a group project in which a team member insisted on using singletons to solve a particular problem. Having never explicitly implemented singletons myself, I did some research and stumbled on an age-old debate over their usage. I became convinced that singletons are not necessary and can even be detrimental in the long run.

Though the following sections focus on why the Singleton pattern is completely unnecessary in most programs, they serve a larger point about the common belief that design patterns provide solutions rather than just suggestions. Although singletons have fallen out of fashion, I want to resurface the debate because it provides a cautionary tale about the blind adoption of patterns.

The Singleton

The Singleton pattern was most famously explained in the seminal book ‘Design Patterns: Elements of Reusable Object-Oriented Software’ [5]. Outside of this text, it is commonly explained in a one-liner: a singleton is “a class with at most one instance and with a global point of access”.

The example below (in Java) gives the clearest representation of the concept (NB: there are safer and more efficient ways to create singletons, using enums and static inner classes).
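A minimal lazy-initialisation sketch follows; the class name AppConfig is purely illustrative:

```java
// Classic lazy-initialisation singleton: the class itself enforces
// that at most one instance can ever exist.
class AppConfig {
    // The single shared instance, created on first use.
    private static AppConfig instance;

    // A private constructor prevents instantiation from outside the class.
    private AppConfig() { }

    // The global point of access; synchronized for basic thread safety.
    public static synchronized AppConfig getInstance() {
        if (instance == null) {
            instance = new AppConfig();
        }
        return instance;
    }
}
```

Every caller of `getInstance()` receives a reference to the same object, which is exactly the “single instantiation plus global access” combination discussed below.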


I believe the original intent of the singleton was to avoid expensive and unnecessary duplication by guaranteeing single instantiation in the class itself. However, as with most misunderstood patterns, the purpose was lost in translation and people began using singletons to replace global variables [6].

However, because of the ‘single point of access’, singletons do not avoid any of the dangers inherent in using globals. The first of these is an unwitting state change, where the global or singleton is modified in one section of code while another function assumes it has not changed. Multiple references to the same object are another issue stemming from the same idea, where a local variable name may mask a reference to the global (e.g. when it is passed as a parameter). Thirdly, singletons work against the principle of modularity, since classes are tied together by dependencies on the global or singleton and can no longer be reused in other programs. Furthermore, the dependencies in the design can no longer be deduced from the interfaces of the classes and methods, which can lead to confusing bugs as the program grows in complexity. The drawback also trickles through to unit testing, since tight coupling to the environment makes the use of mock objects difficult without modifying the code.

Consider the Alternatives

Considering why singletons exist in the first place, it is perfectly reasonable to require a single store for data in a program and for different sections of code to access the same data. However, the Singleton pattern is not the best way of achieving this.

Let’s separate the two goals of a singleton: ‘single instantiation’ and providing ‘a global point of access’.

As it is usually implemented (as in the example above), the single-instance feature is defined as a behavior of the class itself. However, in most cases having exactly one instance of the class is a requirement of the application. Implementing a singleton in these cases fundamentally contradicts the ‘single responsibility principle’ of OOP, whereby a class should only worry about the one business function it was created to perform. Put another way, it should have ‘only one reason to change’ [7], whereas a singleton changes if the behavior of the class needs to be modified or if we later discover that multiple instantiations may be necessary. For example, singletons most commonly still appear in the design of logger classes, to provide global access to a log file without the expense of creating and closing new file-access objects [8]. The intent is sound, but there is nothing inherent about a logger class that says only one can exist. In fact, it is conceivable that a different application will want to reuse the logger class to instantiate two types of logs. So why not give the responsibility for instantiation to the application and essentially reduce the singleton to a normal class? This makes far more sense to me.
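To illustrate, here is a hypothetical logger reduced to a normal class (logging to an in-memory buffer to keep the sketch self-contained); nothing in the class forbids multiple instances, so the application decides how many logs it needs:

```java
// A logger as a plain class: single instantiation is the application's
// decision, not a property baked into the class itself.
class Logger {
    private final String name;
    private final StringBuilder buffer = new StringBuilder();

    Logger(String name) { this.name = name; }

    // Append a tagged line to this log.
    void log(String message) {
        buffer.append('[').append(name).append("] ").append(message).append('\n');
    }

    // Read back everything logged so far.
    String contents() { return buffer.toString(); }
}
```

An application that only ever needs one log simply creates one `Logger`; another is free to create separate error and access logs from the very same class.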

The ‘global point of access’ can be implemented simply by using a globally accessible method that passes an instance of the class to whatever code requests it. Although the problems of global data structures still persist, you can now choose when a global point of access is appropriate for a particular program, rather than it being enforced by a singleton class. For example, we could implement a builder object (i.e. a factory pattern) that either creates one instance and reuses it for every function that requests it, or creates new instances and passes those to requesting functions. By encapsulating creation in another class, we have also separated the responsibility of global access from the actual behavior of the former singleton class.
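As a sketch of this idea (the class names are hypothetical), a small factory can own the reuse policy while the created class stays an ordinary class:

```java
// An ordinary class with no knowledge of how many instances of it exist.
class Connection { }

// The factory owns the 'single shared instance' policy, keeping that
// responsibility out of the Connection class itself.
class ConnectionFactory {
    private static Connection shared;

    // Reuse one instance across the whole application...
    static synchronized Connection sharedInstance() {
        if (shared == null) {
            shared = new Connection();
        }
        return shared;
    }

    // ...or hand out a fresh instance when isolation is needed.
    static Connection freshInstance() {
        return new Connection();
    }
}
```

A program that wants singleton-like behaviour calls `sharedInstance()` everywhere; a program (or test) that needs independent objects calls `freshInstance()`, with no change to `Connection`.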

Even when a singleton pattern seems like the best option, it’s important to understand why, and to consider the alternatives available. Usually, when multiple instances of the class would break the program, it’s because the class contains properties that shouldn’t be duplicated. Instead of making the whole class a singleton, make those properties static to the class and thus invariant across new instantiations. Using private static properties with public getters and setters, which can also be made thread safe, we can mimic the intention of a singleton without compromising on the points made above.
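A sketch of that alternative in Python (the `Config` name and lock-based accessors are assumptions of mine, not from any particular library): instances stay cheap and ordinary, while the state that must not be duplicated lives at class level behind thread-safe accessors.

```python
import threading

class Config:
    """Freely instantiable class; the non-duplicable state is static
    (class-level), so every instance sees the same values."""
    _lock = threading.Lock()
    _settings = {}

    @classmethod
    def set(cls, key, value):
        with cls._lock:            # thread-safe setter
            cls._settings[key] = value

    @classmethod
    def get(cls, key, default=None):
        with cls._lock:            # thread-safe getter
            return cls._settings.get(key, default)

# Two distinct instances, one shared state: the singleton's
# guarantee without forbidding instantiation.
a, b = Config(), Config()
Config.set("mode", "debug")
assert a.get("mode") == b.get("mode") == "debug"
```

This is essentially the monostate idea [4]: callers may create as many instances as they like, and none of them can accidentally duplicate the protected state.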


After a surge of popularity, the singleton pattern was slowly phased out of best practices as developers realized its problems and considered alternatives. In the case of my project, after a little convincing we decided to use a normal class and a factory, because they would be more flexible and save headaches down the line.

The simple patterns that developers learn first are usually the ones least likely to be questioned as they gain more experience. However, it’s imperative that the idea and the logic behind a pattern are understood well enough that we avoid abusing it and look to improve its design.



[1] Adderio L., Dewar R., Stevens A., Lloyd A. (2002), “Has the Pattern Emperor any Clothes? A controversy in three acts”, accessed on 10/3/2014 at <>

[2] Geary D. (2011), “Simply Singleton”, accessed on 9/3/2014 at <>

[3] Deru (2010), “Why Singletons are Controversial”, accessed on 11/03/2014 at <>

[4] Etheredge J. (2008), “The Monostate Pattern”, accessed on 12/03/2014 at <>

[5] Gamma E., Helm R., Johnson R., Vlissides J. (1994), “Design Patterns: Elements of Reusable Object-Oriented Software”, Addison-Wesley

[6] Miller A. (2007), “Patterns I hate”, accessed on 12/3/2014 at <>

[7] Martin R. (2002), “Agile Software Development, Principles, Patterns, and Practices”, Prentice Hall

[8] OODesign Guide (2010), accessed on 10/3/2014 at <>

The Law of the Jungle in Software Evolution

The law of the jungle is becoming more and more relevant to the field of software development. To stay competitive, projects constantly need to adapt to changing requirements, changing expectations and changing environments. Failure to do so means that the product will soon be forgotten and substituted with a competitor’s product. I will briefly discuss the stages a project goes through (as described by the Stage Model) and comment on the dynamics of the jungle that the ICT sector has become.

The Software Jungle

Imagine that software projects are species of flowers. Naturally, each species aims to reproduce as much as possible in order to survive. Flowers reproduce by attracting bees (users) with distinctive colors (UI/responsiveness) and supplying them with nectar (content). The bees in return distribute the pollen and everyone is happy.

Let us journey through the hardships that flower X can face in this software jungle.
Initially, flower X occupies a field and lives happily in symbiosis with its users. But then the bees start getting fat from all the nectar, and the flower needs to grow bigger nectar holes. Out of nowhere, some monkeys come along and start eating the flowers, so flower X needs to start reproducing vegetatively as well. Sometimes a hurricane carries some seeds to a far-off field, and flower X has to adapt to the new soil (like an OS or a changed API). It also needs some mechanism for recovering from stampedes, like new restrictive government regulations or hackers. And on top of that, another flower starts invading the field. Maybe it has brighter colors, maybe it has sweeter nectar, but for some reason the bees prefer the new flower and start avoiding flower X. If it does not adapt, flower X will suffer a great blow to its reproductive capabilities and will be well on its way to becoming extinct.

The Cycle of Life: Stage Model

According to the stage model, software projects go through five stages: Initial development, Evolution, Servicing, Phase-out, and Close-down. This section provides a concise description of each stage, illustrated with examples.

  1. Initial development – engineers build the software from scratch to satisfy the initial requirements. This stage is important not only because it specifies the software architecture, which will be vital for implementing changes later, but also because during this stage the members of the software team gain expertise in the field of the project.
    In practice, with the advent of Agile methodologies, the Initial Development stage gets shorter and shorter, and the majority of the development is done in the following stages.

  2. Evolution stage – This is the stage when iterative changes, modifications, additions and deletions to functionality occur. Evolution is triggered by customer demands, competitive pressure and sometimes legislative changes (for example, the upcoming changes in Data Privacy regulation).
    Note that the product need not be released when the evolution stage begins. The release date could be after several internal iterations addressing defects. Also, the system could be released in an alpha or a beta state before the final release.
    This stage is where we find the known-and-loved web giants like Google, Facebook, Spotify, etc. They are living their own Golden Ages – they have amassed a considerable user base and are constantly adding new functionality to stay competitive and address the demands of their users. However, history has shown that Golden Ages come to an end.

  3. Servicing – every project can only evolve for a limited amount of time. Evolution stops when the architecture of the project can no longer support new additions of functionality, or when staff changes leave the development team without the necessary experience. Changes in this stage are hard and expensive, and often push the project even deeper into the Servicing stage.
    Examples of projects in this stage are dying out products like MySpace or ICQ (personal opinion).

  4. Phase-out – No more changes are being made, but the service is still available.
    Examples here include older games like Starcraft, Warcraft III, The Sims, etc., file-sharing programs like Kazaa, and the upcoming end of support for Windows XP.

  5. Closedown – The service is shut down and users may be directed towards a replacement. For example, when a new MS Office comes out, an older version may be abandoned.


Software projects need to stay in the evolution stage in order to grow and stay successful. Unfortunately, Mother Software is not like Mother Nature, and it does not have a built-in evolution mechanism. It is up to the software team to manage the evolution of the product by iterative refactoring, restructuring and the addition of new functionality. “Survival of the fittest” is an accurate description of the competitive software market. Inevitably, all products will topple under their own weight and will be replaced by a newer service more suited to the recent changes in the dynamics of the market.
The decline of a software project is something natural and it is something that needs to be planned for.

Our new Constitution is now established, and has an appearance that promises permanency; but in this world nothing can be said to be certain, except death and taxes.

—Benjamin Franklin, in a letter to Jean-Baptiste Leroy, 1789

In the early days of Google Books, the founders of Google were overseeing the scanning process of a university library that had signed up for the service. However, their collaborator had gone rather silent, and when asked what was the matter (as he later told the Observer), he replied:

I’m wondering what happens to all this stuff when Google no longer exists.

I’ve never seen two young people looking so stunned: the idea that Google might not exist one day had never crossed their minds.

—Unnamed librarian

Indeed, we need to address the question: what will happen when Google is no longer around? If there is no plan for the migration of data, will the decline of Google equate to the burning of the Library of Alexandria? And if Facebook closes down, will you be able to show all your selfies to your grandchildren? You should start making backups now!

My claim is that sooner or later projects will fall prey to their competitors, but that does not mean that their users should suffer and lose their data. Even now books bought from the Apple store are incompatible with Kindle devices. There should be some kind of standard that service providers should be made to adhere to. The main problem is that such a standard will impose great limitations on developers and the evolution process. So, I’m asking you: should we slow down to make sure we’re going the right way?


The software market is a jungle of inter-connected products, where most of them have rivals who will take every opportunity to steal their users. This rivalry leads to evolution and expansion of the service, until such evolution is no longer possible. On one hand, this evolution is great because it drives progress. On the other hand, as a community we need to ask whether this unmanaged growth of the whole system will lead to some great loss of data or knowledge in the future. Are the things we abandon total garbage?

Further reading

Software Lifetime and its Evolution Process over Generations
Even Google won’t be around forever
The Stage Model

Management at Valve, as seen through the Valve Employee Handbook


The Valve Corporation is a Bellevue, Washington-based company known for its award-winning Half-Life, Portal, and Team Fortress games, as well as the extremely popular digital distribution service and multiplayer framework known as Steam. Valve employs a unique management structure based on little to no formal hierarchy, emergent order, and peer relations. Yet this structure demands an extremely specific type of employee, making hiring and firing a complex and burdensome affair. Despite that, it has amply demonstrated its enormous advantages, and should be examined by more software developers for good practices to emulate.

Exploring the Valve Employee Handbook

In 2012, Valve deliberately made its handbook for new employees public by uploading it to the Web. The company had long been secretive about its internal structure, with the best look inside coming from Michael Abrash’s blog post on his first days at Valve. Valve’s handbook is written in a casual and quirky style, yet it provides a fascinating look into the structure and day-to-day operations of one of the world’s most successful game developers.

Valve is a company with a totally flat hierarchy. While the company does have an owner, the CEO Gabe Newell, he takes pains to avoid interfering in the day-to-day management or even larger decisions. There are no managers, no centralized plans, and little in the way of formal positions. All employees are free to do whatever projects they find most interesting, and to form or dissolve groups as they see fit to pursue these projects.


Valve refers to its internal project groups as “cabals”, but these groups are not created by managers or other dedicated decision makers. Rather, they form spontaneously by employees agreeing to accomplish a common goal – often when they observe another employee working on an interesting project. Cabals are non-compulsory, and employees are encouraged to join and leave them as they see fit in order to provide the best benefit to the company and themselves.

Within cabals, employees can gain some level of management authority by adopting that role within the group, but the position is both non-explicit and temporary. An employee who was a manager in one cabal might end up being a programmer in the next, depending on his skills and the project’s needs. In the end, the most important thing is to make productive use of your own skillset in whatever way seems best.

Peer Evaluation, Ownership, and Funding

Valve uses a stack ranking system inherited from Microsoft, in which employees periodically give performance evaluations of their teammates, and compensation is adjusted accordingly. These are then fed into a company-wide system, which determines the allocation of funds across projects. Disputes are adjudicated on a case-by-case basis, and are usually settled amicably. These evaluations are supplemented by repeated, anonymous peer reviews of each employee’s progress, as well as a culture which encourages repeatedly polling your peers for feedback on progress and ways to improve.

Valve is wholly owned by Gabe Newell, with no external capital to make demands on the company for higher returns, and Newell has demonstrated a repeated ability to avoid dictating the company’s direction from above. Valve also possesses a constant and huge revenue stream in the form of Steam, the dominant digital distribution system for PC games, which allows it to explore a variety of projects in its own time without fear of running low on funds.

Hiring and Firing without Hierarchy

Valve repeatedly emphasizes throughout their handbook the importance of hiring. Hiring is noted as the most important function in the company – and one that every employee takes part in. After all, with no dedicated jobs (like hiring manager) or managers (of departments like HR), there’s no particular group who could be relied on to hire new employees. Due to its unusual nature, Valve places extremely heavy emphasis on hiring employees who will fit into the company culture and actively contribute to its projects. The process is slow and deliberative, and employees are actively encouraged to comment and sit in on interviews to help determine if the new applicant meets their standards.

Valve never explicitly discusses firing procedures in the handbook, which is an interesting topic to skip. However, their in-house economist discussed the topic in an interview, and recent news about layoffs confirms that they do occasionally fire employees. Exploring these sources (and others) helps us build up a picture of why Valve is so careful when hiring new employees, and so evasive about how it fires underperforming ones – firing is extremely long and painful.

As a company with a flat structure, a critical consensus has to build before an individual is let go. This usually requires offering multiple opportunities for turning things around, including meetings and evaluations so that the employee in question can improve. Once the decision is made to terminate, there is no protocol or managerial authority to hide behind – the employee has been kicked out of the company by his peers, which even in the best case is likely to result in social tension. So Valve institutes a rigorous system of vetting potential employees to avoid the pain of firing them, their own model of employee management forcing them to ensure that every employee hired is also an effective manager of themselves and the company as a whole.


Despite these limitations, Valve is an enormously successful company with tremendous market dominance and a strong future. Their flat model clearly works, even if it requires an owner willing to allow employee management and a strong company culture. Some suggest that Valve can only function this way due to its extremely high profitability per employee, but this ignores that Valve’s flat structure was adopted long before it was successful. While exporting the entire model might be difficult, anyone interested in software management should look to Valve for an alternative model of management and organization.

Is Your Software Usable?


In practice, usability and user-centered design (UCD) techniques are underused in software engineering. This is because usability, or UCD, is not only related to computer science, but also draws on psychology and other subjects. What’s more, usability techniques were developed separately, outside the software engineering world. They have their own theories, concepts, models, and tools. Therefore, usability techniques may be difficult for software developers to understand and implement.

This article introduces the basic concepts of software usability and explains why it is important to software development. Then, we will explore a potential approach to integrating usability engineering into software development, as well as the methods of usability evaluation.

What is software usability?

A common preconception is that “usability only refers to the appearance of the software product.”

Usability, or UCD, is a term related to the concepts of multiple areas, such as computer science, psychology and human factors. Briefly, I think usability is a user-centered measurement based on the subjective perspectives and experience of users. The aim of usability is to reflect the quality of the interactions between software products and users.

One of the most popular explanations of usability is based on five dimensions [1][2]: learnability, efficiency, memorability, error rate, and satisfaction. More specifically, learnability measures the time required for a new user to become skilled in using a particular software product. Efficiency means how well the software helps a user improve his productivity. Memorability refers to how quickly a user can pick up his knowledge of the software product after a period without using it. The error rate means how many errors a user may make when performing his tasks. Finally, satisfaction represents how willing a user is to use the software product.

In my opinion, measuring software usability based on these five attributes may be inappropriate, because some criteria overlap with others. For example, memorability is highly related to learnability: if a software product is easy to understand and learn, it can probably also be memorized efficiently and picked up quickly after a period of not using it. Efficiency should take the error rate into consideration, because a high error rate can reduce efficiency significantly. Moreover, satisfaction is actually influenced by all the other criteria. It should be regarded as the essence of usability, since usability is a user-centered measurement. In my view, it is arguable that there may be only two key attributes of software usability: understandability and efficiency. Understandability means the software product can be understood and learned quickly by the user, without too much information to remember. Efficiency means required tasks can be performed with a few simple operations, which also reduces the errors made by users and increases their productivity.

Why is it important?

In my view, the center of the software industry is the user. High usability can lead to high popularity of a software product. Conversely, low usability can undermine the original value of the product, and users may prefer alternative software products with a good user interface. Let’s make up a simple but contrived example: suppose we designed a more powerful tablet OS than Android or iOS, but users had to type commands to perform all their tasks. That would be a huge disaster for users.

Usability engineering is not just a decorative process applied after the whole software product is already built. It should enter the software development life cycle early, because some of the end-users’ interests or preferred behaviors may influence the functions of the software, or even the architecture of the system. For example, for a particular piece of software, users may prefer being able to cancel any ongoing task under any conditions and roll back to the previous state. This kind of need may affect the system design.

Integrating usability engineering in software development

A good point for usability engineering to join the software development life cycle may be the use-case analysis phase, which is widely used in modern iterative software development [3]. In each iteration, a usability team will focus on task analysis. Task analysis explores the expectations of end-users, how users perform their tasks and what their responsibilities are, and seeks to understand the motivations and strategies behind users’ behaviors. The main methods of task analysis are site visits, interviews and surveys. Once usability analysis and use-case analysis have been done, the usability team and the software development team should negotiate with each other to investigate the factors that may influence the future design work of each team. Communication between the two teams is extremely important. Since the two teams speak different languages and use different tools, it is necessary to have at least one ‘bilingual’ in each team, who has basic knowledge of the technologies used by the other team, to make communication more effective and efficient. After discussion, the usability team will focus on interface design and prototyping, and then test and evaluate the usability of the prototype for the next iteration.

Usability evaluation

There are a great many methods that can be used in the evaluation of usability, such as usability testing, thinking aloud, and heuristic evaluation. Among these techniques, heuristic evaluation is the most popular. This method performs usability tests with a group of usability experts, who test the prototype against a set of principles to discover and identify usability problems. Due to its cheap and efficient nature, heuristic evaluation is a very good fit for iterative software development. However, usability is a subjective measurement, and the views of a small group of experts may bias the results, especially when it comes to the public. So, I think, besides heuristic evaluation, we can apply the thinking-aloud approach at the same time, early in the development life cycle. The less complete our product is, the more problems can be directly exposed by users. These negative attitudes can provide experts with more psychological information for future heuristic evaluation.


In this article, we introduced the concept of software usability: usability not only refers to the appearance of the user interface but also relates to other areas like psychology and human factors. It is also essential to recognize the importance of user-centered design. We also explored a possible approach to integrating usability engineering into software development, as well as the methods of usability evaluation. I think software development can benefit a lot from the integration of usability engineering, because the user is the center of the software industry, and powerful software needs an efficient and easily understandable interface to support and attract end-users in performing their tasks.


[1] Holzinger, A. (2005). Usability engineering methods for software developers. Communications of the ACM, 48(1), 71-74.

[2] Ferré, X., Juristo, N., Windl, H., & Constantine, L. (2001). Usability basics for software developers. IEEE Software, 18(1), 22-29.

[3] Seffah, A., & Metzker, E. (2004). The obstacles and myths of usability and software engineering. Communications of the ACM, 47(12), 71-76.

Risk Reduction Patterns in Real Life Scenarios

Over the course of the software development lifecycle, a project can be exposed to risk to different extents. The top risks might not allow for delivering the system of need; these could include the absence of clear requirements or a lack of qualified personnel. In order to minimise the effect of these dangers, it is essential that the risks be identified. However, as these concepts can be quite vague, the process of mitigating risks can be challenging. Cockburn’s risk catalog outlines potential problems and suggests solutions. In this article, I will describe a project I worked on as part of a team, and I will discuss how we applied risk reduction patterns in order to mitigate our project risk. Additionally, I will explain what other issues arose over the course of the software development. This illustrates how challenging the process of identifying and diminishing risks can be.

The project I will reference is the famous System Design Project that all informatics students at the University of Edinburgh get to do. Eight students are put together in a group and asked to design a Lego robot and to develop the software which enables the robot to play football.

A fundamental problem at the beginning of the project was the team’s lack of knowledge about the problem domain. Many did not have experience with larger-scale software systems, and nobody had any experience with robotics and software-hardware integration. Therefore, when the first task came up – the robot had to execute a kick command – the team was thrown into confusion. There was a substantial lack of knowledge, and we were unable to put together a sound plan beyond deciding who would build the robot and who would be responsible for the software for the task. Therefore, in order to proceed, the team had to gather knowledge. However, the process of “clearing the fog” continued for a substantial amount of time and was quite hectic; there was no working code within the next few days. Even though we immediately tried to tackle the problem, we did not manage to find the right balance and were still fighting the risk of non-delivery.

Further on, as we were still not completely confident about how to design and integrate the systems, we built a prototype – a simple robot which could receive commands and execute a movement of the motor. This clarified the direction of the project, and even though it was a simple solution, it helped us discover how the system works and integrates. It was the first deliverable result produced. Creating a prototype was therefore crucial for dealing with the risk of not possessing enough knowledge on the matter, and helped us kick off. We succeeded in applying this pattern because we managed to apply the accumulated knowledge and produce a solid base for the system.

One of the most serious issues our team experienced was dealing with ownership. In the early stage of development, two groups of people were working on the code for the upcoming first milestone, each implementing their own ideas without consulting the entire team. While there were significant conflicts in the team regarding who was responsible for these functionalities, important areas did not get sufficient attention. One of these was the vision system, and developing this module accurately and reliably was essential for the success of the project. Once it was established that a significant ownership problem existed in the team, we decided to manage the conflicts by drawing clear boundaries around people’s responsibilities. However, this overcorrected the problem. Every aspect of the project was assigned to a group of people, which was then micromanaged, i.e. members of these little groups were to assign details to each other. All of these decisions happened after extensive discussions about people’s skills, interests and preferences. Therefore, conflict management took a substantial amount of the team’s time and effort away from the software development itself, and did not assure 100% delivery. The approach switched from being a little hectic to being extremely structured, and led to overcorrection.

Ownership by component was later adopted. We managed to establish people’s responsibilities for particular functions clearly, and we did not have any friction between function and component owners. However, among those who worked on the strategy module, it remained unclear who was responsible for integrating the developed strategies together. Conflict resolution was not needed anymore, but the integration job remained a gray area until quite late in the development process.

At a much later stage, when the project was coming to its end, the final goal was close and clear – the robot had to enter a football competition. A week before the final, we did not have working code for the match, but the team was still distracted by numerous tasks. People focused on improving details of the vision system, getting rid of unused code, refactoring, and writing reports. All of these kept the team from making progress towards the primary goal – performing well in a game situation. However, the team leader kept on working on the game strategy. He managed to lay the foundations of the code used at the final, and thanks to his progress at that time the team managed to prepare adequately in time for the game. Ensuring someone was working towards the primary goal while other team members dealt with secondary tasks was crucial for fighting the risk of not delivering the system of need.

There is a very fine balance between dealing with risk appropriately and overcorrecting an existing problem. In order not to introduce additional aggravation to the development process, this balance has to be determined as early and accurately as possible. The scenario described shows that this is not a trivial task, especially for inexperienced teams.


Choose the Right Software Architecture if You Want Your Open Source Project to Succeed

While open source is by definition different from closed-source projects in terms of governance, I believe that the right architecture needs to be chosen if an open-source project is to receive uptake by users and long-term contributions from these same users. The architecture needs to be as modular as possible, and the developers must be able to take different approaches to solve the problems that each new feature aims to solve.

Why is software architecture so important to Open-Source collaborative projects?

While in proprietary software, the developers and users are mostly separate sets of people (except developers of development tools maybe), in Open-Source software the main driving force behind the project is that the users can themselves further develop functionality in the software and contribute it so that other users can benefit from the work. Since now users and the developers may be one and the same, the architecture that the software uses is of great interest to the user. This idea is also presented by Eric Raymond in his 6th rule [1]:

6. Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.

If the software being used is very modular in design, it should be easier for a user to add her/his required functionality to the program, without having to understand any of the code beyond how the new feature will interact with the rest of the software.
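As a rough illustration (the registry and the `reverse` plugin below are hypothetical, not drawn from any particular project), a modular design can reduce a contribution to writing one self-contained function against a small, documented interface:

```python
# Hypothetical plugin registry: the 'design rule' a contributor must
# learn is just this interface, not the whole code base.
PLUGINS = {}

def register(name):
    """Decorator that makes a function available to the core under `name`."""
    def wrap(fn):
        PLUGINS[name] = fn
        return fn
    return wrap

def run(name, data):
    """Core dispatch: the core never needs to know which plugins exist."""
    return PLUGINS[name](data)

# A new contributor adds a feature without touching anything above,
# and is free to implement it however she/he likes.
@register("reverse")
def reverse(data):
    return data[::-1]
```

The contributor's learning curve is limited to the `register`/`run` contract, which is exactly the property the argument above asks of an open-source architecture.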

Another important aspect, in my opinion, is how easily a feature can be implemented in a way that is independent of the rest of the software. That is, how free is the new contributor to choose whatever solution suits him to implement the feature, while keeping within the design rules set out initially?
For a new user of the software who finds a feature missing, would it be easier for him to add the functionality he requires, or would it be better for him to use another solution, or to develop the solution from scratch? The learning curve needs to be kept minimal for the user to actually contribute. It may also be that open source projects end up competing with each other when they have similar uses and audiences.
These are the same factors that Baldwin and Clark [2] use in their attempt to use game theory to show that architecture is important in limiting the free riding of open source software, and how architectural decisions help promote contribution by users. According to the model, having modularity and option value (the ability to implement a new feature in whatever way is easiest for the developer) is optimal in allowing the exchange of valuable work among the developers. And when the resources for contributing to open source software are already limited [3], we really want to minimize the hurdles that such developers encounter, to enable them to easily reuse or contribute new code.

Do developers choose this architecture beforehand, or does it come about because of the nature of collaborative software?

Although the architecture may be crucial, most open-source projects that are built around collaborative input from many voluntary developers do not usually start out with the final scale in mind. Moreover, since many open-source projects use agile development methodologies, having a clearly defined architecture from the start is not the norm. However, the argument here is that to facilitate contribution, the modularity of the code should be kept a high priority to keep the project going. This may come at the cost of increased workload, since continual refactoring takes effort, and it may also make it difficult to keep track of all the modules and how they interact. However, in their research [4], Capra et al. show that the good design quality present in open-source projects is not just a coincidence. They argue that design quality is both an enabler of the type of governance present in such projects, where everyone can easily collaborate and contribute, and an effect of it. Since the contributors have a sense of ownership of the code, design quality is given greater importance in the work, possibly at the cost of prolonging the time between releases.

Is this any different from what proprietary software companies try to do?

Within closed-source projects, the design is always kept internal, and the company knows it has a number of developers it can commit to the project, whether those developers like it or not. This means there is less risk of the project being abandoned because it is too hard to contribute to, and the architecture will not influence the number of developers contributing to it. However, such companies may take a leaf out of the open-source book: developers do have feelings and opinions, and making their lives easier can only improve the quality of the code they produce for the company. We also know that when a developer is assigned to a new project there is a learning curve, which can be greatly reduced by a modular architecture, where the developer does not have to learn how the whole code base works before they can start contributing. This cuts the cost of personnel moving between projects. It also makes the software more malleable, so that new features can be added more easily, and code reuse becomes much simpler.

This difference in architectural choices is shown in the work of MacCormack et al. [5], where they compare the source code of Linux with that of Mozilla when it was first released as open source. The lack of modularity in the Mozilla code is clear, and in fact the Mozilla project underwent a lot of initial work to be modularized so that it could function better in the open-source world.

Interestingly though, all this is arguably contradicted by the empirical study carried out by Paulson et al. [6], who claim that their results do not support the claim that open-source software is more modular than closed-source software. However, the metric they chose to verify this hypothesis was the correlation between the number of new functions introduced and the number of changes to existing functions. They found that the number of changes to older functions grew with the number of new functions added, implying that introducing new functions may require changes to other functions, and hence that the code is not as modular as claimed. I feel that this is a very simplistic metric, since it assumes that in a good modular system functions are only added and no refactoring is required. I believe modularity should be measured by the dependencies between modules, rather than by the changes being carried out. Another contributing factor to this result, which the authors point out themselves, is that the projects chosen were mature projects (namely Linux, GCC and Apache), so maintenance may have contributed to the high number of modifications to functions. Given all this, I still believe the argument for modularity in open-source projects holds, since we can clearly see the benefits of such a structure, even if, as Paulson et al. claim, they are not always realized in practice.

So What?

So, in conclusion, when we stop to think about what architecture the software project we are about to start should have, whether the project is open source or not should be a major factor to consider, assuming we want the project to survive if the lead developer (us) happens to take a break.


[1] E. Raymond, “The cathedral and the bazaar,” Knowledge, Technology & Policy, vol. 12, no. 3, pp. 23–49, 1999.
[2] C. Y. Baldwin and K. B. Clark, “The architecture of participation: Does code architecture mitigate free riding in the open source development model?” Management Science, vol. 52, no. 7, pp. 1116–1127, 2006.
[3] S. Haefliger, G. Von Krogh, and S. Spaeth, “Code reuse in open source software,” Management Science, vol. 54, no. 1, pp. 180–193, 2008.
[4] E. Capra, C. Francalanci, and F. Merlo, “An empirical study on the relationship between software design quality, development effort and governance in open source projects,” Software Engineering, IEEE Transactions on, vol. 34, no. 6, pp. 765–782, 2008.
[5] A. MacCormack, J. Rusnak, and C. Y. Baldwin, “Exploring the structure of complex software designs: An empirical study of open source and proprietary code,” Management Science, vol. 52, no. 7, pp. 1015–1030, 2006.
[6] J. W. Paulson, G. Succi, and A. Eberlein, “An empirical study of open-source and closed-source software products,” Software Engineering, IEEE Transactions on, vol. 30, no. 4, pp. 246–256, 2004.

On makefiles and build scripts

tl;dr Using makefiles for small projects adds significant overhead over simpler solutions such as build scripts. Ultimately, however, it is worth it because it encourages re-usable solutions and future-proofs the project.

The virtues of automated builds

Build systems (e.g. Ant, Maven, Make, Rake, Cabal, etc) are a great way to automate a lot of redundant tasks in the software development process:

  • Install dependencies
  • Compile and link
  • Run unit tests
  • Perform static analysis
  • Check code style
  • Deploy

And all of that with a single shell command: the invocation of your favorite build system.
Furthermore, having a “one-click” way of setting up a piece of software will aid new users and contributors: who hasn’t been discouraged from using a piece of software (or contributing to an open source project) because it required a convoluted process to get running/tested/code reviewed?

But I can do that with a shell script!

Many build systems offer additional advantages that are indispensable for large projects: for instance, many build systems will only re-compile parts of a project that have changed which might save hours of compile-time on large projects.

However, what about small projects? Compile times are instantaneous, development teams are small or just a single person, overall project complexity is lower. Are build systems still worth it over alternatives such as a shell script with similar functionality?

Let us first have a look at some of the disadvantages of using a build system for small projects.

Many developers will already be relatively familiar with shell scripting, for example because they use the command line and Unix utilities to analyse logs. Writing a shell script to compile/deploy/etc will likely not take much time. Build systems, on the other hand, would be yet another skill to learn, hone and maintain.
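For a small project, such a build script can stay very simple. The following sketch (task names and project layout are made up for illustration) dispatches on a single command line argument:

```shell
#!/bin/sh
# build.sh -- a minimal task-dispatch build script (all names hypothetical).
# Usage: ./build.sh [build|test|clean]; defaults to "build".
set -eu

do_build() {
    mkdir -p out
    echo "build: compiling sources into out/"
    # cc -o out/app src/*.c   # a real compile step would go here
}

do_test() {
    do_build
    echo "test: running unit tests"
}

do_clean() {
    rm -rf out
    echo "clean: removed out/"
}

case "${1:-build}" in
    build) do_build ;;
    test)  do_test ;;
    clean) do_clean ;;
    *)     echo "usage: $0 [build|test|clean]" >&2; exit 1 ;;
esac
```

This takes minutes to write with skills most developers already have, which is exactly the appeal.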

Build system recipes are often “write once and forget” pieces of code: once a build file works as required, there is often little reason to go back and tinker with it. Therefore, developers will likely have to re-learn the syntax and quirks of their build system whenever they touch it. (I sure do.) Cue context switching, mental overhead, getting distracted looking up features of the build system rather than getting relevant work done, etc.

Some build systems have a substantial learning curve and often work in non-intuitive ways. GNU Make is an especially striking offender in this regard: superficially, makefile recipes look like shell script syntax, but with some surprising gotchas (e.g. each line of a recipe runs in its own sub-shell, recipes must be indented with tabs, etc.).
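Both gotchas fit in a tiny makefile sketch (the target names are made up):

```make
# Two GNU Make gotchas in one file: recipes MUST be indented with a
# tab (not spaces), and each recipe line runs in its own sub-shell.

broken:
	cd /tmp
	pwd          # prints the directory make was started from, not /tmp

fixed:
	cd /tmp && pwd   # one line, one sub-shell: prints /tmp
```

Neither behaviour is what a shell-script author would expect from reading the recipe.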

Some tasks are more difficult and less intuitive in a build system than in a shell script (e.g. handling file names with whitespace, writing loops or conditionals in declarative makefile style, the different ways to set the Java classpath in Ant, etc.).

Basically, using a build system instead of a simple shell script in a small project incurs significant mental overhead (yet another infrequently used tool, context switching, etc.) with little directly apparent gain.

And yet you should use a makefile

Nevertheless, using a proper build system over build scripts is very likely the way to go, even for small software projects.

A build system gives the user lots of code for free (e.g. easily referenced command line options, shell tab completion, etc). Not having to re-implement functionality time after time is a good thing.

Using a build system often means that all sorts of functionality is consolidated in one location (e.g. build, deploy, run tests, etc.). Handling command line arguments is a bit of a pain in shell scripts, so it is likely that multiple shell scripts would be written instead of one build system instruction file. Cue more places to check, more code to maintain, possible code duplication, etc.

Similarly, build system scripts encourage the division of tasks. The natural way of structuring a build system script is to have small targets implementing atomic units of work (e.g. compile or run unit-tests) grouped together by larger recipes (e.g. one target grouping pre-deploy instructions, one target grouping post-deploy instructions, etc). This encourages good design and is likely to make the build system script re-usable, easier to test and maintainable. While the same effect is achievable with shell scripts, the design of the language does not encourage the same extent of division of labor.
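A makefile sketch of this structure might look as follows (all target and file names are illustrative, not a real project):

```make
# Small atomic targets grouped by larger umbrella targets.
.PHONY: all compile test lint deploy

all: compile test lint      # umbrella target grouping the atomic ones

compile:
	cc -o app src/main.c

test: compile
	./run_tests.sh

lint:
	./check_style.sh

deploy: all                 # adding a task later means adding one target
	scp app user@example.com:/srv/app
```

Each unit of work stays small and testable on its own, and the dependency lines document how the tasks relate.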

On a related note, the complexity of build system scripts grows linearly (just add a target and leave the rest untouched), whereas build scripts can quickly dissolve into an inextensible mess (“so I need to add this option here and that flag there, but now I need defaults for this function…”).

Additionally, many build systems operate in a declarative way (as opposed to the imperative nature of scripts). This makes the developer think about the “what” of their requirements, not the “how”. While initially more difficult to comprehend, this likely leads to more re-usable solutions.

In essence, using build systems over hand-rolled build scripts encourages good development practices, increases re-usability and is likely to lead to maintainable solutions.

Future-proofing for free

The conclusion of the last section ties in nicely with an additional huge advantage of using build systems over build scripts: we get future-proofing for free! A software project being small today does not mean that it has to remain small forever. Using build systems instead of build scripts prepares a project for future growth.

It is much easier to “productionize” a build system recipe than a build script (e.g. GNU Make gives us free parallelism when compiling, running tests, etc. with the -j option).

Build systems are an abstraction over the underlying operating system and shell. This makes build system recipes more portable meaning less work when the project has to be deployed to a different environment (e.g. there is a version of GNU Make for Windows).

Many build systems are industry standard ways of managing projects in certain languages (e.g. make for C, Ant for Java, Rake for Ruby, etc). This means that not having a build system recipe can make a project difficult to integrate with the rest of the software world (e.g. Qt for Android having a build script instead of a makefile was a show-stopper for integration with mainline Qt).

Using a build system is simply the right thing to do! Build systems are indispensable for large-scale software projects. While annoying on small projects, using them there is an excellent learning opportunity, gives lots of code and power for free, and prepares the project for when it becomes the next big thing.