The Law of the Jungle in Software Evolution

The law of the jungle is becoming more and more relevant to the field of software development. To stay competitive, projects constantly need to adapt to changing requirements, changing expectations and changing environments. Failure to do so means that the product will soon be forgotten and substituted with a competitor’s product. I will briefly discuss the stages a project goes through (as described by the Stage model) and comment on the dynamics of the jungle that the ICT sector has become.

The Software Jungle

Imagine that software projects are species of flowers. Naturally, each species aims to reproduce as much as possible in order to survive. Flowers reproduce by attracting bees (users) with distinctive colors (UI/responsiveness) and supplying them with nectar (content). The bees in return distribute the pollen and everyone is happy.

Let us journey through the hardships that flower X can face in this software jungle.
Initially, flower X occupies a field and lives happily in symbiosis with its users. But then the bees start getting fat from all the nectar, and the flower needs to grow bigger nectar holes. Out of nowhere, some monkeys come along and start eating the flowers, so flower X needs to start reproducing vegetatively as well. Sometimes, a hurricane carries some seeds to a far-off field and flower X has to adapt to the new soil (like an OS or a changed API). It also needs some mechanism for recovering from stampedes, like new restrictive government regulations or hackers. And on top of all that, another flower starts invading the field. Maybe it has brighter colors, maybe it has sweeter nectar, but for some reason the bees prefer the new flower and start avoiding flower X. If it does not adapt, flower X will suffer a great blow to its reproductive capabilities and will be well on its way to extinction.

The Cycle of Life: Stage Model

According to the stage model, software projects go through five stages: Initial development, Evolution, Servicing, Phase-out, and Close-down. This section provides a concise description of each stage, illustrated with examples.

  1. Initial development – engineers build the software from scratch to satisfy the initial requirements. This stage is important not only because it establishes the software architecture, which will be vital for implementing changes later, but also because during it the members of the software team gain expertise in the domain of the project.
    In practice, with the advent of Agile methodologies, the Initial Development stage gets shorter and shorter and the majority of the development is done in the following stages.

  2. Evolution stage – This is the stage when iterative changes, modifications, additions and deletions to functionality occur. Evolution is triggered by customer demands, competitive pressure and sometimes legislative changes (for example, the upcoming changes in Data Privacy).
    Note that the product need not be released when the evolution stage begins. The release date could be after several internal iterations addressing defects. Also, the system could be released in an alpha or a beta state before the final release.
    This stage is where we find the known-and-loved web giants like Google, Facebook, Spotify, etc. They are living their own Golden Ages – they have amassed a considerable user base and are constantly adding new functionality to stay competitive and address the demands of their users. However, history has shown that Golden Ages come to an end.

  3. Servicing – every project can only evolve for a limited amount of time. Evolution stops when the architecture of the project can no longer support new additions of functionality, or when staff changes leave the development team without the necessary expertise. Changes in this stage are hard and expensive and often push the project even deeper into the Servicing stage.
    Examples of projects in this stage are dying out products like MySpace or ICQ (personal opinion).

  4. Phase-out – No more changes are being made, but the service is still available.
    Examples here include older games like Starcraft, Warcraft III, The Sims, etc., file-sharing programs like Kazaa and the upcoming end of support for Windows XP.

  5. Close-down – The service is shut down and users may be directed towards a replacement. For example, when a new version of MS Office comes out, an older version may be abandoned.


Software projects need to stay in the evolution stage in order to grow and stay successful. Unfortunately, Mother Software is not like Mother Nature and does not have a built-in evolution mechanism. It is up to the software team to manage the evolution of the product through iterative refactoring, restructuring and the addition of new functionality. “Survival of the fittest” is an accurate description of the competitive software market. Inevitably, all products will topple under their own weight and will be replaced by a newer service more suited to the recent changes in the dynamics of the market.
The decline of a software project is something natural and it is something that needs to be planned for.

Our new Constitution is now established, and has an appearance that promises permanency; but in this world nothing can be said to be certain, except death and taxes.

—Benjamin Franklin, in a letter to Jean-Baptiste Leroy, 1789

In the early days of Google Books, the founders of Google were overseeing the scanning process at a university library that had signed up for the service. However, their collaborator had gone rather silent, and when they asked what was the matter (as he later recounted to the Observer), he replied:

I’m wondering what happens to all this stuff when Google no longer exists.

I’ve never seen two young people looking so stunned: the idea that Google might not exist one day had never crossed their minds.

—Unnamed librarian

Indeed, we need to address the question: what will happen when Google is no longer around? If there is no plan for the migration of data, will the decline of Google equate to the burning of the Library of Alexandria? And if Facebook closes down, will you be able to show all your selfies to your grandchildren? You should start making backups now!

My claim is that sooner or later projects will fall prey to their competitors, but that does not mean that their users should suffer and lose their data. Even now books bought from the Apple store are incompatible with Kindle devices. There should be some kind of standard that service providers should be made to adhere to. The main problem is that such a standard will impose great limitations on developers and the evolution process. So, I’m asking you: should we slow down to make sure we’re going the right way?


The software market is a jungle of inter-connected products, where most of them have rivals who will take every opportunity to steal their users. This rivalry leads to evolution and expansion of the service, until such an evolution is no longer available. On one hand, this evolution is great because it drives progress. On the other hand, as a community we need to ask the question if this unmanaged growth of the whole system will not lead to some potential great loss of data or knowledge in the future. Are the things we abandon total garbage?

Further reading

Software Lifetime and its Evolution Process over Generations
Even Google won’t be around forever
The Stage Model

The New Web: Reloaded

This is an expansion of the article “The New Web: The Tools That Will Change Software Development”. According to my understanding, that article has two main points:

  1. Everything about the web has evolved – the content, the way users interact with it, the expectations a user has when visiting a website
  2. Development for the web has evolved

Obviously, the two statements are connected – the emergence of new tools can lead to the creation of new types of content, and users demanding a new feature might bring about a whole new technology to tackle the problem. In other words, there is a chicken-and-egg problem: is it the content that drives progress or the progress that drives content? I agree that much has changed (and is still changing!) in both web technology and the demand for it, and I will elaborate on these changes by providing some clarifications to the author’s original article and adding some overlooked tools.

What is the Web anyway?

Big. Google’s definition of the Web is “A complex system of interconnected elements”, but this is a rather vague definition because the same can be said for a human being or even a planet. I claim that the Web is so big that there is no single definition that can capture its essence. In effect, the Web has become a subjective experience and it is the interactions that each individual has with the Web that define it for him.
These interactions are seemingly endless and new ones are added each day. To stay competitive in this environment, businesses need to ensure they can support users interacting with their systems across a multitude of devices, screen sizes, web browsers and operating systems and that they can meet the demand that users have for their services.
These requirements have led to the emergence of new technologies that enable elegant solutions.

Developing for the Web

As the author of the original article suggested, web development used to be seen as a task unworthy of the experienced programmer. Today, advances in technology have driven the evolution from static websites to web apps which serve dynamic content personally tailored to their users.
A few years ago I was assigned to work on a website which sold tickets for various events. I was rather reluctant to join in because I had hardly had any experience with web development. However, taking that first step was an invaluable experience which opened me up to the world of developing for the web. Being connected with the Internet opens up a lot of possibilities and I am now very unlikely to take on a project that does not feature some kind of connection with the outside world.

Handling Demand with the Cloud

One of the main problems, and also one of the main drivers, of web development is the exponential growth of the Internet. The Internet population is estimated to be 2.5 billion people, or 34% of the total human population.

Preparing to handle this ever-growing demand is something you should address right from the start of your project if you expect to have a large user base. If the demand is big enough, it is unlikely that a single machine will be able to satisfy it. This means that you will need some kind of distributed architecture. Unfortunately, building and owning such an architecture is really costly, and it is unlikely that any start-up or small company will take this route. Fortunately, renting such an architecture is quite affordable and straightforward.
This is where PaaS and IaaS providers like Amazon Web Services, Heroku and OpenShift come in. They have the following features which tackle the problem of increasing demand:

  • Elastic scalability – this is the web development equivalent of pay as you go. Services are automatically replicated to meet the current demand on the system. This ensures that the system is responding as fast as when the load is normal all the while minimizing the price for the business.
  • Load balancing – this ensures that the load is distributed equally across the nodes of the web app.
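The core idea behind load balancing can be sketched in a few lines of Python. This is only an illustration of round-robin distribution, one common balancing strategy; the node names are hypothetical:

```python
from itertools import cycle

# Hypothetical pool of identical application nodes behind the balancer.
nodes = ["app-node-1", "app-node-2", "app-node-3"]

# Round-robin balancing: each incoming request goes to the next node in
# turn, so the load is spread evenly across the pool.
next_node = cycle(nodes)

def route(request_id):
    """Pick the node that will handle this request (the id is ignored
    here; a real balancer might also consider node health or load)."""
    return next(next_node)

# Six requests land on the three nodes, two each, in rotation.
assignments = [route(i) for i in range(6)]
```

Real providers implement far more sophisticated strategies (health checks, least-connections routing), but the even-distribution goal is the same.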

Utilising these features goes a long way towards handling increased demand, but there are more steps that you can take to prepare for this issue.
The article that I am responding to has an amazing overview of all the JavaScript technologies available and I do not plan to repeat it. Suffice it to say that JS can decrease the load on the server by executing code client-side, and I encourage you to go and read more about it in the original post.

Handling data

Another consequence of the ever-growing user base is the amount of data that is generated by these users. Each click can be enriched with location, age, gender, job and time of day and this information needs to be stored somewhere. Storage nowadays is cheap, but processing this amount of information is no easy task.
The costly joins and lookups in traditional relational (SQL) databases are likely to get slower and slower as the data accumulates. A solution to this problem is to use a NoSQL database that sacrifices consistency in order to achieve faster queries and to allow storage to be distributed across multiple machines (i.e. the database can scale horizontally).
The choice of a NoSQL database is problem-specific because there are multiple solutions and each one has its inherent pros and cons. The most popular NoSQL solutions are the document store MongoDB and the wide column store Cassandra, but another notable mention is the graph database Neo4j.
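To illustrate the trade-off, here is a sketch in plain Python (no real database involved, and the data is made up) of how a document store avoids the join a relational schema would need:

```python
# Relational style: users and orders live in separate tables and are
# joined at query time.
users = {1: {"name": "Alice"}}
orders = [{"user_id": 1, "item": "book"},
          {"user_id": 1, "item": "pen"}]

def orders_for(user_id):
    # The "join": scan the orders table for rows matching the user.
    return [o["item"] for o in orders if o["user_id"] == user_id]

# Document style (as in MongoDB): the orders are embedded inside the
# user document, so reading a single document answers the same query
# with no join at all.
user_doc = {"_id": 1, "name": "Alice", "orders": ["book", "pen"]}

assert orders_for(1) == user_doc["orders"]
```

The embedded form duplicates data and can drift out of sync (the consistency sacrifice), but each document is self-contained, which is what makes sharding across machines straightforward.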
Now what can one do with all this data? Remember that the Internet is now an experience. Thus, businesses should be aiming to make a tailored personal experience for their users. And to achieve this feat, they need to analyse the data they’ve gathered. To be useful, this data needs to be analysed frequently and in a timely fashion. However, the volume of the data is unlikely to allow for such an analysis to be done on a single machine. Fortunately, there are solutions to this problem as well and they all boil down to a technique known as MapReduce . This technique allows data processing on a parallel distributed architecture, thereby decreasing the runtime of the processing algorithms. Note that renting 100 nodes for one hour costs the same as renting 1 node for 100 hours. Thus, there is no extra cost for this parallelism.
There are many libraries that have some implementation of MapReduce, but it is most commonly associated with Hadoop.
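The canonical MapReduce example is a word count. The sketch below runs both phases on a single machine, but note that every map call and every per-word reduction is independent, which is exactly what lets a framework like Hadoop spread the work across many nodes:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle: group the pairs by key; Reduce: sum the counts per word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the web is big", "the web evolves"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
result = reduce_phase(pairs)
# result == {"the": 2, "web": 2, "is": 1, "big": 1, "evolves": 1}
```

In a real cluster each node would run map_phase over its own share of the documents and the framework would route all pairs with the same word to the same reducer.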


The Internet is ever-growing and ever-changing. In order to be successful in this environment, businesses need to adapt and utilize the latest technologies which allow them to cope with the scale associated with their business model.

BDD: TDD done right

Test Driven Development (TDD) is a widespread software development process that has proven successful in practice and is being adopted by more and more companies around the world. However, there are some dangers in applying TDD blindly. This blog post explores some common misconceptions about TDD and shows how they can be avoided using Behaviour Driven Development (BDD).

What is TDD?

Test Driven Development can be summarized as a procedure which follows these simple steps:

  • Write an automated test case that fails for the new feature you want to implement
  • Run all tests and see that the new one fails
  • Write the code for the new feature
  • Run the tests, ensuring the new feature works correctly and that no other features were broken
  • Refactor – meet all coding standards and conventions that your team adopts and ensure the new functionality is in the right logical place in the project. Run the tests again to ensure nothing was broken during refactoring.
  • Repeat process with a new feature

This very short development cycle allows the developer to concentrate on just one specific problem and the completion of the cycle signifies that the new feature works correctly. Furthermore, the automated testing facility ensures that if changes impact this particular feature in an incorrect manner, then the test suite will immediately notify the developer of the problem. One more advantage of TDD is that developers are asked to think in more detail about the eventual use of the feature, which lets them get a clearer picture of how it is supposed to act and where it might fail.
TDD has been reported to decrease the number of defects and to limit code complexity. This is due to the fact that features are implemented just in time, which decreases the chances of overbuilding the system by implementing unnecessary features.

Sounds good. So what’s wrong?

Nothing! At least not in the technical sense. However, there is much which can be done to improve the human element of the system.
One of the main problems of the methodology is the use of the term ‘test’. This automatically puts the developer in a validation mindset – and this is not the purpose of TDD. Instead, Test Driven Development should be a process that guides the design, not one that overcomplicates things. This is where Behaviour Driven Development comes in.
In an attempt to escape the validation mindset, BDD redefines the vocabulary to make developers view the process as a tool for specification. Now, instead of ‘tests’, BDD teams concentrate on behaviour, and instead of ‘assertions’, they write method names like ShouldEqualFive() or ShouldRaiseNullException(), which are easier to understand.
This simplification of the language makes the tests (it’s hard to get away from the word completely) not only a specification for the project, but also a sort of documentation and a validation tool that anyone (not only developers) can understand. This ease of use creates a ‘ubiquitous’ language for everyone in the development process – developers, stakeholders, managers and domain experts with specialist knowledge. The ease of communication that follows is very beneficial, as it helps ensure that the team is building the right software – since the stakeholders understand the specification, they can simply tell the developers when it is wrong. And what is even better is that it is often possible to get a sense of what the next most important part of the project is, thus reducing the time spent working on features that are less important and likely to change.
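In code, the shift is mostly one of vocabulary. A hypothetical example of the behaviour-oriented naming BDD encourages (the Account class is a toy invented for this sketch):

```python
import unittest

class Account:
    """Toy domain object, invented only for this illustration."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

# The method names read as behaviour specifications ("an account should
# start empty", "it should reject a negative deposit"), not as checks.
class AccountBehaviour(unittest.TestCase):
    def test_should_start_with_zero_balance(self):
        self.assertEqual(Account().balance, 0)

    def test_should_reject_a_negative_deposit(self):
        with self.assertRaises(ValueError):
            Account().deposit(-5)
```

Read aloud, the class is a short specification of an account that a non-developer could follow.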


To describe the different behaviours, BDD uses a story template and a scenario template. The story template has the following syntax:
As a [X]
I want [Y]
so that [Z]
where X is the role requesting feature Y, which brings benefit Z to X. Each story also has a title, so that it can be referred to by name.
Now these stories are associated with scenarios which go as follows:
Given some initial context (the givens),
When an event occurs,
then ensure some outcomes.
And I don’t even need to disambiguate what this syntax means. This simplicity allows everyone associated with the project to join in the design and specification phase and the scenarios that end up being written later become the tests that the developers use to create the product.
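A scenario in this syntax maps almost mechanically onto a test. A made-up example in plain Python (a real BDD tool like a Cucumber-style framework would parse the scenario text itself, which is not shown here):

```python
# Scenario: a returned item goes back into stock
#   Given a stock of 2 jumpers,
#   When a customer returns one,
#   Then the stock should be 3.

def restock(stock, returned):
    # The event: a return increases the stock count.
    return stock + returned

stock = 2                   # Given some initial context (the givens)
stock = restock(stock, 1)   # When an event occurs
assert stock == 3           # Then ensure some outcomes
```

The comment block is the artefact stakeholders read and discuss; the three lines below it are the developer's translation of it into an executable check.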

BDD viewed as an Expert System?

I am totally new to Behaviour Driven Development (hence the question mark in the heading), but all of these contexts, events and outcomes bring me back to an AI course I took some time ago. More specifically, we (it was a 2-person project, so I’ll stick with ‘we’) were asked to implement a fitness instructor expert system. An expert system is a computer system that emulates the decision-making ability of a human expert. Now, we knew close to nothing about the world of fitness instruction, and yet we were tasked with creating a product that, given some input, would come up with an appropriate exercise schedule. The key to the project was to go around real-life fitness experts and ask them about their viewpoints and, based on their answers, to create a set of logical rules that describe their knowledge. For example, if a person wanted to lose weight, then he needed to do more cardio, but if he had back problems then he was advised to avoid certain exercises. This set of rules highly resembles the scenarios described above.
Now, we knew what we needed to build, but we were lacking the building blocks. The next part of the project was to create an ontology. According to Wikipedia, an ontology formally represents knowledge as a set of concepts within a domain, using a shared vocabulary to denote the types, properties and interrelationships of those concepts. Sound familiar? Let me give you an extract from the Wikipedia article on OOP: An object contains encapsulated data and procedures grouped together to represent an entity. The ‘object interface’, how the object can be interacted with, is also defined. An object-oriented program is described by the interaction of these objects. As you can see, these definitions are really similar. The last part required us to just write up the rules in a formal system, which would interpret them and give us the output.
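A toy version of such a rule set (the rules are invented for illustration, not real fitness advice) shows how close the form is to the given/when/then scenarios above: each rule pairs a condition on the person with a recommendation.

```python
# Each rule: given a condition on the person, recommend some advice.
rules = [
    (lambda p: p["goal"] == "lose weight", "do more cardio"),
    (lambda p: p["back_problems"], "avoid certain exercises"),
    (lambda p: p["goal"] == "gain muscle", "add weight training"),
]

def recommend(person):
    """Fire every rule whose condition holds, collecting the advice."""
    return [advice for condition, advice in rules if condition(person)]

person = {"goal": "lose weight", "back_problems": True}
advice = recommend(person)
# advice == ["do more cardio", "avoid certain exercises"]
```

A real expert system shell adds chaining (rules whose conclusions trigger other rules) and explanations, but the knowledge still lives in condition/conclusion pairs like these.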
This is why I believe that the development of an expert system can be compared to BDD. But an expert system is crafted in such a way as to act as if it is a real human expert in the field. And since BDD mimics the mindset of the stakeholders, then the final product will be an expert in the field of the requirements of the stakeholders. Thus, the project is more likely to behave according to their desires.


Behaviour Driven Development is, at its core, the same as Test Driven Development. However, it differs in the way that people think about the process. BDD lets people abstract to a more general level and allows input from all involved parties. This makes it an outside-in specification and ensures that the final product meets most of the requirements of the stakeholders, thus increasing the quality of the project.

References and Further Reading

Introducing BDD – the original post by Dan North
Comparative Study of Test Driven Development with Traditional Techniques
Interview with Dan North on Behavior-Driven Development
From Test-Driven Development to Behavior-Driven Development