My recent experience with a horribly overdue project, one that exhibited probably every software planning fallacy possible, gave me an interesting insight into getting a substantial part of it back on track. Some statements in this blog post will be specific to building web applications; nevertheless, I would like to present them, since the way they relate to the grand scheme of designing and maintaining large-scale software is particularly interesting, especially when it comes to designing components.
Vast numbers of software projects still fail (as in going over time or budget), even though there are various software methodologies that aim to address the intangibility of software projects and increase the chance of success. Furthermore, even when applying the currently most hip methodology or the latest anything-driven development, we cannot tell whether it actually improves our chances of delivering the project on time and within budget, as there is no single silver bullet.
Just Hack (and Slash)!
The decision which allowed the mentioned project to get back on track was (surprisingly) to remove people from the project. The Mythical Man-Month stated that increasing the number of people will increase the time and cost associated with a project, but claiming that the opposite also holds would surely seem preposterous.
Yet, after slashing (figuratively!) around half of the team, the project was re-evaluated. While keeping the same functional requirements, the code base was reduced. General-purpose, heavyweight components were replaced by smaller ones, allowing the behaviour of the application to be controlled not by tweaking and maintaining big components, but by connecting small components in a controlled way. Even though from the outside there is little difference, the effect on the source code was astonishing: the code base was reduced to 20% of its original size.
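To make the idea concrete, here is a minimal sketch (all names are illustrative, not from the actual project) of the difference between configuring one big component and connecting small ones. Instead of a heavyweight component whose behaviour is steered by a growing pile of options, each behaviour lives in its own tiny function, and the application wires together only the pieces it needs:

```typescript
// A small component is just a function from input to output.
type Step = (input: string) => string;

// Small, single-purpose components (hypothetical examples):
const trim: Step = (s) => s.trim();
const collapseSpaces: Step = (s) => s.replace(/\s+/g, " ");
const lowercase: Step = (s) => s.toLowerCase();

// Behaviour is controlled by choosing which pieces to connect,
// not by flags on one big configurable component.
const compose = (...steps: Step[]): Step =>
  (input) => steps.reduce((value, step) => step(value), input);

// The application assembles exactly the pipeline it needs:
const normalize = compose(trim, collapseSpaces, lowercase);

console.log(normalize("  Hello   WORLD  ")); // "hello world"
```

The pay-off is the one described above: each small component can be tested in isolation with a handful of cases, and a new variant of the behaviour is a new composition, not a new option on an ever-growing component.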
Large, because it was… big?
Finally, the team started to get things done faster, even though it consisted of fewer people. The smaller code base meant that less maintenance was required, but it also meant that there were fewer things to test, which brought an additional increase in productivity: implementing new, fully tested features required writing even less code and allowed the team to focus more on productive programming (less code -> fewer tests -> less new code). In hindsight, the original project was suffering under its own size, which was caused by the use of large components, which in turn were required to allow a larger number of people to work on the project.
It is repeatedly stated that a high percentage of projects fail to be delivered on time and within budget, even though we, developers, have gained some insight into planning and executing such projects. We realise that certain decisions, like throwing people and money at a project in its late stages, are clearly pointless. However, maybe those relations hold in the opposite direction as well. One way to ensure that a large project will not fail would be to limit the size of the project in the first place; agile methodologies seem to address the problem in a similar way.
Finally, the idea of preventing a large project from failing by reducing its size seems fairly straightforward and not exactly innovative, so why did the result of doing so feel so surprising?