Peeling Away at the Software Maintenance Process

When you go to the shops and purchase a potato peeler, you receive the exact potato peeler that you picked up off the shelf. Five years down the line – if you have taken good care of your potato peeler – it will still work as well as it did on the day you purchased it. More importantly, however: it will still take the same form and have the same functionality as it did when you bought it (ignoring a few minor scratches, of course).

If one day when peeling potatoes you wished that your peeler could also be used to slice cheese into thin slices, you couldn’t simply take your peeler back to the shop in which you purchased it and ask the shopkeeper to exchange your old peeler for a newer model with all of the latest features. You could, however, buy the fancy new peeler outright if you so desired. It would be infeasible for the shopkeeper to keep on exchanging old items for new items in order to meet consumer demand, both in terms of practicality and – primarily – from a costing point of view.

The same line of thought can also be applied to large-scale software development projects. A survey conducted in 2010 – with contributions from over 2000 IT businesses both big and small – showed that the average software development company dedicated 55% of its annual budget to performing maintenance tasks on pre-existing software solutions [1]. A second survey suggested that the figure was more likely to be nearer two-thirds of a company’s budget – a massive figure [2]. While it appears that nobody knows for sure just how much the industry is spending on software maintenance, it is clear that the figure is steadily increasing as the years go by. It is my opinion that the budgets set aside for the maintenance of existing software are too high, and could be significantly reduced if software development companies were less willing to bend to meet consumer demand.

It is well understood that there are forms of software maintenance which are unavoidable and – indeed – necessary: for example, bug fixes and the alteration of software to adhere to new or changing industry standards (e.g. a new memory management model being adopted as standard on new systems) [3]. However, there are aspects of what is currently understood as ‘maintenance’ which I feel could be excluded from the process. The addition of new features to software is a common maintenance task. Whether it is prompted by requests and pressure from consumers to add to the existing software solution, or by the development team actively choosing to augment the current solution, I feel that this sort of activity should not fall under the heading of ‘maintenance’, but under another heading altogether. There has to come a point in the maintenance phase of the software’s lifecycle where the developer decides “enough is enough: the project has evolved into something too far removed from the original concept.” When this point comes, the time and effort being invested by developers can no longer be considered ‘maintenance work’, but another project entirely.

Returning to the example of the potato peeler outlined above: if you wanted to extend the functionality of your peeler, you wouldn’t be able to do it yourself (the average person couldn’t, anyway). The logical – and indeed, only – choice you would have would be to buy a new peeler which meets your new functionality requirements. The same concept should be applied to software. A development team will provide a company with a finished software solution meeting the outlined requirements and standards, and should from then on only perform routine maintenance to prune out bugs and glitches. The addition of new features not outlined in the original requirements specification should be considered as ‘extensions’, and should constitute both a separate project and a separate product. An example of this today would be downloadable content for video games: content which is not intended to be released as part of the initial solution, but can be purchased for an additional fee at a later date to extend the functionality of the original game. Using a model such as paid software extensions could allow developers to shrink the size of their maintenance budgets, freeing up time and money to be reinvested in other projects the team is working on. Meanwhile, when creating additional content for software, the costs of such an extension could be recouped by charging the client a fee for the purchase of said content. Thus, the maintenance budget is reduced, and more revenue can be generated through the development of an ‘extension project’ for the client.

I can completely understand that, over time, the needs of a user can change. However, I feel that software developers should not be shackled to projects completed in the distant past. The solution delivered initially by the development team met the user’s needs at that given time: this should be the full extent of offering and implementing functionality to the project from the developer’s point of view. If the user needs the original solution to be moulded into another form, then they should have to fund this process – after all, you don’t see merchants of potato peelers handing out new models with sharper blades to existing customers when a new variety of tough-skinned potato is brought to market. If manufacturers of physical products don’t have to provide long-term maintenance, why should the developers of an intangible software solution have to?

The process of maintaining a large-scale software solution can be a very costly process. Over time, an initial solution can mutate into a completely unrecognisable form – nothing like its previous self. In my eyes, this should not be considered ‘maintenance’, but the development of a separate project. Maintenance budgets could be significantly reduced, and more revenue could be generated in the creation of functionality upgrades. The needs of a client can change over time, but it should not necessarily be the case that the development team should need to invest both their time and money in fulfilling such changes.

After all, if you aren’t happy with your potato peeler, buy a new one.

References:

1:
Technology budgets 2010: Maintenance gobbles up software spending; SMBs shun cloud. 2014. ZDNet. [ONLINE] Available at: http://www.zdnet.com/blog/btl/technology-budgets-2010-maintenance-gobbles-up-software-spending-smbs-shun-cloud/30873. [Accessed 14 March 2014].

2:
IT budgets to rise in 2013 despite downturn. 2014. ComputerWeekly. [ONLINE] Available at: http://www.computerweekly.com/news/2240174469/IT-budgets-to-rise-in-2013-despite-downturn. [Accessed 14 March 2014].

3:
Allan Clark. Software Architecture, Process, and Management (Slide 678). 2014. [ONLINE] Available at: http://www.inf.ed.ac.uk/teaching/courses/sapm/2013-2014/sapm-all.html#/678. [Accessed 14 March 2014].

Response Article to: “On scripting languages and rockstar programmers”

The article entitled “On scripting languages and rockstar programmers” [1] discusses the merits and applicability of the use of scripting languages in large-scale programming projects. For the most part, I am in agreement with the author – for example, I agree that scripting languages tend to have a “shorter learning curve” in comparison to lower-level languages. I also believe that there are paradigms within scripting languages which could be tailored to fit the needs of large software projects by devoted developers. However, there are aspects of scripting languages which concern me when it comes to attempting to apply them to large-scale projects.

Firstly, in their overview of scripting languages, the author states that the majority of these languages are “dynamically typed.” Whilst I can see the advantages that this can afford developers – such as quicker implementation and more efficient code reuse – I feel that there are also a number of downsides that, depending on the project, could cause problems.

Due to there being no limitations on which “types” a function can accept at compile time in scripting languages, programs can be susceptible to attacks by hackers – for example, a piece of carefully-crafted malicious code which would have been detected at compile time slipping past the interpreter at run time [2]. This can make the use of scripting languages infeasible for security-critical systems such as banking software or systems which control access to confidential data.
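To illustrate the point about run-time type checking, here is a minimal Python sketch (the function and values are hypothetical, purely for illustration): a call that a statically typed, compiled language would reject at compile time is accepted by the interpreter, and the mistake only surfaces when the line actually executes.

```python
def apply_discount(price, rate):
    # No declared parameter types: nothing stops a caller passing the
    # wrong kind of value, and any error appears only at run time.
    return price - price * rate

# Works as intended when given numbers:
total = apply_discount(100.0, 0.2)  # 80.0

# A statically typed language would reject this call before the program
# ever ran; Python accepts it, and the failure happens mid-execution:
try:
    apply_discount("100", 0.2)
except TypeError as exc:
    print(f"Caught at run time, not compile time: {exc}")
```

If the badly-typed call sits on a rarely exercised code path – or the operation happens to succeed with an unintended meaning rather than raising an error – the defect can survive into production, which is the crux of the security concern above.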

I also believe that the use of scripting languages incorporating dynamic typing breeds laziness in developers to a certain degree. I of course can see that it can be more straightforward and faster to develop software using this form of typing, but I feel that it can inadvertently lead to software developers becoming careless and complacent over time, and – if working exclusively with scripting languages for a period of time – make mistakes more likely when using languages that require strict types for functions. While this may seem a very minor issue, it could still lead to hiccups in the development process.

The author of the article, when questioning the possibility of using scripting languages in large-scale projects, comes to the arguably-correct conclusion of “it depends.” They have acknowledged that for time- and safety-critical applications such as nuclear reactor control systems, scripting languages can be too slow – mainly because these languages tend to be interpreted at runtime as opposed to being compiled beforehand. However, the author has left the door open for the discussion of less-than-critical scientific programs and their suitability for scripting languages. One improvement which I feel could make scripting languages applicable to a wider range of projects is allowing manual control of garbage collection. A common occurrence in scripting languages is a slowdown in performance whenever garbage collection is taking place: although this slowdown is usually limited to a number of milliseconds, it can still have a negative impact upon some applications. By investing time in creating a more efficient, manually-controllable garbage collector within a scripting language, the use of these languages can be extended to more scientific purposes where code is required to run optimally [3].
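As a concrete sketch of what manual control can look like, Python’s standard `gc` module already exposes it: the automatic cycle collector can be disabled around a latency-sensitive section and a collection triggered explicitly later, at a moment when a pause is acceptable. (Note that in CPython, reference counting still reclaims most objects immediately; `gc` handles only reference cycles. The workload below is hypothetical.)

```python
import gc
import time

# Disable the automatic cyclic garbage collector around a
# latency-sensitive section, so no collection pause can occur mid-work.
gc.disable()
try:
    # Hypothetical latency-sensitive workload: building many small objects.
    data = [{"i": i} for i in range(100_000)]
finally:
    gc.enable()

# Later, when a pause is acceptable, run a full collection explicitly
# and measure how long it takes.
start = time.perf_counter()
unreachable = gc.collect()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"collected {unreachable} unreachable objects in {elapsed_ms:.2f} ms")
```

Scheduling collections at known-safe points like this is one way a scripting language can be made predictable enough for the scientific use cases the author leaves open.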

The author acknowledges the fact that, in general, low-level languages are more difficult to work with, and present a steeper learning curve for new users. Although they do take time to master, they can give the developer far more power and freedom to exert control over their projects than any scripting language. My point here is that, although the author acknowledges that these are difficult languages to use, they should not be discounted for this reason alone. The advantages they offer could well outweigh the cost of taking the time to learn to use them to their full potential. As with any language, once you have overcome the initial learning stage, the hard work is done and you simply need to practise occasionally to keep your knowledge intact: learning the language initially may prove time-consuming, but the knowledge gained can then be re-used on later projects. So, whilst there may be a quicker alternative to low-level languages in the short term, greater benefits may lie ahead in future projects.

In the original article, the author made a number of good points in support of the use of scripting languages in large-scale software development, and outlined some of the benefits that these languages can afford over traditional, low-level languages. To a large extent, I am in agreement with the author: scripting languages are more easily accessible to new developers, and can be used to develop solutions for non-critical software quickly. There are also methods by which these languages can be tailored to meet the needs of a client. However, there are some issues which are often overlooked – such as the security implications of using such languages – that could prove damaging for software, especially in confidential environments. Ultimately, in deciding whether or not the use of scripting languages within a large-scale software development project is suitable, it all comes back to the author’s original conclusion: “it depends.”

References:

[1] On scripting languages and rockstar programmers, s1038803. [ONLINE] Available at: https://blog.inf.ed.ac.uk/sapm/2014/02/14/on-scripting-languages-and-rockstar-programmers/

[2] Dynamically typed languages, Laurence Tratt, Advances in Computers, vol. 77, pages 149-184, July 2009

[3] Herbert Schildt, 2004. The Art of C++, page 9. 1st Edition. McGraw-Hill Osborne Media.


Agile Methodologies in Large-Scale Projects: A Recipe for Disaster

The four “Agile Values” written within the Agile Manifesto are as follows:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

The values above are to be interpreted in such a way as to value the notions on the left-hand side of the “over” more than those on the right-hand side. I propose, however, that in large-scale software development, this is infeasible, and that the right-hand side of each of these values (which is more akin to traditional development models) is far more appropriate in such projects.

Working Software over Comprehensive Documentation

Lack of formal documentation or planning could appear unprofessional to clients, who may deem the project to be too risky – particularly in large-scale projects, where clients (who may even be government bodies) are likely to be spending upwards of millions of pounds. There therefore needs to be a formal hierarchy of responsibility in the event that the project fails.

In a large development team over the duration of the project, informal means of documentation can be swallowed-up and forgotten about. It can also make the future maintenance of the project far more difficult – especially if new developers have joined your team since the project’s creation.

If, for example, a multi-billion pound software project for use in schools signed off by the government were to catastrophically fail and be considered a write-off, members of the public and political backbenchers would conduct a scathing witch-hunt into why their hard-earned tax money had been squandered. Either way, the government is going to be subject to ridicule. The damage can be limited, however, if formal documentation of detailed plans can be produced to show that deep investigation into the project’s scope and requirements had been carried out. If, however, the government were to metaphorically shrug its shoulders and say, “We don’t have any proper formal documents to show you: however, we did meet up with a few of the developers for a brainstorming session using a whiteboard and napkins,” heads would certainly roll.

Of course, I am slightly exaggerating the realities here, but I feel that the outcome would almost certainly be the same.

Individuals and Interactions over Processes and Tools

There is a reason that set processes and tools are used in large-scale software development: they work. They provide an established set of rules and standards for developers to abide by, ensuring that the software development process is as efficient and consistent as possible. In allowing the individual personalities of developers to encroach upon the project, standards can begin to slip. Sometimes, there is no need for innovation or creativity in solving a particular problem. Take the notion of code reuse as an example: it can sometimes be easier and far more time-efficient to simply re-use an existing block of code from a repository, whereas crafting a new solution – which may indeed be more elegant or objectively better than the current one – may only provide marginal benefits to the project in the long run.

At such a large scale, I feel that innovation can only lead to chaos and serve to hamper the project – if everyone were to decide to innovate and deviate from the standards set out by management, any sense of consistency within the project would surely be lost. It may sound like I am advocating the stifling of originality and personality within large-scale projects: to some extent, I am, but when the stakes are so high (in terms of the money involved in large-scale projects), it seems more responsible to use tried-and-tested methods of development as opposed to new and original practices and ideas that may contain hidden flaws.

Customer Collaboration over Contract Negotiation

The use of an iron-clad contract in a large-scale software development project is crucial. Don’t get me wrong: I can of course see the benefits of customer interaction and of ensuring that you are delivering a product that is exactly what the client is looking for. However, the more you collaborate with a customer during the development process, the greater the opportunity you leave for the customer to take advantage of you. The more you allow the client to alter the original plans, the longer the project will take and the higher costs will rise. Of course, it would be the customer who would need to meet these costs, not the development company. I feel, however, that with a detailed plan from the beginning that clearly defines the scope of the project, the requirements it must capture and other details such as the deadline, the project will be far more likely to be completed on time and within budget.

Understand that I’m not saying that no interaction between the client and developers should take place: I am merely saying that the majority of this communication should take place before the implementation begins. In a large-scale project where the goal is constantly changing and looks far from the initial concept envisaged, there comes a point when a developer needs to take a stand and say, “You are asking too much of us – you asked us to do *this*, and now you want us to do *that* – the complete opposite?”

In order to alleviate this situation, more time should be dedicated initially to planning each intricate detail of the project, before the actual development work begins. Take as much time as necessary, ensuring both you and the client are happy and in agreement. A formal contract can then be compiled.

The point that I am trying to get across here is that the developers of large-scale projects are all too often ready to bend over backwards to meet the desires of clients whenever a new idea enters their minds. The compilation of a formal contract would protect the developers. It may be acceptable – and indeed, manageable – in a smaller project to allow for changes to be requested throughout the development cycle. In a large-scale project, however, even a small number of changes could create a huge amount of extra work – creating a “ripple effect” impacting upon various different areas of the project.

Responding to Change over Following a Plan

In smaller-scale projects, it may be feasible to react to changes at the request of the client (for example, a change in requirements). With quick and efficient communication, changes can propagate quickly. Within a large-scale software development project, however, with many different levels for information to pass through, communication can become bogged down, leading to delays in development. In following a steadfast plan, communication channels and practices throughout the development team can be established early on. This helps to ensure a consistent flow of information around the team: with no dramatic changes requested, no specific area of the development team will come under intense pressure, and no “ripple effect” takes place. Following a plan also helps to keep the project on track in terms of time and budget: making changes is expensive.

Having a plan to stick to allows the project’s progress to be measured accurately – each stage in the development cycle can almost be “ticked off” after completion. When change occurs, it can sometimes be hard to calculate if the project is any further forward (or indeed behind) in terms of its development. Some changes can also spawn further changes, creating even more work not envisaged when the team initially decided that changes were indeed necessary.

There are some changes that cannot be helped, however. Changes to industry standards or APIs used in development may mean that fundamental alterations are necessary. In times like these, there is no option but to amend the initial plan, and I find this completely acceptable. However, I find changes to a requirements specification, or a request to modify the behaviour of a range of already-implemented features midway through a project’s development, far less acceptable. Large-scale software development projects are a serious undertaking – requiring huge investments of time and money – and should be treated as such by clients.

Concluding Remarks

In large-scale software development projects where clients have invested large sums of money and taken huge risks, I feel that it is only responsible to consider a more traditional approach to development. Trying to cut corners by choosing not to write formal documentation could be disastrous for the future of the project, and allowing clients to become too involved in the development could allow the project to descend into chaos. A tried-and-tested traditional methodology – making use of set processes, tools, formal documentation and binding contracts – helps to ensure the success of the project, both in terms of deadlines and expenditure.

Large-scale development projects are serious business: agile development has no place here.

References

Manifesto for Agile Software Development. [ONLINE] Available at: http://agilemanifesto.org/