In his novel 1984, George Orwell wrote about the dangers of absolute political authority in an age of advanced technology. In a somewhat less politically charged piece, I aim to look at how advances in computing technology might influence the future of software development.

I will argue that a key trend in software development is that code is shifting from being efficient to being understandable and that this has been possible because of increases in hardware resources. I extrapolate this to the future and argue that increasing hardware resources will lead to further abstractions and that we could see use of programming languages as we know them today decline, with more visual methods of programming becoming prevalent.

Moore’s Law

Moore’s Law tells us that the number of transistors on integrated circuits doubles approximately every two years.

Or it used to.

For the purposes of this blog I’m going to conveniently ignore the fact that we are reaching the limits of what we can do with silicon. I’m going to ignore that Moore’s Law is slowing down and that some people think its end is nigh. Arguably it’s been dead for a while anyway, with computer architects redefining it to suit what they thought was more realistic. I am going to trust that humankind will do what it has always done – innovate.

I am going to assume, simply, that computers will get faster. I do not think this is unrealistic.

From punched cards

As late as the mid-1980s, programmers were still using punched cards for much of their work. To me, born in 1992 and studying computer science today, the thought of doing this is nigh incomprehensible. The strength of that word, incomprehensible, just goes to illustrate how far we have come in just 30 years (or perhaps that I am stupid, but I’d prefer to think the former).

Assuming progress continues at even a fraction of what it did in the past, the future, even the near future, will look incredibly different to the software developer. He, or she, will have vastly different technologies at his or her disposal, and vastly greater computing resources to work with.

So how might this affect the process of software development?

The architects are making us lazy, but this isn’t bad

To see how the development of computing technology might affect us in the future, let’s look at how it has affected us up until now.

One simple way of putting it is by saying that programmers are getting lazier. No, I don’t mean they spend most of their time doing nothing (well, maybe some do). Rather, that they don’t deal with things that a programmer of the last generation would have had to. We have abstracted away from a lot of the gory details of programming, and this is because of the additional resources we have gained from advances in computer hardware.

For an example of this, contrast programming a Java application with programming for an embedded system. With Java we don’t have to worry about trivial things such as memory allocation. We don’t have to know what our architecture looks like, how many registers it has, whatever. Java will do all that for us. Maybe it won’t do it as cleverly as a person could, but who cares? We have lots of all that hardware stuff; we can waste some.

I actually struggled to think of examples for that last part, and that makes the point better than any example ever could. Programmers just don’t care about the low level any more, except in specific, niche cases. We got lazy, because we could.

Except we didn’t, not really. Our focus has simply shifted. Instead of writing uber-efficient code to run on the state-of-the-art-at-the-time yet actually remarkably rubbish computers of the past, we instead aim to write readable, modular, DRY, <insert more SAPM buzzwords here> code to run on our actually really quite good machines of the present. We care more about other people being able to understand our code than almost anything else; after all, how else could large-scale and long-term projects be completed?

And I think this is the key trend that will affect the future, as well.

So can we be lazier?

Why yes, yes of course we can. And we should be, so long as the hardware can compensate for the increasing levels of inefficiency.

‘How could we be lazier?’ you ask. ‘How can we abstract more? How can we make our code more understandable?’

The simplest way is to write less and less of the code ourselves and let the computer do it for us. This is already happening – compilers perform high degrees of optimisation, and scripting languages allow us to express more and more in fewer and fewer lines of code. Compiler technology will continue to get cleverer: as well as using hardware resources more efficiently, compilers will get better at optimising code, allowing us to be lazier when writing it. I predict that scripting languages will become more and more prevalent, with programming languages becoming higher and higher level, to the point that they resemble natural language more than the programming languages we see today.
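To make the expressiveness point concrete, here is a toy sketch in Python (standing in for any scripting language): a word-frequency count that would take dozens of lines of buffer-juggling in a low-level language fits in a couple of statements.

```python
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the end"

# One high-level statement: tokenise the text and count word frequencies.
counts = Counter(text.split())

print(counts.most_common(1))  # [('the', 3)]
```

The efficiency of the underlying hash table is the library author’s problem, not ours – exactly the kind of laziness the hardware now pays for.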

However, I predict this will be taken a lot further, with IDEs generating code for us in bigger and bigger ways, possibly providing design patterns as off-the-shelf template solutions at the click of a button. Commonly programmed features should also be available at a button press.
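As a sketch of what ‘patterns at the click of a button’ might look like, here is the kind of Observer-pattern scaffold an IDE could stamp out automatically (Python again; the class names are placeholders of my own, not any real tool’s output):

```python
class Subject:
    """Generated Observer-pattern scaffold; the developer fills in domain logic."""

    def __init__(self):
        self._observers = []

    def attach(self, observer):
        # Register an observer to be notified of future events.
        self._observers.append(observer)

    def notify(self, event):
        # Push the event to every registered observer.
        for observer in self._observers:
            observer.update(event)


class LoggingObserver:
    """Example observer: records every event it is told about."""

    def __init__(self):
        self.events = []

    def update(self, event):
        self.events.append(event)


subject = Subject()
listener = LoggingObserver()
subject.attach(listener)
subject.notify("state changed")  # listener.events is now ["state changed"]
```

The boilerplate is identical from project to project, which is precisely why a tool, rather than a person, should be writing it.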

Taking this even further, I think programming itself will become a lot more visual. A picture, or diagram, can be a lot more expressive than text, and is more quickly understood. This is the purpose of UML, after all. Rather than using UML as a sketch or a blueprint, I think we will move on to using it as a programming language. GUIs will be designed in WYSIWYG editors inside IDEs.

A lot of these technologies exist already, but aren’t used because they produce relatively rubbish code, or don’t provide the right functionality. I think their usage shall increase. My argument for this is threefold:

  1. The code these methods produce can be made better. The tools can be improved, be it at application level or compiler level, or somewhere in between.
  2. We will care less that the code is rubbish, because our hardware will be awesome and our pictures will be pretty.
  3. With the tools being more viable, their functionality will be improved, to a point where they are comprehensive and expressive enough to be used.

With these three points coming together, there will be no reason not to use more abstracted means of programming, even ones that seem infeasible or not useful in the present day.

Response Article: “Agile Methodologies in Large-Scale Projects: A Recipe for Disaster”

In the article “Agile Methodologies in Large-Scale Projects: A Recipe for Disaster” the author argues that agile methodologies are not suitable for large-scale projects. I would like to claim the opposite – that agile methodologies are suitable for large-scale projects and, furthermore, that the larger the project the more necessary it is to be agile. I shall echo the author’s approach of going through the points of the agile manifesto, and shall attempt to refute his arguments regarding them.

Responding to Change over Following a Plan

Regarding this point, I agree with much of what the author says. Changes are certainly easier to implement in small scale projects and following a plan is always better than having a lack of direction. However, just because change is harder to implement in a large scale project, this does not mean it is any less necessary. Indeed, in a large scale project changes will often be far more necessary than in a smaller project.

This is largely due to the length of time large projects take to develop. Whilst a small scale project could reach completion in weeks or months, a large scale project will often stretch on for years. This means that the landscape will be very different when the project reaches completion than it was when the project was envisaged; a new competitor could emerge, or a new technology could come to the fore and need to be utilised (think of the advent of touch screens and smartphones). Change is necessary.

The author does acknowledge this, however I think he somewhat underestimates it. He states “…changes to a requirements specification or a request to modify the behaviour of a range of already-implemented features mid-way through a project’s development I find to be less acceptable.” This is unrealistic. It is an accepted fact in software development that requirements gathering is hard. A client is unlikely to know the complete list of requirements at the onset of a project, and details will be lost in communication between client and developer; thus requirements are going to change throughout the project. Perhaps in a small-scale project all the requirements could be captured perfectly and a perfect plan could be made from the outset, but in a large-scale project the chances of this happening are very small, and it is more likely that changes will be required.

The agile manifesto recognises this and prioritises responding to these changes over any original plan made. I believe this is imperative to project success, especially in large scale projects where changes are more likely to be necessary, and to have a greater degree of necessity.

Working Software over Comprehensive Documentation

I think the author misses the point somewhat in relation to this section of the manifesto. Sure, lack of formal documentation could appear unprofessional to clients. Sure, informal means of documentation aren’t the best. But hold on, let’s consider the converse of this point in the manifesto.

Imagine prioritising documentation over working software – you would be setting yourself up for failure! The ultimate goal of any software house is, surely, to produce working software. This being the case, surely said software should be the absolute priority during the development process. Anything else taking priority seems to me to be simply ludicrous.

This isn’t to say that documentation isn’t important – of course it is! Adopting an agile methodology doesn’t mean sacrificing all forms of documentation: documentation will be produced when it is useful for documentation to be produced. This point in the agile manifesto is simply pointing out that the software takes ultimate priority – it is almost poking fun at more traditional forms of development, which produce vast amounts of documentation, not all of which is useful, and often do not produce working software. The quote commonly attributed to Oscar Wilde, “The bureaucracy is expanding to meet the needs of the expanding bureaucracy.”, is illustrative here: traditional methods, the waterfall model for example, often produce a vast amount of documentation for the sake of having documentation. Conversely, with an agile development style, documentation will be produced for the sake of having the software work.

Again, this is even more applicable to large-scale projects. The larger the project, the more documentation will be produced. It is even more imperative that this documentation all be useful, or else anything that is useful will be lost amongst the massive amounts of junk documentation.

PAUSE: Misinterpretations

I believe the author has, to some degree, misinterpreted the agile manifesto.

While the agile manifesto does prioritise the items on the left, it does not forget about the items on the right. I think the author has a misconception that the items on the right are ignored – they aren’t. They simply won’t be done to the detriment of the items on the left.

The author is not alone in his interpretation of the agile manifesto, however. Some developers claiming to be agile often forget about the items on the right, to varying degrees. This is one criticism I have of the agile manifesto – it is quite vague and often misunderstood. That being said, it is my belief that if the items on the right are ignored, then the developers aren’t being agile, they’re just doing it wrong.

Continuing on…

Individuals and Interactions over Processes and Tools

I believe the author misinterprets this point to some degree. He talks a lot about innovation, and how this can hamper projects. I’m not sure I agree with this, but I also think it is beside the point.

Another quote (this one, as far as I know, not attributed to anyone famous): “A fool with a tool is still a fool.” The agile manifesto holds that it is more important to have the right people for the job than it is to have the right tools, and I totally agree with this. The tools are important, sure, but the people have to be able to use them.

Even more important than having the right people, though, is having them talk to each other. Interaction between team members is crucial to the success of any project, be it small or large. It doesn’t even have to be a software development project. Communication is always key. It is absolutely reasonable, in my view, to prioritise this above anything else. Work will get done faster and better with teams of good people communicating well with one another, even if they have to make do with last year’s tools.

Customer Collaboration over Contract Negotiation

In this section the author makes a similar argument to the one he made when arguing for following a plan over responding to change – namely, that change is bad, especially in large-scale projects. I feel I have already argued against this, and have shown that change is necessary, regardless of whether it is good or bad.

With changes being a necessity, it becomes vitally important that we work as closely as possible with the customer, to make sure we are effecting the right changes in the right ways.

The key point here is that a contract should never substitute for communication: a contract will probably be outdated with regard to requirements (especially in the case of large-scale projects), and more communication is rarely a bad thing. The author also talks about developers being exploited when changes are asked for, and suggests that a cast-iron contract would protect them. This is true, to an extent, but it could also doom the project to failure, as change won’t be accommodated.
Additionally, this is not the only way to protect developers. Rather than have one massive cast-iron contract at the beginning of the development cycle, the developers could be protected by a sequence of smaller, iterative contracts which, whilst retaining protection for the developers, also have scope for adapting to change.


I think the author of the article makes some good points for the importance of the items on the right, but he does not convince me that they are more important than the items on the left. I think the author does not go into enough detail and underestimates the import of some of the items on the left.

Furthermore, I think the author’s interpretation of the manifesto is often incorrect, and that this limits his argument. He talks a lot about change in large-scale projects being bad, and while this can often be the case, change is all too often necessary. Agile development practices have been developed to better accommodate the changes that are necessary for project success, where earlier development models were too rigid. The author also relies on obtaining a complete list of requirements early in the development process, which I think is an unreasonable expectation.

I believe the agile manifesto assigns priorities in an entirely reasonable fashion, and that it is absolutely suitable for large scale projects. Further to this, I believe that large scale projects can draw even more from using agile methods than small scale projects can.

Repercussions of the Cloud

Recent years have seen a large rise in the use of cloud-based systems, with the vast majority of companies claiming some form of cloud usage. Cloud computing has many advantages to offer businesses, from cost savings through to the ability to have truly elastic resources to manage demand volatility.

This blog will explore some of the repercussions of Cloud Computing, and the impact they have had/may have on software architecture, process and management.

What is the Cloud?

Cloud computing refers to both applications delivered as services over the Internet, and the hardware and systems software in the data centres that provide those services [1]. There are several cloud computing service models, including SaaS, PaaS and IaaS.

SaaS, Software as a Service, provides access to application software. Examples of this include Google Apps and Microsoft Office 365.

PaaS, Platform as a Service, provides a complete computing platform upon which you can run your own software. A typical platform may include things such as an operating system, programming language execution environment, database, web server etc. Examples of this include the Amazon Web Services Elastic Beanstalk, and the Google App Engine.

Finally, IaaS, Infrastructure as a Service, provides fundamental computing resources, such as machines (physical, but most often virtual), storage and servers, for use by the user. Examples include Windows Azure and Rackspace.

Why use it?

Cloud computing can offer several key advantages over more traditional approaches to computing, some of which include:

-Scalability: Cloud infrastructures can give the illusion of having infinite resources, which means you can make REALLY big stuff. [1] gives a good example of this: organizations that perform batch analytics can use the “cost associativity” of cloud computing to finish computations faster: using 1,000 EC2 machines for one hour costs the same as using one machine for 1,000 hours.

-Elasticity: This means that available resources can be increased or decreased elastically to respond to demand. This can be a HUGE advantage, because it means you don’t need to know how big your stuff is going to be beforehand!

Another example from [1] to illustrate this: when Animoto made its service available via Facebook, it experienced a demand surge that resulted in growing from 50 servers to 3,500 servers in three days. The elasticity provided by using a cloud based infrastructure was key to the service being able to function, as without the allocation of extra resources the service would have been unavailable. Additionally, when demand began to drop again after the initial surge, resources could be dropped such that the expenditure on the system more closely matched the workload.

-Accessibility: You can access your stuff anywhere you have an internet connection, which is most places, nowadays.

-Cost: Cloud computing can be a lot cheaper than whatever you were doing before. This is in part due to elasticity and being able to allocate/de-allocate resources, and in part due to the fact it costs big vendors, such as Amazon or Google, less money per resource than it would cost you.
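The “cost associativity” point from [1] above is easy to check with back-of-the-envelope arithmetic (the price below is invented for illustration, not a real EC2 rate):

```python
# Hypothetical on-demand price, in cents per machine-hour (illustrative only).
PRICE_CENTS_PER_MACHINE_HOUR = 10

cost_serial = 1 * 1000 * PRICE_CENTS_PER_MACHINE_HOUR    # one machine for 1,000 hours
cost_parallel = 1000 * 1 * PRICE_CENTS_PER_MACHINE_HOUR  # 1,000 machines for one hour

# Same bill either way, but the parallel run delivers its answer 1,000x sooner.
assert cost_serial == cost_parallel
```

Under pure pay-per-use pricing, parallelism is effectively free; only a cloud can offer that, since nobody buys 1,000 physical machines to use them for an hour.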
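The Animoto-style surge described above can also be sketched as a toy scaling rule of the kind elasticity makes possible; the target utilisation and the arithmetic here are my own invention for illustration, not any real provider’s policy:

```python
def scale(servers, load_percent, target_percent=60, min_servers=1):
    """Toy elasticity rule: size the fleet so average utilisation nears a target.

    All numbers are illustrative; real autoscaling policies are more elaborate.
    """
    # Integer ceiling division (-(-a // b)) avoids floating-point edge cases.
    desired = -(-servers * load_percent // target_percent)
    return max(min_servers, desired)

surge = scale(50, 90)  # demand surge: 50 servers at 90% load -> scale out to 75
quiet = scale(75, 20)  # surge passes: 75 servers at 20% load -> scale in to 25
```

The point is not the exact formula but that the decision can be automated and reversed: resources grow with the surge and shrink again afterwards, so expenditure tracks the workload.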

How does this impact SAPM?

With cloud computing being used so prolifically, it has affected software architecture, processes and management significantly, and will continue to affect them into the future.

Project Management

Project management is something that has been affected by the rise of cloud computing. Many applications increasingly call for forms of agile development, and cloud-based software can benefit from this to an even greater degree than normal. Software requires continuous updating: bugs need to be fixed, security holes need to be patched, and new features need to be added. In a cloud setting this can be done much more easily, as rather than have each user download an update or patch, the developer simply updates the software hosted on the cloud, and the next time a user loads it they get the updated version.

Let’s use Facebook as an example. Whenever you load Facebook, you are likely to load a different version from the last time you used it, as it is constantly under development. New features are added very frequently (sometimes to the extreme annoyance of users). Interestingly, the version of Facebook your browser loads is not guaranteed to be the very latest version.

Flipped around, you could say that agile development is essential to cloud-based systems: if you have the ability to respond to user demands this quickly but don’t use it when a competitor does, you will instantly be at a competitive disadvantage. This was less the case when software came packaged, as the user had already bought the software; under a pay-per-use model, however, a developer must keep their product competitive at all times.

Capital Investment

One limiting factor in the past for many projects was that they needed a large up-front investment to get them off the ground, whether this be due to having to physically package and ship their product, or having to plan for worst case loads on a web based system.

With the ability to provide software as a service, the up-front investment in shipping and packaging products has vastly diminished, enabling such projects to go ahead. Projects which previously had to plan for worst-case load scenarios can now take advantage of the elasticity offered by cloud vendors to offer their services without having to spend money on resources which will be under-utilised.

Challenges to the Software Engineer

Software engineers now face many new challenges when working on cloud-based projects. Firstly, it is typical for cloud-based software to be accessed through a web browser. This in itself poses a challenge to developers, as it reduces the number of tools at their disposal. [2] explains this well: ‘The traditional window and menu layer of modern operating systems has been fine-tuned over decades to meet user needs and expectation. Duplicating this functionality inside a Web browser is a considerable feat. Moreover it has to be done in a comparatively impoverished development environment. A programmer creating a desktop application for Windows or one of the Unix variants can choose from a broad array of programming languages, code libraries, and application frameworks; major parts of the user interface can be assembled from pre-built components. The equivalent scaffolding for the Web computing platform is much more primitive.’ This is an ongoing challenge for developers.

Additionally, developers now need to build their software to deal with the infrastructures used by cloud vendors. This involves handling failures in hardware, and building software that works well in a distributed environment. This has led to the rise of new architectural patterns, such as Google’s MapReduce programming model.
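The map-reduce idea itself can be sketched in a few lines of Python. This single-process toy shows the shape of the model; the real framework’s contribution is distributing the map and reduce phases across many machines and transparently handling their failures:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    # Shuffle + reduce: group the pairs by key and sum the counts per word.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

docs = ["cloud systems scale", "cloud systems fail", "systems recover"]
word_counts = reduce_phase(map_phase(docs))
# word_counts: {'cloud': 2, 'systems': 3, 'scale': 1, 'fail': 1, 'recover': 1}
```

Because the map calls are independent and the reduction is per-key, both phases parallelise naturally, which is exactly the property that makes the pattern suit cloud infrastructure.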


In conclusion, cloud computing has had, and will continue to have, a big influence on the area of software architecture, processes and management.

[1] A View Of Cloud Computing. Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee, David Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia. 2010. Communications of the ACM 53, 4 (April 2010), 50-58.

[2] Cloud computing. Brian Hayes. 2008. Communications of the ACM 51, 7 (July 2008), 9-11.