In his novel 1984, George Orwell wrote about the dangers of absolute political authority in an age of advanced technology. In a somewhat less politically charged piece, I aim to look at how advances in computing technology might influence the future of software development.
I will argue that a key trend in software development is a shift from writing efficient code to writing understandable code, and that this has been made possible by increases in hardware resources. Extrapolating to the future, I argue that growing hardware resources will lead to further abstraction, and that we could see the use of programming languages as we know them today decline, with more visual methods of programming becoming prevalent.
Moore’s Law
Moore’s Law tells us that the number of transistors on integrated circuits doubles approximately every two years.
Or it used to.
For the purposes of this blog I’m going to conveniently ignore the fact that we are reaching the limits of what we can do with silicon. I’m going to ignore that Moore’s Law is slowing down and that some people think its end is nigh. Arguably it’s been dead for a while anyway, with computer architects redefining it to suit what they thought was more realistic. I am going to trust that humankind will do what it has always done – innovate.
I am going to assume, simply, that computers will get faster. I do not think this is unrealistic.
From punched cards
As late as the mid-1980s, programmers were still using punched cards for much of their work. To me, born in 1992 and studying computer science today, the thought of doing this is nigh incomprehensible. The strength of that word, incomprehensible, goes to illustrate how far we have come in just 30 years (or perhaps that I am stupid, but I’d prefer to think the former).
Assuming progress continues at even a fraction of its past rate, the future, even the near future, will look incredibly different to the software developer, who will have vastly different technologies at their disposal and vastly greater computing resources to work with.
So how might this affect the process of software development?
The architects are making us lazy, but this isn’t bad
To see how the development of computing technology might affect us in the future, let’s look at how it has affected us up until now.
One simple way of putting it is that programmers are getting lazier. No, I don’t mean they spend most of their time doing nothing (well, maybe some do). Rather, they no longer deal with things that a programmer of the previous generation would have had to. We have abstracted away a lot of the gory details of programming, and this is because of the additional resources we have gained from advances in computer hardware.
For an example of this, contrast programming a Java application with programming for an embedded system. With Java we don’t have to worry about trivial things such as memory allocation. We don’t have to know what our architecture looks like, how many registers it has, whatever. Java will do all that for us. Maybe it won’t do it as cleverly as a person could, but who cares? We have plenty of all that hardware stuff; we can afford to waste some.
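To make that concrete, here is a deliberately wasteful toy sketch of the Java side (the class name and buffer sizes are invented purely for illustration):

```java
public class Wasteful {
    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            // Allocate a 1 MB scratch buffer, poke it once, and simply drop it.
            // On an embedded system we would have to decide where this memory
            // lives and when to free it; here the garbage collector quietly
            // tidies up behind us.
            int[] scratch = new int[256 * 1024];
            scratch[0] = i;
        }
        System.out.println("Never once thought about malloc, free or registers.");
    }
}
```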
I actually struggled to think of low-level concerns to list in that last paragraph, and that struggle makes the point better than any example ever could. Programmers just don’t care about the low level any more, except in specific, niche cases. We got lazy, because we could.
Except we didn’t, not really. Our focus has simply shifted. Instead of writing über-efficient code to run on the totally-state-of-the-art-at-the-time yet actually remarkably rubbish computers of the past, we instead aim to write readable, modular, DRY, <insert more SAPM buzzwords here> code to run on the really quite good machines of the present. We care more about other people being able to understand our code than almost anything else; after all, how else could large-scale and long-term projects be completed?
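As a toy illustration of that shift in priorities (the method and numbers are made up for the example), compare the terse, clever version a cycle-starved programmer of the past might have written with the version we would rather read and maintain today; the JIT compiler is perfectly capable of applying such tricks itself where they are safe.

```java
public class Readability {

    // The clever, opaque version: squeeze out a few cycles with bit-shifts.
    static int scaledMidpointOld(int a, int b) {
        return ((a + b) >> 1) << 2;
    }

    // The readable version: say what we mean and let the compiler worry
    // about making it fast.
    static int scaledMidpoint(int a, int b) {
        int midpoint = (a + b) / 2;
        return midpoint * 4;
    }

    public static void main(String[] args) {
        System.out.println(scaledMidpointOld(6, 10)); // 32
        System.out.println(scaledMidpoint(6, 10));    // 32
    }
}
```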
And I think this is the key trend that will affect the future, as well.
So can we be lazier?
Why yes, yes of course we can. And we should be, so long as the hardware can compensate for the increasing levels of inefficiency.
‘How could we be lazier?’ you ask. ‘How can we abstract more?’ ‘How can we make our code more understandable?’
The simplest way is to write less and less of the code ourselves and let the computer do it for us. This is already happening – compilers perform high degrees of optimisation, and scripting languages allow us to express more and more in fewer and fewer lines of code. Compiler technology will continue to get cleverer: as well as making better use of hardware resources, compilers will get better at optimising code, allowing us to be lazier when writing it. I predict that scripting languages will become more and more prevalent, with programming languages becoming higher and higher level, to the point where they resemble natural language more than the programming languages we see today.
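As a rough sketch of the same idea within a single language (sticking with Java rather than a scripting language; the names here are invented for the example), compare spelling out a loop step by step with the more declarative, almost sentence-like version a modern library allows:

```java
import java.util.List;

public class HigherLevel {
    public static void main(String[] args) {
        List<String> names = List.of("Ada", "Alan", "Grace", "Donald");

        // The low-level way: spell out every step of the iteration ourselves.
        int count = 0;
        for (String name : names) {
            if (name.startsWith("A")) {
                count++;
            }
        }
        System.out.println(count);

        // The lazier, more declarative way: say *what* we want, not *how*,
        // and let the library (and ultimately the compiler/JIT) do the rest.
        long lazyCount = names.stream()
                .filter(name -> name.startsWith("A"))
                .count();
        System.out.println(lazyCount);
    }
}
```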
However, I predict this will be taken a lot further, with IDEs generating code for us in bigger and bigger ways, possibly providing design patterns as off-the-shelf template solutions at the click of a button. Commonly programmed features should also be available at a button press.
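For a flavour of what I mean, this is roughly the sort of boilerplate a ‘Singleton’ template button could stamp out today (ConfigurationStore is just a hypothetical name; the point is that nobody would need to type it):

```java
// The kind of code a 'Singleton' pattern template might generate for us.
public class ConfigurationStore {

    private static final ConfigurationStore INSTANCE = new ConfigurationStore();

    private ConfigurationStore() {
        // Private constructor: the template enforces a single instance.
    }

    public static ConfigurationStore getInstance() {
        return INSTANCE;
    }
}
```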
Taking this even further, I think programming itself will become a lot more visual. A picture or diagram can be a lot more expressive than text, and is more quickly understood. This is the purpose of UML, after all. Rather than using UML as a sketch or a blueprint, I think we will move on to using it as a programming language. GUIs will be designed in WYSIWYG editors inside IDEs.
A lot of these technologies exist already, but aren’t used because they produce relatively rubbish code, or don’t provide the right functionality. I think their usage will increase. My argument for this is threefold:
- The code these methods produce can be made better. The tools can be improved, be it at application level or compiler level, or somewhere in between.
- We will care less that the code is rubbish, because our hardware will be awesome and our pictures will be pretty.
- With the tools being more viable, their functionality will be improved, to a point where they are comprehensive and expressive enough to be used.
With these three points coming together, there will be no reason not to use more abstracted means of programming, even ones that seem infeasible or not useful in the present day.