Legacy code: revisiting that OpenGL API

In this blog post we will concern ourselves with one of those areas of large-scale and long-term project development that no one really wants to talk about: the legacy code base. There’s a good reason why this topic is generally avoided, much in the same way that open source developers avoid testing: it’s not as shiny, interesting, new, or awesome as the cool features that could be added instead.

Disclaimer: I’ll be focusing on updating your old OpenGL code and the benefits that can be drawn from that. A broad scope, such as “Legacy code should be worked on!”, can be very interesting, but it’s harder to actually make a point when generalizing, so it’s all OpenGL from here on in. Furthermore, a large amount of this post is derived from personal experience. My opinions come from looking at and improving the legacy code that I have personally encountered, and as I haven’t seen every single code base that may contain legacy code, your mileage may vary.

What is legacy code?

The term ‘legacy’ has been used for all sorts of scenarios where some code is old or outdated. Even amongst programmers, a common definition is hard to find [13][14][15][16]. Therefore, when I talk about legacy code, I mean code that is still in the system, won’t be removed in the foreseeable future, serves an important role (such as backwards compatibility), and that no one can be bothered to look into because there are more interesting things to be doing. As an aside, I understand deprecated to mean that a mistake was made, there is a newer and shinier way of doing things, and the old way will eventually be removed.

OpenGL and legacy code

Pretty much every code base in a long-lasting system will have some legacy code. It might be due to backwards compatibility constraints that an operating system must honour. It might be due to the unwillingness of coders to look into an area of code that “just works” and has been working for as long as everybody can remember. I will be focusing on the latter because of the role that OpenGL often plays there.

OpenGL (short for Open Graphics Library) is an API for rendering 2D and 3D graphics, often with the use of hardware acceleration [1]. For a long time it has been considered the competitor [2] to Direct3D [5] (the 3D part of the DirectX APIs). However, with the decline of computers running Windows (the only platform on which you can use Direct3D) and the increase of users on Unix-based systems that all support OpenGL (or a subset of it, in the case of the mobile world), OpenGL is the go-to solution for almost everything requiring 3-dimensional graphics that isn’t gaming. More interestingly for us, it is also an API that has existed since 1992 [4] and has thus seen significant changes in the way programmers should use it. This is exactly what makes it interesting when applied to legacy code bases: the old and outdated way of doing things has been superseded by a cleaner and faster solution. Yet because no one looks at this code, precisely because it’s old and it just works, programmers, users, and the companies themselves are missing out on experiences and performance gains that could be achieved with just a little bit of plumbing.

Performance difference

A great example of the kind of performance difference that is achievable is the difference between immediate-mode OpenGL and vertex buffer objects (VBOs).

Immediate mode can easily be identified by the intuitive nature of the code. It is a procedural, step-by-step way of drawing objects in a world. Before you start drawing, you call the glBegin() function with the parameter GL_TRIANGLES (or whatever it is you want to be drawing). To stop drawing objects, you call the glEnd() function. In between is where you actually specify the objects (triangles in our case). So if you have 10,000 triangles, there will be 10,000 calls to a function that draws a triangle.
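As a minimal sketch of what this looks like in practice (assuming the usual OpenGL headers and an active context; the tri array of triangle data is hypothetical, purely for illustration):

    // Hypothetical triangle data for illustration: 10,000 triangles.
    struct Triangle { GLfloat a[3], b[3], c[3]; };
    extern Triangle tri[10000];   // assumed to be filled in elsewhere

    // Immediate mode: every vertex is a separate CPU-side function call.
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < 10000; ++i) {
        glVertex3fv(tri[i].a);
        glVertex3fv(tri[i].b);
        glVertex3fv(tri[i].c);
    }
    glEnd();

That loop alone is over 30,000 function calls on the CPU, every single frame.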

The problem with immediate mode is that the program is directly bounded by the CPU. As these individual triangle draw calls are instructions executed on the CPU before being sent to the GPU, your CPU automatically becomes the bottleneck. What makes this particularly bad is that GPUs are really good at drawing stuff. The reason most computers contain a dedicated piece of hardware just for drawing pixels on a screen is that the very nature of graphics is not suited to the SISD [9] (Single-Instruction Single-Data) architecture of a processor. Limiting the number of items you can draw to what your processor can feed through, when you have a graphics card that can handle significantly more, is a waste, especially when you could be using the processor’s resources for something else. As a point of comparison on floating-point operations per second (FLOP/s) [11]: Nvidia’s GTX 280 provides around 0.9 TFLOP/s while Intel’s Core 2 Quad delivers only around 0.1 TFLOP/s [12].

Since 2003 and the release of OpenGL 1.5 [4], there has been a new way of doing things in OpenGL-land. That’s not to say that immediate mode was deprecated yet, but the concept of buffer objects was introduced for graphics programmers. Buffer objects are blocks of memory stored on the graphics card that can be used for draw calls. A simple example, and a common use case, is to store all of the vertex data in the buffer object. Creating the buffer object and binding it means that afterwards you can draw all of the triangles contained in it with a single CPU draw call. The aforementioned example drew 10,000 triangles using 10,000 draw calls from the CPU; a buffer object can draw those 10,000 triangles with one. Ultimately, this is analogous to the CPU saying “draw all those triangles I told you about” instead of “draw this triangle, and this triangle, …”, which allows the GPU to do its work while the CPU continues with something else.
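A hedged sketch of the same 10,000 triangles done with a VBO, in OpenGL 1.5-era style (the vertices array is again hypothetical):

    // Setup, done once: 10,000 triangles = 30,000 vertices, 3 floats each.
    GLfloat vertices[30000 * 3] = { /* assumed filled in elsewhere */ };
    GLuint vbo;
    glGenBuffers(1, &vbo);                        // create the buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Per frame: a single CPU call draws every triangle stored on the GPU.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void*)0);    // data comes from the bound VBO
    glDrawArrays(GL_TRIANGLES, 0, 30000);
    glDisableClientState(GL_VERTEX_ARRAY);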

Then, since 2008 and the release of OpenGL 3.0 [4], immediate mode, among other things, has been deprecated. However, I believe that due to the age of the API and the number of applications out there using OpenGL, the OpenGL ARB [3] (Architecture Review Board) cannot actually remove the functionality without a huge uproar from the industry: there are applications whose very core rendering path still consists of immediate-mode OpenGL sitting in the legacy code base.

Do APIs always change drastically?

OpenGL is a special case. Most APIs just use typical semantic versioning (major.minor.patch) [10] to signify changes in the software. If functions have been superseded, then you mark them as deprecated and give all API users a warning that this functionality will be removed in two or so years’ time. But due to the longevity of the API, and the number of long-term applications that rely on it for 3D rendering, it becomes harder and harder to remove functionality. A similar case can be seen in the Windows API, which had several functions [7] aimed specifically at making life easier for applications moving from 16-bit to 32-bit, such as long pointers [8]. Even though that’s ancient history now, there are still redundant functions, and parameters within functions, left over from that 16-bit conversion. Similar cases exist with the x86 instruction set architecture and its policy of backwards compatibility [6]. Longevity and APIs seem to not go well together.

Cost of rework is too great

But back to the point of legacy code. Why don’t companies and programmers invest more resources in making sure that the legacy code base is up to date? I think in many cases it is simply a matter of “why bother changing anything that currently works just fine?” Why waste resources maintaining something that already works when those resources could be spent on new features that will make more people buy the software, thus generating more money? However, this simplistic mindset is exactly what keeps people from seeing what is wrong with an application. A simple example can be illustrated with graphics as the domain: if application A draws something at a low frame rate and people just take it as a given that this is the way things are, then only when application B comes along and shows that simple changes can increase the frame rate dramatically does application A realize there was room for improvement, and in the process it has lost all of its customers to application B. In the general case of legacy code you may have no idea which part of the code can be improved so that this doesn’t happen; but with OpenGL, it is known that immediate mode is significantly slower, yet legacy code bases still use it. Is it the case that in a large-scale project, it is just too much work to change the legacy code?

Personal experience

I have very strong feelings about removing the old OpenGL in legacy code bases because of personal experience. I was working on an application where no one had looked at a portion of the code in a long time because “it works just fine and screwing it up may have major consequences”. After spending quite some time looking through the code, I found that all of the places that actually execute draw calls were using immediate-mode OpenGL. Within months I had added simple graphics features, the likes of which had not been seen before in the application, all because I had removed the immediate-mode OpenGL that had made those features impossible in the first place. This is the simple scenario of prototyping in an isolated space to find out whether a feature is meaningful before adding it. Of course, prototyping and changing the legacy code base so that these features can be added are very different things, but at least prototyping will show how valuable the features are and whether reworking the legacy code is worth it. But again, that is the general case. For OpenGL, throw out that old immediate mode for instant performance gains!

Conclusion

The problem with longevity in computer software is that the software needs to keep up with the rapidly evolving hardware underneath it. Moore’s Law makes it possible to implement a feature in a way that would have been really slow two years prior. An API has to evolve with this change, which means that a certain way of doing something may not be the right way of doing that same thing in the future. Legacy code has the problem that it is often untouched because it currently works just fine. However, because no one is modifying it and keeping it future-proof, it can very quickly become out of date, using API features that are no longer relevant. Personally, I found that throwing out the old OpenGL and using current techniques not only gave a huge boost in performance, but also allowed cool features to be added very easily. I’m not trying to blow my own trumpet here, but rather to explain how simple it is to add new and meaningful features by digging around in the legacy code base and removing all of the immediate-mode OpenGL.

References:

  1. OpenGL – http://www.opengl.org/

  2. OpenGL – http://en.wikipedia.org/wiki/OpenGL

  3. OpenGL ARB – http://www.opengl.org/archives/about/arb/

  4. OpenGL History – http://www.opengl.org/wiki/History_of_OpenGL

  5. Direct3D – http://en.wikipedia.org/wiki/Direct3D

  6. x86 – http://en.wikipedia.org/wiki/X86

  7. Win32 API – http://msdn.microsoft.com/en-us/library/ff818516(v=vs.85).aspx

  8. Long pointers (Win32) – http://msdn.microsoft.com/en-us/library/windows/desktop/ff381404(v=vs.85).aspx

  9. Flynn’s taxonomy – http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5009071

  10. Semver – http://semver.org/

  11. FLOP/s – http://en.wikipedia.org/wiki/FLOPS

  12. CPU & GPU speeds – http://dl.acm.org/citation.cfm?id=1555775

  13. Legacy code – http://en.wikipedia.org/wiki/Legacy_code

  14. Legacy code – http://stackoverflow.com/questions/4174867/what-is-the-definition-of-legacy-code

  15. Legacy code – http://stackoverflow.com/questions/479596/what-makes-code-legacy

  16. Legacy code – http://programmers.stackexchange.com/questions/94007/when-is-code-legacy

“Design Patterns from a Junior Developer perspective” response article

This entry is a response to the “Design Patterns from a Junior Developer perspective” blog post written by s0954168 [1].

In the article, the author explains how junior programmers may have difficulty understanding and using design patterns correctly. Additionally, significant portions of the article are backed by the author’s own experience and the attempts made to learn and apply these common techniques that help programmers solve a given problem.

Introduction

Overall, I agree with the vast majority of things said in this article. Coming from a similar background, I can appreciate the problems that inexperience brings when trying to fully understand the benefits of a particular design pattern. However, as is implied by the use of the word “Junior” in the title, I feel that the article has not focused on the major obstacle to truly appreciating these patterns: experience (or the lack thereof).

Origin:

The author talks about the origin of design patterns by mentioning “countless systems implemented in the past” and “somebody has already solved this problem for you”. While this makes perfect sense as a logical explanation, it misses the finer detail. When a clever solution to an existing problem has been used and can be abstracted out, it can be reused, both to discourage other people from making the same mistakes and, more importantly, to establish a common understanding of the solution. So the formal creation of a pattern comes after someone has already used it and determined that it is applicable, in a general sense, to a particular problem. The more people working within a particular frame of reference, the more such solutions may be created. Thus, the popularity of a programming paradigm can influence the number of design patterns available.

What I am trying to say is that design patterns aren’t a completed set of ways to solve problems. More patterns will be created, especially if other programming paradigms become more popular, and existing patterns may be tailored to new problems. The danger for a junior developer is the ignorance exposed by assuming that every problem can be solved with an existing technique.

A related problem is that, because of this perceived completeness, inexperienced programmers may try to fit their problem to a pattern, which is the exact opposite of a design pattern’s intended purpose.

I know I am guilty of this. I think most beginner programmers starting out in larger projects that could benefit from design patterns are guilty of this too. I love this quote from an answer on Programmers Stack Exchange [2], which sums up both the author’s opinion and my own: “Novice programmers don’t use design patterns. They abuse design patterns.”

Design patterns example

In the article, the author gives an example of a single design pattern: the Singleton. For readers to completely appreciate what the author is saying about the complexity of knowing where to use a given pattern, I believe an additional pattern would have been a welcome addition. The Singleton is the simplest one, and adding others like Dependency Injection [5] or the Bridge pattern [6] may have helped cement the case about the difficulty of knowing when and where to use them.
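To show just how simple that baseline is, here is a minimal C++ Singleton sketch (the Logger class is my own hypothetical illustration, not anything from the original article); it guarantees one shared instance, which is also exactly why it is so easily abused as a global variable in disguise:

    #include <iostream>

    class Logger {
    public:
        // The single, lazily created instance (thread-safe since C++11).
        static Logger& instance() {
            static Logger theOne;
            return theOne;
        }
        void log(const char* msg) { std::cout << msg << '\n'; }
    private:
        Logger() = default;                        // no outside construction
        Logger(const Logger&) = delete;            // no copies
        Logger& operator=(const Logger&) = delete;
    };

    // Usage: every part of the code reaches the same instance.
    // Logger::instance().log("saving file");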


When to use design patterns?

In this section, the author talks about knowing which pattern to use, as well as knowing when not to use a pattern. As the author mentions, using patterns in every possible situation may lead to increased code and architectural complexity: the very thing that using a pattern is meant to prevent. Somewhat contradictorily, the author mentions that refactoring is a “perfect example” of when you might be able to insert a pattern. I say contradictorily because when refactoring, the goal is generally to clean up a particular solution, and adding a pattern may needlessly introduce more complexity. However, this may be a semantic issue; I classify a solution that needs its entire logic altered (which is what introducing a design pattern may end up doing) as rewriting, as opposed to refactoring. Nevertheless, adding patterns when the intent is to tidy up a solution seems contradictory, and it is a point where I disagree with the author.

Experience:

The one thing that I feel the author has failed to mention, and that I’ve alluded to earlier, is that experience with these patterns and with solving architectural problems is the key to understanding when to use a design pattern. Of course, the title and the article are aimed at junior developers and therefore less experienced individuals. However, the case is never made explicitly that experience matters, and so it may be hard for a reader to see exactly why a junior developer may have more trouble appreciating the use and abuse of design patterns. Peter Norvig [4] claims in his blog post that, just like in any other discipline, 10,000 hours of work are required before you truly know the domain. A derived conclusion is that a programmer needs to spend a similar amount of time on general architecture before being able to recognize the applicability of a design pattern to a given problem.

Conclusion:

Even though the blog post is about how junior developers can make use of design patterns, the article could benefit from talking in greater detail about how design patterns help solve a particular problem. Understanding exactly how a particular design pattern applies to a problem would not only allow a reader to see the difficulty in choosing a pattern, but also why an inexperienced programmer may abuse them. All in all, I agree with the author on grounds of personal experience (I also attempted to use a design pattern for EVERY problem I encountered) and would like to commend the author for writing a post on a topic that every novice programmer should have at the forefront of their thoughts when starting out in large-scale project development.

References:

  1. https://blog.inf.ed.ac.uk/sapm/2014/02/14/design-patterns-from-a-junior-developer-perspective/

  2. http://programmers.stackexchange.com/questions/141854/design-patterns-do-you-use-them

  3. http://stackoverflow.com/questions/978489/how-important-are-design-patterns-really

  4. http://norvig.com/21-days.html

  5. http://en.wikipedia.org/wiki/Dependency_injection

  6. http://en.wikipedia.org/wiki/Bridge_pattern

Functional programming in large-scale project development

If you keep up with blog posts from other programmers out there, you’ll probably be thinking: “Oh no, not another post from some functional-programming fanboy”. And you’re probably right! Many people have been writing about functional programming recently. But this just highlights the general opinion that there is actually something useful to be taken from this age-old paradigm that has never really taken off in the industry (check out this xkcd comic [1]).

Instead of talking about why everyone should be using functional programming because it’s so awesome, I’ve split this post into sections where I highlight certain parts of large project development where functional programming may be of use, and possibly some cases where it’s BETTER THAN EVERYTHING. I’m also not going to mention specific cases where bank A used functional language B, nor make comparisons between functional languages, the strengths of lazy evaluation, or different typing disciplines. I’ll stay at a high level (that of a manager, so to speak) and focus on factors influencing large-scale development.

Small aside: I do favour the pure functional language Haskell in this blog. Firstly, it was one of my first true exposures to programming, and of course anyone who went to the University of Edinburgh as an undergraduate will forever remember Haskell as the language that Philip Wadler was so incredibly enthusiastic about, making use of the memorable “stupid computer” voice to explain how the processor goes through a computation. Secondly, it’s a purely functional language, meaning that you are not at all exposed to the possibility of doing things in a different way, as you would be in a multi-paradigm language. You have to stick to doing things in a functional manner, and there is no chance to fall back to an OOP way of doing things where you might be tempted by “it’s just simpler to understand”.

Code size:

A lot of programming is about finding suitable data structures to hold the information that you need to process in the program. In an ancient interview with Bill Gates [8], he mentions: “The most important part of writing a program is designing the data structures”. Functional programming takes a different approach: use only a few highly optimized data structures, such as maps, sets, and lists, and plug customizable higher-order functions into those structures as you see fit. Using these structures means that their methods are well defined and well known. This not only reduces code size (as you are not writing bespoke methods to manipulate your own unique data structure), but also makes the code easy to read. Reading someone else’s code, you instantly recognize what a certain method does, because everywhere you look you see the same data structures. This removes the need to understand a data structure in its entirety before diving into the code logic, which is automatically a positive.
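The same philosophy is visible even outside Haskell. As a small sketch in C++14 terms (the word counts are made up for illustration), one well-known structure plus a generic algorithm and a custom lambda replaces a bespoke class with its own traversal methods:

    #include <algorithm>
    #include <map>
    #include <string>

    int main() {
        // A standard, well-understood structure instead of a custom one.
        std::map<std::string, int> counts = {{"legacy", 3}, {"code", 5}, {"vbo", 2}};

        // A reusable higher-order algorithm, customized by a lambda.
        auto top = std::max_element(counts.begin(), counts.end(),
            [](const auto& a, const auto& b) { return a.second < b.second; });
        // top->first == "code", top->second == 5
    }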

There’s also the notion of Haskell reducing your code size significantly (by as much as 90%, by some accounts). Whether this is an exaggeration by a factor of two or by an order of magnitude, smaller code size equates to less to read and therefore less to understand. Of course, the complexity of the code is related to the time taken to understand it, and I do agree that for us mere mortals, understanding things in a procedural manner (this happens, something changes, now test for equality) is simpler than trying to understand things in a mathematical way. But that’s another topic. One thing that’s generally accepted is that Haskell can reduce your code size dramatically.

Parallelizing:

The use of functional languages to effectively make programs scale to multiple cores is probably the main reason why Haskell and other functional languages have garnered so much attention in recent years. Functional languages work on immutable data structures, and with the shift, forced by the power wall, from increasing a single core’s clock speed to adding more cores, a language that can automatically expose code to parallelization is a huge bonus. Functions work on data structures that cannot alter their state after creation, which removes the need for processors to keep one another informed of state changes. As mentioned in this blog post [16], Clojure is a great example where your code is parallelized without the programmer specifying anything or worrying about locks and the other “nasties” that arise in parallel programming. Libraries have rewritten the map function so that it is automatically parallelized; every time a programmer uses map, they benefit from parallelization.
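Clojure is not the only place this shows up. As a rough C++17 analogue (assuming a standard library and toolchain that implement the parallel execution policies), the same “parallel map” idea looks like this:

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main() {
        std::vector<double> xs(1000000, 1.5);
        std::vector<double> ys(xs.size());

        // Same 'map' shape as the sequential call; the execution policy lets
        // the library spread the work across cores, which is safe because each
        // element is transformed independently of the others.
        std::transform(std::execution::par, xs.begin(), xs.end(), ys.begin(),
                       [](double x) { return x * x; });
    }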

Code reuse & legacy code:

John Carmack, ex-technical director at id Software, makes a great case [9] with a real-world example where a functional style of programming made maintaining code significantly easier. Some parts of a build system relied heavily on state changes, with callbacks everywhere that altered the system’s state, and this caused pain on an almost weekly basis; another piece of code he found was written in a purely functional way and caused no such trouble. This stateless paradigm may be more complex to design for, but when writing maintainable and future-proof code, the simple idea of input being transformed to some output regardless of system state, where you do not need to care what is happening outside the transformation stage, reduces the number of places where errors and bugs can appear.

For languages like Java or C#, the level of reuse is at the framework level. As mentioned before, this requires a deep understanding of the data structures and how the API was structured if you really want to know what’s going on underneath. Functional languages have this reuse at a more granular level, using the fundamental data structures and higher-order functions to customize how you use the structures. If you’re using something that was implemented in a functional manner, you’d probably be correct in assuming that certain data structures are being used.

Structuring code at an architecture level:

So the first question most people ask when they think of functional programming is: “how am I going to structure my code at a high level?” In reality, regardless of what language or paradigm you are using, the high level involves the same things: choosing classes/modules, deciding who calls what, splitting a big task into manageable chunks that can be represented with pseudo-code, and so on. Only when you get down to the low level of actually writing the classes or modules do you get a real chance to think in a different manner (OOP vs FP). Architectural patterns like MVC, client-server, and multi-tier architecture exist in the same sense in functional languages, as these patterns are less concerned with how individual snippets of code work and more with how the components of a system should be structured. Of course, people are familiar with doing it the OOP way, but tradition should never be used to reason your way out of change (otherwise we’d still be burning witches).

Anyway, Haskell uses modules where you choose which functions to export, much like private and public methods in classes (Java, C#, C++) or header files (C, C++). Plus, OOP principles for clean code, like the single responsibility of a method, are easily achievable by structuring modules and functions so that they only focus on one thing, rather like high cohesion in individual classes.
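That header-file analogy can be made concrete. In this hypothetical C++ sketch (file and function names are my own), the header plays the role of a Haskell module’s export list, while the static helper stays private to the module:

    // geometry.h -- the "export list": only these functions are visible to clients.
    int area(int width, int height);
    int perimeter(int width, int height);

    // geometry.cpp -- the module body.
    #include "geometry.h"

    // Internal helper, file-local, like a function left out of a module's exports.
    static int clampPositive(int x) { return x < 0 ? 0 : x; }

    int area(int width, int height) {
        return clampPositive(width) * clampPositive(height);
    }
    int perimeter(int width, int height) {
        return 2 * (clampPositive(width) + clampPositive(height));
    }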

OOP languages shifting towards functional paradigms:

A lot of OOP languages have begun to incorporate functional programming concepts. Both Java 8 and C++11 have lambda expressions, and C# 3.0 introduced the LINQ component. This is a sign that even in the OOP world there are enough cases where functional programming can be used to simplify and produce more readable code (especially in the case of LINQ). You can find a large number of examples here [18]. My favourites by far are the ones that reduce large for loops to single-line statements, which is a great example of reducing code size; a sketch of that follows.
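In C++11 terms (the price list is made up for illustration), that loop reduction looks like this:

    #include <algorithm>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> prices = {10, 25, 8, 42};

        // The classic loop version...
        int total = 0;
        for (std::size_t i = 0; i < prices.size(); ++i) total += prices[i];

        // ...collapses to single statements with standard algorithms and
        // lambdas, much like LINQ's Sum() and Count(p => p > 20) in C#.
        int sum = std::accumulate(prices.begin(), prices.end(), 0);
        auto pricey = std::count_if(prices.begin(), prices.end(),
                                    [](int p) { return p > 20; });
    }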

OOP patterns in functional languages:

Countless OOP patterns exist to help solve particular problems by providing a conceptual framework on which to design solutions. But some of these patterns are made redundant by functional programming. The Command pattern [15] is a perfect example. It provides an interface with a single method: Execute. This looks suspiciously like a mathematical function that the OOP paradigm has to encapsulate in a class. With functional programming, a programmer gains immediate exposure to this pattern without even realizing it. Similar cases can be made for patterns like Observer [14] and Strategy [13].
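A small C++ sketch of that observation (the class and command names are my own illustration): the first version is the pattern, the second is just a value of function type:

    #include <functional>
    #include <iostream>
    #include <vector>

    // OOP Command: a class whose only job is to wrap one method.
    struct Command {
        virtual void execute() = 0;
        virtual ~Command() = default;
    };
    struct SaveCommand : Command {
        void execute() override { std::cout << "saving\n"; }
    };

    int main() {
        SaveCommand save;
        save.execute();                       // the pattern version

        // Functional style: the "command" is simply a first-class function.
        std::vector<std::function<void()>> queue;
        queue.push_back([] { std::cout << "printing\n"; });
        for (auto& cmd : queue) cmd();        // run each queued command
    }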

In C#, it used to be the norm to create delegates (function pointers) for certain types of events, but nowadays lambda expressions or generic delegate types like Func do the same thing without the programmer having to write a separate, named method somewhere in the code just to give the delegate a body.

Additionally, here’s an interesting piece of information: Peter Norvig [17] found that 16 of the 23 patterns in the famous Design Patterns book are “invisible or simpler” when using Lisp. So are design patterns a sign of where a programming language is failing to help its programmers, rather than effective ways of working out a solution to a problem?

Summary:

Okay, admittedly this post has been at a very high level. A lot of specifics of actual functional programming languages and their benefits have been left out to concentrate on a different message. Programming large projects is a time-consuming process, with many developers and managers all interacting to ensure that the system is architected in the correct way, delivers all the functionality in the specification, and is maintainable for the future. Making a case to your boss or shareholders that you will abandon the working OOP paradigm in favour of something else has to make sense to everyone, whether they are technical or not. For tech-aware managers, buzzwords like parallelization, smaller code size and development time, system architecture, and proven patterns are what will keep them listening and eager for change. So to anyone reading this expecting a technical comparison, I apologize profusely and point you in the direction of these outstanding writings ([12], [8]).

References:

  1. http://xkcd.com/1312/
  2. http://programmers.stackexchange.com/questions/122437/how-to-organize-functional-programs
  3. http://stackoverflow.com/questions/2835801/why-hasnt-functional-programming-taken-over-yet
  4. http://www.techrepublic.com/blog/software-engineer/when-to-use-functional-programming-languages-and-techniques/
  5. http://www.ibm.com/developerworks/library/j-ft20/
  6. http://lorgonblog.wordpress.com/2008/09/22/how-does-functional-programming-affect-the-structure-of-your-code/
  7. http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf
  8. http://programmersatwork.wordpress.com/bill-gates-1986/
  9. http://functionaltalks.org/2013/08/26/john-carmack-thoughts-on-haskell/
  10. http://www.javaworld.com/article/2078610/java-concurrency/functional-programming–a-step-backward.html
  11. http://msdn.microsoft.com/en-us/library/dd293608.aspx
  12. http://www.defmacro.org/ramblings/fp.html
  13. http://en.wikipedia.org/wiki/Strategy_pattern
  14. http://en.wikipedia.org/wiki/Observer_pattern
  15. http://en.wikipedia.org/wiki/Command_pattern
  16. http://www.ibm.com/developerworks/library/j-ft10/
  17. http://www.norvig.com/design-patterns/
  18. http://igoro.com/archive/7-tricks-to-simplify-your-programs-with-linq/