If you keep up with blog posts from other programmers, you’re probably thinking: “Oh no, not another post from some functional-programming fanboy”. And you’re probably right! Many people have been writing about functional programming recently. But this just highlights the growing opinion that there is something genuinely useful to be taken from this age-old paradigm that has never really taken off in industry (check out this xkcd comic).
Instead of arguing that everyone should be using functional programming because it’s so awesome, I’ve split this post into sections highlighting parts of large-project development where functional programming may be of use, and possibly some cases where it’s BETTER THAN EVERYTHING. I’m also not going to mention specific cases where bank A used functional language B, or compare individual functional languages, the strengths of lazy evaluation, or different type systems. I’ll stay at a high level (that of a manager, so to speak) and focus on the factors influencing large-scale development.
A small aside: I favour the pure functional language Haskell in this post. Firstly, it was one of my first true exposures to programming, and of course anyone who went to the University of Edinburgh as an undergraduate will forever remember Haskell as the language Philip Wadler was so incredibly enthusiastic about, complete with the memorable “stupid computer” voice he used to explain how the processor steps through a computation. Secondly, it’s a purely functional language, meaning you are never tempted to do things a different way, as you would be in a multi-paradigm language. You have to stick to the functional style, with no chance of falling back to an OOP way of doing things because “it’s just simpler to understand”.
A lot of programming is about finding suitable data structures to hold the information your program needs to process. In this ancient interview with Bill Gates, he mentions that “The most important part of writing a program is designing the data structures”. Functional programming takes a different approach: use only a few highly-optimized data structures, such as maps, sets and lists, and plug customizable higher-order functions into them as you see fit. Using these structures means their operations are well defined and widely known. This not only reduces code size (you are not writing bespoke methods for manipulating your own data structure), but also makes the code easier to read. Reading someone else’s code, you instantly recognize what a given function does, because everywhere you look you see the same data structures. This removes the need to understand a data structure in its entirety before diving into the code’s logic, which can only be a positive.
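To make that concrete, here’s a minimal Haskell sketch using the standard Data.Map structure (the inventory data is made up for illustration). Note there is no bespoke container type and no hand-written traversal code: the familiar filter/map vocabulary does all the work.

```haskell
import qualified Data.Map as Map

-- A hypothetical shop inventory, held in the standard Data.Map.
inventory :: Map.Map String Int
inventory = Map.fromList [("apples", 3), ("pears", 0), ("plums", 7)]

-- Keep only the items in stock, then double every count.
-- Anyone who knows Data.Map can read this at a glance.
restocked :: Map.Map String Int
restocked = Map.map (* 2) (Map.filter (> 0) inventory)

main :: IO ()
main = print (Map.toList restocked)
-- prints [("apples",6),("plums",14)]
```

Because the structure and its operations are standard, a reader needs no upfront tour of a custom class before following the logic.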
There’s also the notion that Haskell reduces your code size significantly (by as much as 90%). Whether that figure is exaggerated by a factor of two or by an order of magnitude, smaller code means less to read and therefore less to understand. Of course, the complexity of the code also affects the time taken to understand it, and I do agree that for us mere mortals, understanding things in a procedural manner (this happens, something changes, now test for equality) is simpler than understanding them in a mathematical way. But that’s another topic. What’s generally accepted is that Haskell can reduce your code size dramatically.
The ability of functional languages to make programs scale across multiple cores is probably the main reason Haskell and other functional languages have garnered so much attention in recent years. Functional languages work on immutable data structures, and with the shift from increasing a single core’s clock speed to multi-core processors (forced by the power wall), a language that can automatically expose code to parallelization is a huge bonus. Functions operate on data structures that cannot change state after creation, which removes the need for processors to coordinate on shared, mutating state. As mentioned in this blog post, Clojure is a great example: code can be parallelized without the programmer specifying anything or worrying about locks and the other “nasties” of parallel programming. Libraries have rewritten the map function so that it is automatically parallelized; every time a programmer uses map, they benefit from parallelization.
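The immutability that makes this safe is easy to demonstrate. In the sketch below (again using the standard Data.Map; the names are made up), “inserting” into a map produces a new value while the original is untouched, so a concurrent reader of the old value can never observe a half-finished update:

```haskell
import qualified Data.Map as Map

-- An illustrative score table.
scores, scores' :: Map.Map String Int
scores  = Map.fromList [("alice", 10)]

-- "Updating" returns a brand-new map; `scores` itself never changes.
scores' = Map.insert "bob" 5 scores

main :: IO ()
main = do
  print (Map.size scores)   -- the original still has 1 entry
  print (Map.size scores')  -- the new value has 2
```

No locks are needed to share `scores` between threads, because nothing can ever mutate it.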
Code reuse & legacy code:
John Carmack, ex-technical director at id Software, makes a great case with a real-world example of how a functional style of programming made code significantly easier to maintain. Parts of a build system relied heavily on state changes, with callbacks everywhere altering the system’s state, and this caused pain on an almost weekly basis; another piece of code, written in a purely functional way, caused no such trouble. This stateless paradigm may be harder to design for, but for maintainable and future-proof code, the simple idea that input is transformed to output regardless of system state, without caring what happens inside the transformation, reduces the number of places where errors and bugs can appear.
For languages like Java or C#, reuse happens at the framework level. As mentioned before, this requires a deep understanding of the data structures and of how the API is structured if you really want to know what’s going on underneath. Functional languages offer reuse at a more granular level: the fundamental data structures, plus higher-order functions to customize how you use them. If you pick up code implemented in a functional manner, you can be fairly confident the familiar core data structures are being used underneath.
Structuring code at an architecture level:
So the first question most people ask when they think of functional programming is: “how am I going to structure my code at a high level?” In reality, regardless of language or paradigm, the high level involves the same things: choosing classes/modules, deciding who calls what, splitting a big task into manageable chunks that can be represented in pseudo-code, and so on. Only at the low level of actually writing the classes or modules do you get a real chance to think in a different manner (OOP vs FP). Architectural patterns like MVC, client-server and multi-tier architectures exist in the same sense in functional languages, since these patterns are less concerned with how individual snippets of code work and more with how the components of a system should be structured. Of course, people are familiar with doing it the OOP way, but tradition should never be used to reason your way out of change (otherwise we’d still be burning witches).
Anyway, Haskell uses modules in which you choose which functions to export, much like private and public methods in classes (Java, C#, C++) or header files (C, C++). Moreover, clean-code principles from OOP, like the single responsibility of a method, are easily achievable by structuring modules and functions so that each focuses on one thing, rather like high cohesion in individual classes.
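A minimal sketch of such a module (a hypothetical Stack, not from any library): only the names in the export list are visible to importers, so the internal representation stays private, much like private members of an OOP class.

```haskell
-- Only empty, push and pop are exported; everything else is hidden.
module Stack (empty, push, pop) where

-- Internally a plain list, but importers never rely on that.
empty :: [a]
empty = []

push :: a -> [a] -> [a]
push = (:)

-- Popping an empty stack yields Nothing rather than an error.
pop :: [a] -> Maybe (a, [a])
pop []     = Nothing
pop (x:xs) = Just (x, xs)
```

A consumer writes `pop (push 3 empty)` and gets `Just (3, [])`, without ever needing (or being able) to poke at the representation.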
OOP languages shifting towards functional paradigms:
A lot of OOP languages have begun to incorporate functional programming concepts. Both Java 8 and C++11 include lambda expressions, and C# 3.0 introduced the LINQ component. This is a sign that even within the OOP paradigm, there are enough cases where functional programming can simplify code and make it more readable (especially in the case of LINQ). You can find a large number of examples here. My favourites by far are the ones that reduce large for loops to single-line statements, a great example of reducing code size.
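The Haskell equivalent of those collapsed loops: a task like “sum the squares of the even numbers”, which in imperative style needs a loop, an accumulator and a conditional, becomes a single line (the function name here is my own, chosen for illustration):

```haskell
-- The entire loop-accumulator-conditional dance in one line.
sumEvenSquares :: [Int] -> Int
sumEvenSquares xs = sum [x * x | x <- xs, even x]

main :: IO ()
main = print (sumEvenSquares [1 .. 10])
-- prints 220  (4 + 16 + 36 + 64 + 100)
```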
OOP patterns in functional languages:
Countless OOP patterns exist to help solve particular problems by providing a conceptual framework on which to design solutions. But some of these patterns become redundant in functional programming. The Command pattern is a perfect example: it provides an interface with a single method, Execute. This looks suspiciously like a mathematical function, which the OOP paradigm has to encapsulate in a class. With functional programming, a programmer gets this pattern for free without even realizing it. Similar cases can be made for patterns like Observer and Strategy.
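In Haskell the whole pattern collapses to “a command is a value you can run”. In this sketch (the names are illustrative, not from any library), a command is simply an `IO ()` action, and a queue of commands is just a list of them; no interface, no concrete command classes, no invoker class:

```haskell
-- The Command pattern's single-method interface, as a plain type alias.
type Command = IO ()

greet, farewell :: Command
greet    = putStrLn "hello"
farewell = putStrLn "goodbye"

-- The "invoker" is just sequencing a list of actions.
runAll :: [Command] -> IO ()
runAll = sequence_

main :: IO ()
main = runAll [greet, farewell]
-- prints "hello" then "goodbye"
```

The macro-recording and queuing uses of the Command pattern come for free, since commands are ordinary first-class values you can store in lists and pass around.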
In C#, it used to be the norm to declare a delegate type (essentially a type-safe function pointer) for each kind of event; nowadays lambda expressions and built-in delegate types like Func do the same job without the programmer having to write a separate named method somewhere in the code just to give the delegate a body.
Additionally, here’s an interesting piece of information: Peter Norvig found that 16 of the 23 patterns in the famous Design Patterns book are “invisible or simpler” when using Lisp. So are design patterns signs of where a programming language fails to help the programmer, rather than effective ways of solving problems?
Okay, admittedly this post has been at a very high level. A lot of specifics about actual functional programming languages and their benefits have been left out to concentrate on a different message. Programming large projects is a time-consuming process, with many developers and managers interacting to ensure the system is architected correctly, delivers all the functionality in the specification, and remains maintainable in the future. Making a case to your boss or shareholders that you will abandon the working OOP paradigm in favour of something else has to make sense to everyone, technical or not. To tech-aware managers, buzzwords like parallelization, smaller code size and development time, system architecture, and proven patterns are what will keep them listening and eager for change. So to anyone reading this expecting a technical comparison, I apologize profusely and point you in the direction of these outstanding writings (, ).