It’s certainly not a recent realisation that software projects are often delivered late, over-budget, or not to specification, if at all. In an attempt to address this, the “Capability Maturity Model” was proposed, with the goal of aiding the management and development of long-term software projects in a disciplined and structured way, all centred on the concept of ‘maturity’.
We shall be discussing the Capability Maturity Model Integration (CMMI; a more recent variant of the CMM), why it is harmful to the software process, and who is to blame.
How do we define the idea of ‘maturity’?
Paulk, Curtis, Chrissis and Weber (1993) define ‘maturity’, in the context of software development processes, as,
“…the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective.”
They go on to note that, as an organisation gains in maturity, it “institutionalizes its software process via policies, standards, and organizational structures”. Perhaps it would be useful to contrast this with the authors’ definition of ‘immaturity’:
“In an immature organization, software processes are generally improvised by practitioners and their managers during a project.”
It certainly seems conceivable that projects which are “improvised” will very likely be mishandled with regard to the typical management triple of schedule, cost, and scope. This, however, raises the question of how one can go about attaining ‘maturity’, and what the CMMI does to facilitate this.
Maturity comes from small steps, not a giant leap.
Rather than take drastic or grand measures to improve themselves, then, Paulk et al. argue that organisations would be better off taking small incremental steps to maturity; that is to say, evolution is preferred to innovation – at least in the context of the software development process.
This is the fundamental idea behind the Capability Maturity Model; it provides a framework for organising such incremental steps, by placing them in five distinct levels. With each level comes a set of goals which facilitate the measuring and evaluating of process maturity, with the ultimate aim of driving process improvement. This rigid structure, however, is a major shortcoming, as we shall discuss later.
First let us briefly define each of the five CMMI maturity levels, and outline the fundamental requirements in order to be appraised at each. Note that, in order to progress up a level, an organisation must be appraised by a CMMI appraisal officer, who examines processes, documentation and working methods within the organisation.
Level 1: “initial” (or “chaotic”)
The first level, “initial”, is used as a basis of comparison for subsequent levels. An organisation regarded as being at level 1 on the model won’t have stable development and maintenance processes, and any success is usually attributable to certain individuals, rather than the organisation as a whole.
Level 2: “repeatable”
At the “repeatable” level, an organisation will have policies for managing a software project, with planning decisions based on the results of previous projects. In a nutshell, in order to be level 2 appraised, an organisation must have instituted policies which help project managers establish management processes.
Level 3: “defined”
Here an organisation will have a ‘defined’ (viz. standard) software process, which covers both software development and management processes. These must be integrated into the organisation as a whole, as appraisal depends on the organisation-wide understanding of activities, roles, and responsibilities in such processes.
Level 4: “managed”
At the “managed” level, an organisation sets quantitative goals for processes, performing consistent (and well-defined) measurements of project quality against these. By this stage, produced software is of a predictably high quality, and appraisal is offered on the basis of the organisation being able to effectively measure and assess its risk and capabilities.
Level 5: “optimising”
Level 5 organisations are said to be “optimising” – that is, they focus on continuous process performance improvement, through both innovative and incremental improvements. Ultimately, appraisal at this level depends on process improvements being planned, managed and treated in the same way as ordinary business activities.
Why use the CMMI?
The CMMI allows its users to focus their efforts on improvement while remaining aware of the larger scheme of things. By mandating strict documentation of processes, it essentially sets a standard for development, helping resolve disagreements, should they arise. And, through both self-evaluation and external appraisal, an organisation can examine the effectiveness of the processes it utilises (or should be utilising), establishing priorities for improvement.
Or, at least, that’s the theory.
The CMMI isn’t good for development.
The fundamental problem with the CMMI is that it’s a tool geared towards strategic management; that is, those setting the long-term, overall aims of the organisation. In nearly every sector, the further you progress into management, the less time you spend at the coalface.
Having spent time working at a large financial institution, with a ridiculously tall management structure, I’ve seen developers being hindered by processes implemented by unseen managers. The CMMI guidance notes state that the model should be supported by “the business needs and objectives of the organization”. The unfortunate reality was that the processes in place hindered development, but they reassured management that some work was being done, and provided them with a way to tick all the right boxes.
That said, perhaps I’m biased – our team worked under the agile methodology, whose manifesto reads “individuals and interactions over processes and tools”: an absolute contradiction to what the CMMI proposes. Of course, the CMMI institute disagree: the two are completely harmonious…
The agile manifesto also prefers “responding to change over following a plan”, and yet organisations of higher CMMI ‘maturity’ tend to breed a risk-averse culture. Indeed, it has been proposed that the CMMI provides organisations (read: management) with an ‘acceptable way of failing’.
“…[with an acceptable way of failing], I can take credit for success and fend off blame much more easily than if I adopt a novel approach.”
Essentially, the CMMI offers managers a ‘get-out clause’: if a project was unsuccessful, they can claim it’s because the organisation is only level ‘x’ appraised. If a project was successful, they can claim it’s because the organisation is level ‘x’ appraised. Either way, responsibility for failure usually boils down to management. Incidentally, the Standish Group identify management as the most important factor in the success (or failure) of software projects.
The problem doesn’t only exist in management.
Consider the (potential) client. He’s looking around for a software shop to produce his latest (underspecified and needlessly complex…) project, and has read about the CMMI and how fantastic these ‘level 5’ organisations are.
So he starts comparing suppliers based on their CMMI level. Organisations, in response to his enquiries, tell of their appraised CMMI level, and clients will factor this in (most likely along with cost and time estimates). According to the CMMI specification, a higher-appraised organisation should be able to provide more accurate estimations, although a realistic – read: longer – estimate may be less favourable than an overoptimistic one. The client chooses the most affordable, but highest-appraised organisation, and politely declines the others.
As a result, organisations shift their focus from genuinely trying to mature their software process towards trying to ‘up’ their CMMI level. Interest is placed on the process rather than the results, and if achieving the next level up becomes the goal, then the quality of the software will suffer. Thus we have to put some blame on the client for compounding the problem of CMMI-dependence in the industry.
Of course, the ironic thing is that a higher CMMI level is absolutely no indicator of the quality of software that will be produced. The appraisal process is based on project(s) of an organisation’s choosing, and so being awarded a level provides no assurance that practices are consistent across the entire organisation. Further, there’s no guarantee that, as a client, your project will be developed following those same processes.
What’s the solution?
Well… there probably isn’t one. It may be that the CMMI goes out of fashion and fades away like many other wishy-washy management toolkits, but the unfortunate reality is that, currently, it’s widely used for managing the development process in large organisations and isn’t likely to just disappear.
That said, perhaps one solution is for organisations to keep their appraised levels private: that is, make it an internal-only piece of information. That way, clients cannot use it when deciding which supplier to choose, removing the motivation for organisations to improve their level purely for level’s sake (and not that of actual maturity).
But then, what motivation is there for an organisation to keep something like this private (unless, maybe, they’ve been appraised at level 1…)? If the model’s rankings were portrayed as a ranking of undesirable traits, organisations might be less keen on publishing their appraisals.
This is the general idea behind the Capability Immaturity Model, as proposed by Finkelstein in 1992: here, levels range from 0 (“foolish”) to -2 (“lunatic”). Admittedly, the Capability Immaturity Model was published as something of a parodic effort, but there’s something to be said about its use of value inversion. A company appraised at level -1 is hardly going to want to publish the fact it’s regarded as “stupid”.
Unfortunately, the same flaw exists in such a model: companies would strive to achieve “level 0” (which, perhaps, would end up being rebranded as “level-headed”(!)), working their way up from, say, level -2. We’d see a freak case of deflation, where ‘0’ becomes the new ‘5’, and ‘-2’ the new ‘1’.
I do feel there’s some benefit in certain software development companies – particularly ‘young’ organisations – following some of the principles of the CMMI. Honest self-evaluation is never a bad thing, and perhaps the CMMI provides the right notions to get startups looking at themselves and their working practices more critically. But it shouldn’t be anything more than that: a basic point of reference, to get you thinking about how you want to operate.
Ultimately, the CMMI is a flawed attempt at managing the management process. Ultimately, it hinders development and increases the workload of developers with no tangible gains. Ultimately, it gives clients a false sense of security and, ultimately, we’d be better off without it.
NB. References available over the fold
 M. C. Paulk, B. Curtis, M. B. Chrissis, C. V. Weber, “Capability Maturity Model, Version 1.1”, Carnegie Mellon University, 1993.
 Software Engineering Institute, “Standard CMMI® Appraisal Method for Process Improvement (SCAMPI) A, Version 1.3: Method Definition Document”, Carnegie Mellon University, 2011.
 Software Engineering Institute, “The Capability Maturity Model: Guidelines for Improving the Software Process”, Carnegie Mellon University / Addison-Wesley, 1995.
 K. Beck, M. Beedle, A. van Bennekum, A. Cockburn, W. Cunningham, M. Fowler, J. Grenning, J. Highsmith, A. Hunt, R. Jeffries, J. Kern, B. Marick, R. C. Martin, S. Mellor, K. Schwaber, J. Sutherland, “The Agile Manifesto”, 2001.
 D. Clain, “CMMI announces trial period, limited-risk option for bundled payments”, The Advisory Board Company, 2012.
 D. J. Anderson, “CMMI Principles and Values”, Microsoft Developer Network, 2012.
 B. Barnett, “An Acceptable Way of Failing”, Cunningham & Cunningham, Inc., 2008.
 The Standish Group, “CHAOS Manifesto 2013: Think Big, Act Small”, The Standish Group International Inc., 2013.
 A. Finkelstein, “A Software Process Immaturity Model”, ACM SIGSOFT Software Engineering Notes, 1992.