BSc Computer Science, huh? Who cares.

As the software industry continues its massive growth, we are witnessing an increasing number of graduate roles available, with positions such as “software engineer”, “application developer”, “<insert language here> developer” being a common sight on careers websites. Indeed, it’s the natural course of progression for most higher education graduates to head into a job, with around 91% obtaining employment within 6 months of graduation[1].

However, computer science graduates have among the lowest employment rates of any higher education subject area[2], which is entirely at odds with this expanding ‘IT’ industry. The reason? Employers don’t care about your first-class BSc Computer Science degree, nor your student BCS membership, nor even that Natural Language Processing class you did on Coursera last summer.

The unfortunate reality is that simply holding a higher education qualification does not qualify you for a job in large-scale development. In fact, structured education during employment – all those cutting-edge certifications you’ve achieved, or that professional accreditation hanging on your wall – isn’t what’s required either. What it really takes is experience. And you can’t study that.

This article will examine the inadequacy (and unsuitability) of ‘recognisable’ education in the software industry, with a brief discussion of the use(lessness) of professional accreditation.

“Required: BSc Computer Science (or equivalent) at 2:1 or above”
More and more employers are requiring a minimum of a 2:1 degree from their candidates. Why? Because it’s become the de facto standard[3]. A candidate who holds a degree is not necessarily any better for a position than one without; they’re simply educated to a higher level. And employers know this: they’ve learned the hard way, but continue to ask for it nonetheless.

The reason a 2:1 isn’t enough lies in the shortcomings of a computer science-related degree. Great, you understand Gröbner bases of ideals of polynomials – but do you know how to deal with a colleague who insists you’re wrong, even when your better judgement says otherwise? Oh, you can prove under what circumstances a binary relation is an observable bisimulation? Fantastic – but how would you go about developing a rapport with clients? Oh, you know Scala? Cool. We don’t use that here.

Whilst there’s something of a misalignment between the expectations of academics and employers, there’s also a problem of unclear degree titles. For example, I’m studying for a BSc in “Computer Science”, but my combination of course choices is entirely compatible with a “Software Engineering” degree. A vague notion exists that a Computer Science degree is more technical, mathematical, or even ‘harder’, but this needn’t be true.

And so with such uncertainties, employers are forced to look for other indicators of ability. I recently had a series of interviews with a large financial institution for a software developer position. They weren’t interested in hearing about my degree. They were interested in hearing about my work experience and, more importantly, my people experience. They were more enthusiastic about my part-time catering and retail jobs than they were about my studies. Technical competency was assessed, but I got the distinct impression that they weren’t worried about my (many) technical skills gaps: those will quickly be filled within a few weeks of working.

To be fair, some of the skills required of a competent software engineer are fundamentally unteachable – and it’s certainly not the responsibility of a lecturer to attempt to teach them. But having completed nearly four years of a degree, I must confess that, personally, I don’t feel well-equipped to enter the world of development. At least, not from a technical perspective. It’s all very well teaching the theory and advantages of various practical systems, but failing to provide a platform to try them out makes that knowledge pretty useless.

What compounds the problem, of course, is that the academics who are designing and delivering these courses often haven’t worked in industry for some time. They have grown in their own community, with its own set of (ahem, abstract) expectations. These expectations don’t exactly intersect with the needs of modern software houses, and instead of encouraging group projects and collaborative work between students, it’s a constant barrage of exercises for exercises’ sake.

And so employers are having to take their own measures to fill those skills gaps in their newly hired employees.

Education continues into employment
One such measure is the use of accreditation. At the individual level, accreditation is “a form of qualified status … awarded by a professional or regulatory body that confirms an individual as fit to practise”[4]. Before you can be awarded an accredited title, you must have gained a minimum number of years’ relevant experience and passed several exams. Unlike degrees, however, software houses seem to love accredited individuals: the ‘Chartered’ label is something of a status symbol, which they can use to lure new clients. In an earlier blog post, I wrote about how organisations often misuse such information to their advantage – this is a classic case.

Interestingly, however, software developers seem to love it too: it affords them the opportunity of fast career progression. Developers will often be proactive about seeking out accreditation for themselves, since it opens the door to promotions and, ergo, pay rises. The irony, however, is that the natural course of a software developer’s career in a large organisation is the move from writing code, through making long-term design and architectural decisions, to being plonked in the management seat – directing a team and its budget, and doing no implementation work at all. This makes it harder to remain ‘current’, especially in such a fast-paced industry. The use of accreditation, then, serves only to push a developer onto a particular career track, not to benefit the software projects they’re working on.

Accreditation in an immature discipline is fundamentally valueless
Let’s take a moment to look further into accreditation in the software industry. Some careers require registration and accreditation from a professional body – for example, an architect operating in the UK must be registered with the Architects Registration Board, and only a Chartered Accountant is allowed to audit the accounts of public companies[5]. But this isn’t the case for a software engineer. In fact, receiving accreditation from, say, the British Computer Society (“The Chartered Institute for IT”) doesn’t qualify you to do anything more than you could have done previously.

This prompts us to ask: why? Architects have to design safe and structurally sound buildings. Chartered Accountants report on the finances (read: honesty) of companies in which a huge number of people have a financial stake. Other registered professions – doctors, dentists, lawyers, for example – all have a duty to the public. And we might argue that software engineers should, too. A flight control system, the software backing a nuclear reactor, or your electronic summary care record – these systems are all ‘critical’, yet none (legally) requires a professionally accredited ‘engineer’ to develop it. This, I propose, comes down to the fact that software engineering is far too immature a discipline to support professional body accreditation.

Perhaps this problem is rooted in the idea that, despite unrelenting advancements in the software industry, it’s rare to find a universally-agreed “best approach” to a given problem. All we can do is depend upon the experience of long-term developers, and hope that the knowledge they’ve gained over the years is enough to get us through. No amount of PhDs or MInfs will save you now.

Do we need formal education at all?
Oh, yes. When else in your lifetime will you get 5-month-long summer holidays?

Ultimately, it boils down to this: university education doesn’t make you a better programmer; it doesn’t make you a better team player; and it doesn’t teach you the core skills you need to nail that job. What really matters is experience: the longer you work, the wiser you’ll become.

And no matter how hard you try, you can’t study ‘work experience’. You actually have to experience it for yourself.


“Leave security to the professionals”? We are the professionals.

This discussion is a response to Euan Maciver’s post, “Are We Secure In Our Software Life Cycle?”.

In his article, Maciver deals with the (un)suitability of approaches to ensuring software security. Gilliam, Wolfe, Sherif and Bishop (2003) present the “Software Security Assessment Instrument”: a method of incorporating security checklists and assessment tools into the development lifecycle[1]. Maciver disagrees with Gilliam et al., examining why their methodology is unsuitable, incompatible and inefficient, given contemporary software development practices. He suggests instead that developers should focus not on security but on implementing functionality, and that external security experts should be brought in to carry out this work. A counter-argument will be made in this post, exploring why security shouldn’t be left to external specialists and why Gilliam et al.’s proposal is, in actuality, a sensible one.

Security is the responsibility of the developer
Maciver’s argument centres on the idea that, as software developers, we shouldn’t be the ones responsible for ensuring security is built into the code we work on. His solution is to give developers the freedom to write code without worrying about security vulnerabilities, passing that responsibility on to subcontracted professionals who will perform ‘audits’ of the code. Their role, he explains, is to ‘probe’ the software and to relay feedback and results to the development team.

The problem here, however, is that this will lead to developers writing code without security in mind. Indeed, Maciver speaks from experience, writing,

“I would be fearful that every line I wrote was incorrect as I hadn’t dealt with secure programming before, whereas, in reality, I was much more relaxed and able to program to the best of my ability”

This illustrates the idea that, as programmers distance themselves from the very concept of programming securely, their code will invariably become less and less secure. Ultimately, it’ll become a massive task to perform a security audit – even for a specialist firm – and the feedback received from these specialists may necessitate such an overwhelming rewrite as to write off the project entirely.

Programming securely is a necessity, and yes, third-party auditing firms are a great resource for aiding this, but it’s not something which should be left to someone else. By doing so, you immediately make yourself vulnerable. Maciver cites the example of working for a large financial institution; given that the competition between such firms is notoriously rife, one would need to be absolutely certain of their auditor’s intentions. Can we absolutely rule out the idea of industrial espionage? Backdoors might be left open, and their existence made known to the wrong people at the wrong time.

This, in itself, raises an interesting concern around responsibility, blame, and the economics of depending on external security specialists: if a vulnerability is discovered whilst the system is live, who is to blame? The developer who wrote the bad code in the first place? Or the security ‘expert’ who missed it? (Of course, blame shouldn’t matter – fixing it should – but the argument for publishing “root cause” analyses within a team is an interesting one.)

So the role of these security professionals needs to be well-defined: are they responsible for just identifying vulnerabilities, or are they the ones to patch as well? If they simply identify issues, as Maciver proposes, then I agree to some extent: this would be a potentially viable model for development, allowing programmers to learn from their mistakes through participation in code reviews. But these code reviews should be with the security professionals themselves, which is a two-fold contradiction of Maciver’s model: it places the responsibility of caring about security on the developer, and it means that the security professionals are going beyond their role of simply identifying issues.

Leaving the security audit to be performed after implementation is somewhat reminiscent of the waterfall model itself, almost as if it were an additional stage after implementation. This puts immense pressure on the security specialists: what if there isn’t enough time for a full and thorough audit to be performed before the software is due to be deployed? Or is security prioritised over the likes of acceptance testing? It seems the best solution is to consider security throughout development, ergo minimising this risk. But that would necessitate that security considerations are made by the developers themselves…

Security can be built into a developer’s workflow
And it should be the developers who are taking security into consideration. Gilliam et al.’s SSAI encourages the use of checklists (which do, of course, bring their own set of problems) – but even the act of writing a checklist gets teams thinking about how they can organise their approach to security.

Given the recent trend towards Agile methodologies, Maciver remarks that he “…can’t see how such a security checklist … could fit into agile development style…”. Just as tools such as Jenkins or Apache Continuum have become popular in continuous integration (CI), various security-focused CI tools exist; for example, the OWASP Zed Attack Proxy (ZAP) project. These programs typically provide automated scanners for finding vulnerabilities in applications developed with CI in mind. There are, in fact, ways of incorporating the security instruments Gilliam et al. propose into a team that follows Agile principles, as well as into those who prefer more traditional software processes.
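By way of illustration, here is a minimal sketch of how a ZAP scan might be bolted onto a CI build – not Gilliam et al.’s instrument itself, and certainly not the only way to do it. It assumes a ZAP daemon is already listening locally on port 8080 and that the python-owasp-zap-v2.4 client library is installed; the target URL, API key and ‘fail on high-risk alerts’ rule are placeholders chosen purely for the example.

```python
# Minimal sketch: run a ZAP spider + active scan from a CI job and fail the
# build if high-risk alerts are found. Assumes a local ZAP daemon on :8080
# and the python-owasp-zap-v2.4 client; target/API key are placeholders.
import sys
import time

from zapv2 import ZAPv2

TARGET = "http://staging.example.com"   # hypothetical staging deployment
API_KEY = "changeme"                    # hypothetical ZAP API key

zap = ZAPv2(apikey=API_KEY,
            proxies={"http": "http://localhost:8080",
                     "https": "http://localhost:8080"})

# Spider the target so ZAP knows which pages exist, then wait for completion.
scan_id = zap.spider.scan(TARGET)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Actively scan everything the spider found.
scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Report and fail the build if any high-risk alerts were raised.
high_risk = [a for a in zap.core.alerts(baseurl=TARGET) if a.get("risk") == "High"]
for alert in high_risk:
    print(f"{alert['alert']} at {alert['url']}")
sys.exit(1 if high_risk else 0)
```

The exit code is the important part: a Jenkins-style job can use it to mark the build as failed, so security feedback arrives with every integration rather than in a one-off audit at the end.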

Indeed, by incorporating security into the everyday workflow of a team, this approach gets developers keeping a mental checklist as they develop: ‘will this line of code flag up as a vulnerability on our ZAP system?’ For the inexperienced developer, this may take some time to learn – but it’s a necessary skill, and one which can’t be ignored for the sake of implementing functionality. Arguably, security should be considered right from the beginning of a project, during its initial design. This, unfortunately, could not be facilitated under Maciver’s model, with the audit being performed only on an implemented product. By that point, it might be too late to correct fundamental security flaws, leading to hodgepodge patching right from the first deployment.

Ultimately, Maciver’s idea, as presented, is not fundamentally different from the penetrate-and-patch method he criticises. If audits are left until the software receives “major changes” (and, incidentally, how do we define ‘major’? – even the most minor of changes could create huge vulnerabilities), then this is, in itself, a penetrate-and-patch approach. Rather than the users being the ones to find the issues, the issues are simply caught by a different group of individuals (with, perhaps, unknown interests…).

Yes, security audits are beneficial – nay, necessary – in large-scale software development, but they should not be used as an excuse for programmers, designers and testers not to be thinking about security at every moment. Otherwise they’ll become complacent. And complacency breeds mediocrity.

And who wants to be mediocre?

The Capability Maturity Model: not so capable…

It’s certainly not a recent realisation that software projects are often delivered late, over-budget, or not to specification, if at all. In an attempt to address this, the “Capability Maturity Model” was proposed, with the goal of aiding management and development of long-term software projects in a disciplined and structured way; all focused around the concept of ‘maturity’.

We shall be discussing the Capability Maturity Model Integration (CMMI; a more recent variant of the CMM), why it is harmful to the software process, and who is to blame.

How do we define the idea of ‘maturity’?
Paulk, Curtis, Chrissis and Weber (1993) define[1] ‘maturity’, in the context of software development processes, as,

“…the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective.”

They go on to note that, as an organisation gains in maturity, it “institutionalizes its software process via policies, standards, and organizational structures”. Perhaps it would be useful to contrast this with the authors’ definition[1] of ‘immaturity’:

“In an immature organization, software processes are generally improvised by practitioners and their managers during a project.”

It certainly seems conceivable that projects which are “improvised” will very likely be mishandled with regard to the typical management triple of schedule, cost, and scope. This, however, raises the question of how one can go about attaining ‘maturity’, and what the CMMI does to facilitate this.

Maturity comes from small steps, not a giant leap.
Rather than take drastic or grand measures to improve themselves, then, Paulk et al. argue that organisations would be better off taking small incremental steps to maturity; that is to say, evolution is preferred to innovation – at least in the context of the software development process.

This is the fundamental idea behind the Capability Maturity Model: it provides a framework for organising such incremental steps, by placing them in five distinct levels. With each level comes a set of goals which facilitate the measuring and evaluating of process maturity, ultimately with the aim of continual process improvement. This rigid structure, however, is a major shortcoming, as we shall discuss later.

First, let us briefly define each of the five CMMI maturity levels and outline the fundamental requirements for being appraised at each. Note that, in order to progress up a level, an organisation must be appraised by a CMMI appraisal officer, who examines its processes, documentation and working methods.

Level 1: “initial” (or “chaotic”)
The first level, “initial”, is used as a basis of comparison for subsequent levels. An organisation regarded as being at level 1 on the model wouldn’t have stable development and maintenance processes, and any success is usually attributable to certain individuals, rather than the organisation as a whole.

Level 2: “repeatable”
At the “repeatable” level, an organisation will have policies for managing a software project, with planning decisions based on the results of previous projects. In a nutshell, in order to be appraised at level 2, an organisation must have instituted policies that help project managers establish management processes.

Level 3: “defined”
Here an organisation will have a ‘defined’ (viz. standard) software process, which covers both software development and management processes. These must be integrated into the organisation as a whole, as appraisal depends on the organisation-wide understanding of activities, roles, and responsibilities in such processes.

Level 4: “managed”
At the “managed” level, an organisation sets quantitative goals for its processes, performing consistent (and well-defined) measurements of project quality against them. By this stage, the software produced is of a predictably high quality, and appraisal is offered on the basis of the organisation being able to effectively measure and assess its risks and capabilities.

Level 5: “optimising”
Level 5 organisations are said to be “optimising” – that is, they focus on continuous process performance improvement, through both innovative and incremental improvements. Ultimately, appraisal at this level rests on the fact that process improvements are planned, managed and treated in the same way as ordinary business activities.

Why use the CMMI?
The CMMI allows its users to focus their efforts on improvement while remaining aware of the larger scheme of things. By mandating strict documentation of processes, it essentially sets a standard for development, helping to resolve disagreements, should they arise. And, through both self-evaluation and external appraisal, an organisation can examine the effectiveness of the processes it utilises (or should be utilising), establishing priorities for improvement.

Or, at least, that’s the theory.

The CMMI isn’t good for development.
The fundamental problem with the CMMI is that it’s a tool geared towards strategic management; that is, those setting the long-term, overall aims of the organisation. In nearly every sector, the further you progress into management, the less time you spend at the coalface.

Having spent time working at a large financial institution, with a ridiculously tall management structure, I’ve seen developers being hindered by processes implemented by unseen managers. The CMMI guidance notes state that the model should be supported by “the business needs and objectives of the organization”[3]. The unfortunate reality was that the processes in place hindered development, but it reassured management that some work was being done, and provided them with a way to ensure they could tick all the right boxes.

That said, perhaps I’m biased – our team worked under an agile methodology, whose manifesto reads “individuals and interactions over processes and tools”[4]: an absolute contradiction of what the CMMI proposes. Of course, the CMMI Institute disagrees: the two, it claims, are completely harmonious.

The agile manifesto also prefers “responding to change over following a plan”[4], and yet organisations of higher CMMI ‘maturity’ tend to breed a risk-averse culture[5][6]. Indeed, it has been proposed that the CMMI provides organisations (read: management) with an ‘acceptable way of failing’[7].

“…[with an acceptable way of failing], I can take credit for success and fend off blame much more easily than if I adopt a novel approach.”

Essentially the CMMI offers managers a ‘get-out clause’: if a project was unsuccessful, they can claim it’s because the organisation is only level ‘x’ appraised; if a project was successful, they can claim it’s because the organisation is level ‘x’ appraised. Either way, the failure usually boils down to management. Incidentally, the Standish Group identify management as the most important factor in the success (or failure) of software projects[8].

The problem doesn’t only exist in management.
Consider the (potential) client. He’s looking around for a software shop to produce his latest (underspecified and needlessly complex…) project, and has read about the CMMI and how fantastic these ‘level 5’ organisations are.

So he starts comparing suppliers based on their CMMI level. Organisations, in response to his enquiries, tell of their appraised CMMI level, and clients will factor this in (most likely along with cost and time estimates). According to the CMMI specification, a higher-appraised organisation should be able to provide more accurate estimations, although a realistic – read: longer – estimate may be less favourable than an overoptimistic one. The client chooses the most affordable, but highest-appraised organisation, and politely declines the others.

As a result, organisations shift their focus from genuinely trying to mature their software process towards trying to ‘up’ their CMMI level. Interest is placed on the process, rather than the results and, if achieving the next level up becomes the goal, then the quality of software will suffer. Thus we have to put some blame on the client for compounding the problem of CMMI-dependence in the industry.

Of course, the ironic thing is that a higher CMMI level is absolutely no indicator of the quality of software that will be produced. The appraisal process is based on project(s) of an organisation’s choosing[2], and so being awarded a level provides no assurance that practices are consistent across the entire organisation. Further, there’s no guarantee that, as a client, your project will be developed following those same processes.

What’s the solution?
Well… there probably isn’t one. It may be that the CMMI goes out of fashion and fades away like many other wishy-washy management toolkits, but the unfortunate reality is that, currently, it’s widely used for managing the development process in large organisations and isn’t likely to just disappear.

That said, perhaps one solution is for organisations to keep their appraised levels private: that is, make it an internal-only piece of information. That way, clients cannot use it when deciding which supplier to choose, removing the motivation for organisations to improve their level purely for level’s sake (and not that of actual maturity).

But then, what motivation is there for an organisation to keep something like this private (unless, maybe, they’ve been appraised at level 1…)? If the model’s rankings were portrayed as a ranking of undesirable traits, organisations might be less keen on publishing their appraisals.

This is the general idea behind the Capability Immaturity Model, as proposed by Finkelstein in 2006: here, levels range from 0 (“foolish”) to -2 (“lunatic”)[9]. Admittedly, the Capability Immaturity Model was published as something of a parodic effort, but there’s something to be said about its use of value inversion. A company appraised at level -1 is hardly going to want to publish the fact it’s regarded as “stupid”.

Unfortunately the same flaw exists in such a model: companies would strive to achieve “level 0” (which, perhaps, would end up being rebranded as “level-headed”(!)), working their way up from, say, level -4. We’d see a freak case of deflation, where ‘0’ becomes the new ‘5’, and ‘-4’ the new ‘1’.

That said…
I do feel there’s some benefit in certain software development companies – particularly ‘young’ organisations – following some of the principles of the CMMI. Honest self-evaluation is never a bad thing, and perhaps the CMMI provides the right notions to get startups looking at themselves and their working practices more critically. But it shouldn’t be anything more than that: a basic point of reference, to get you thinking about how you want to operate.

Ultimately…
Ultimately, the CMMI is a flawed attempt at managing the management process. Ultimately, it hinders development and increases the workload of developers with no tangible gains. Ultimately, it gives clients a false sense of security and, ultimately, we’d be better off without it.
