“Leave security to the professionals”? We are the professionals.

This discussion is a response to Euan Maciver’s post, “Are We Secure In Our Software Life Cycle?”.

In his article, Maciver deals with the (un)suitability of approaches to ensuring software security. Gilliam, Wolfe, Sherif and Bishop (2003) present the “Software Security Assessment Instrument” (SSAI): a method of incorporating security checklists and assessment tools into the development lifecycle[1]. Maciver disagrees with Gilliam et al., arguing that their methodology is unsuitable, inefficient and incompatible with contemporary software development practices. Instead, he suggests that developers should focus not on security but on implementing functionality, and that external security experts should be brought in to carry out the security work. This post makes the counter-argument, exploring why security shouldn’t be left to external specialists and why Gilliam et al.’s proposal is, in actuality, a sensible one.

Security is the responsibility of the developer
Maciver’s argument centres on the idea that, as software developers, we shouldn’t be the ones responsible for ensuring security is built into the code we work on. His solution is to give developers the freedom to write code without worrying about security vulnerabilities, passing that responsibility on to subcontracted professionals who perform ‘audits’ of the code. Their role, he explains, is to ‘probe’ the software and to relay feedback and results to the development team.

The problem here, however, is that this will lead to developers writing code without security in mind. Indeed, Maciver speaks from experience, writing,

“I would be fearful that every line I wrote was incorrect as I hadn’t dealt with secure programming before, whereas, in reality, I was much more relaxed and able to program to the best of my ability”

This illustrates the idea that, as programmers distance themselves from the very concept of programming securely, their code will invariably become less and less secure. Ultimately, performing a security audit becomes a massive task, even for a specialist firm, and the feedback received from these specialists may necessitate a rewrite so extensive that it writes off the project.

Programming securely is a necessity, and yes, third-party auditing firms are a great resource for aiding it, but it’s not something which should be left to someone else. By doing so, you immediately make yourself vulnerable. Maciver cites the example of working for a large financial institution; given how notoriously fierce the competition between such firms is, one would need to be absolutely certain of the auditor’s intentions. Can we absolutely rule out the idea of industrial espionage? Backdoors might be left open, and their existence made known to the wrong people at the wrong time.

This, in itself, raises an interesting concern around responsibility, blame and the economics of depending on external security specialists: if a vulnerability is discovered whilst the system is live, who is to blame? The developer who wrote the bad code in the first place? Or the security ‘expert’ who missed it? (Of course, blame shouldn’t matter – fixing it should – but the argument (e.g. here, here) for publishing a “root cause” within a team is an interesting one.)

So the role of these security professionals needs to be well-defined: are they responsible only for identifying vulnerabilities, or for patching them as well? If they simply identify issues, as Maciver proposes, then I agree to some extent: this would be a potentially viable model for development, allowing programmers to learn from their mistakes through participation in code reviews. But those code reviews should be held with the security professionals themselves, which is a two-fold contradiction of Maciver’s model: it places the responsibility of caring about security on the developer, and it means the security professionals are going beyond their role of simply identifying issues.

Leaving the security audit until after implementation is somewhat reminiscent of the waterfall model, almost as if security were an additional stage bolted on at the end. This puts immense pressure on the security specialists: what if there isn’t enough time for a full and thorough audit before the software is due to be deployed? Is security then prioritised over the likes of acceptance testing? The best solution seems to be to consider security throughout development, thereby minimising this risk. But that would require the security considerations to be made by the developers themselves…

Security can be built into a developer’s workflow
And it should be the developers who are taking security into consideration. Gilliam et al.’s SSAI encourages the use of checklists (which do, of course, bring their own set of problems) – but even the act of writing a checklist gets teams thinking about how they organise their approach to security.
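As a sketch of what I mean, even a small checklist can be written down as something a team runs rather than merely reads. The items and helper functions below are invented purely for illustration; they are not taken from Gilliam et al.’s paper.

```python
# security_checklist.py – illustrative sketch only; the items are invented
# examples, not the checklist from Gilliam et al.'s paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChecklistItem:
    description: str
    check: Callable[[], bool]  # returns True when the item is satisfied

def https_enforced() -> bool:
    # Stub: in practice this might inspect the deployment configuration.
    return True

def dependencies_pinned() -> bool:
    # Stub: in practice this might parse the project's requirements file.
    return True

CHECKLIST = [
    ChecklistItem("All endpoints are served over HTTPS", https_enforced),
    ChecklistItem("Third-party dependencies are pinned to reviewed versions",
                  dependencies_pinned),
]

if __name__ == "__main__":
    for item in CHECKLIST:
        status = "PASS" if item.check() else "FAIL"
        print(f"[{status}] {item.description}")
```

The point is not these particular items, but that a checklist expressed this way can be run on every build rather than filed away – which leads neatly on to the question of tooling.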

Given the recent trend towards Agile methodologies, Maciver remarks that he “…can’t see how such a security checklist … could fit into agile development style…”. Just as tools such as Jenkins or Apache Continuum have become popular for continuous integration (CI), various security-focused CI tools exist; for example, the OWASP Zed Attack Proxy (ZAP) project. These tools typically provide automated scanners for finding vulnerabilities in applications developed with CI in mind. There are, in fact, ways of incorporating the kind of security instruments Gilliam et al. propose into a team following Agile principles, as well as into more traditional software processes.
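As a rough illustration of how such a scan might slot into a CI pipeline, the sketch below wraps ZAP’s baseline scan (shipped with the official ZAP Docker image as zap-baseline.py) in a small Python script that a build job could run against a deployed test instance. The staging URL, report filename and image tag are assumptions made for the example; they aren’t taken from Maciver’s post or Gilliam et al.’s paper.

```python
# ci_security_scan.py – illustrative sketch only; the target URL, report name
# and image tag below are assumptions for the example.
import os
import subprocess
import sys

TARGET_URL = "https://staging.example.com"  # hypothetical staging deployment
REPORT_FILE = "zap-baseline-report.html"    # hypothetical report artefact

def run_baseline_scan() -> int:
    """Run ZAP's baseline scan via Docker and return its exit code."""
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{os.getcwd()}:/zap/wrk:rw",  # mount the workspace so the report is kept
        "owasp/zap2docker-stable",           # official ZAP image (name varies by version)
        "zap-baseline.py",
        "-t", TARGET_URL,
        "-r", REPORT_FILE,
    ]
    # zap-baseline.py exits non-zero when it raises warnings or failures,
    # so the CI server can treat security findings like failing tests.
    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(run_baseline_scan())
```

Wired into a Jenkins job, a failing scan blocks the build in exactly the same way a failing unit test would, which is very much the spirit of Gilliam et al.’s instrument applied to an Agile workflow.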

Indeed, by incorporating security into the everyday workflow of a team, this approach gets developers keeping a mental checklist as they develop: ‘will this line of code be flagged as a vulnerability by our ZAP scan?’ For the inexperienced developer, this may take some time to learn – but it’s a necessary skill, and one which can’t be ignored for the sake of implementing functionality. Arguably, security should be considered right from the beginning of a project, during its initial design. This, unfortunately, cannot be facilitated by Maciver’s model, where the audit is performed only on an implemented product. By that point, it might be too late to correct fundamental security flaws, leading to hodgepodge patching right from the first deployment.
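To make that mental checklist concrete, here is a deliberately simple, hypothetical example of the sort of line a scanner or reviewer would flag, next to its secure counterpart; the table and column names are invented for illustration.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Flagged: building SQL by string interpolation permits injection
    # whenever `username` comes from user input.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Preferred: a parameterised query keeps user input out of the SQL text.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A developer who habitually asks the checklist question writes the second version without needing an auditor to point it out.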

Ultimately, Maciver’s idea, as presented, is not fundamentally different from the penetrate-and-patch method he criticises. If audits are left until the software receives “major changes” (and, incidentally, how do we define ‘major’? – even the most minor of changes can create huge vulnerabilities), then this is, in itself, a penetrate-and-patch approach. Rather than users being the ones to find the issues, the issues are simply caught by a different group of individuals (with, perhaps, unknown interests…).

Yes, security audits are beneficial – nay, necessary – in large-scale software development, but they should not be used as an excuse for programmers, designers and testers not to be thinking about security at every moment. Otherwise they’ll become complacent. And complacency breeds mediocrity.

And who wants to be mediocre?

References
[1] D. P. Gilliam, T. L. Wolfe, J. S. Sherif and M. Bishop, “Software Security Checklist for the Software Life Cycle”, Proceedings of the Twelfth International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2003.