Are We Secure In Our Software Life Cycle?

Software security is an oft-forgotten part of the software development life cycle, and as a result it is often left as an afterthought. Instead, a penetrate-and-patch approach is taken: security flaws are fixed by patching the live software as problems surface. This methodology is flawed, however, as it leads to a stream of patches to released software, closing security holes that could have been resolved earlier, and at far lower cost, before release. [1]


Gilliam et al. [2] propose a solution to this, arguing that security should be an integral part of development and integrated throughout the software life cycle.

They advocate using a Software Security Assessment Instrument (SSAI), incorporated formally into the software life cycle, in an attempt to improve the overall security of the software. The SSAI comprises a variety of components that catalogue, categorise and test the vulnerabilities and exposures in the software, picking out those that can be exploited.

Specifically, in this article [2], Gilliam discusses the Software Security Checklist (SSC), part of the SSAI, which helps organisations and system engineers integrate security into their software life cycle and assign their software a risk-level rating, something particularly useful for reusable code. In addition, the SSC provides a security checklist for external release of the software.

Gilliam claims that improving the security of software requires “integrating security in the software life cycle… [as] an end-to-end process…”, and this is something I do not fully agree with. Using an SSAI and SSC at every stage of the development and maintenance life cycle places too heavy a burden on the developer; based on certain beliefs and experiences of my own, I believe a less involved process should be used instead.


During the summer, I had an internship at a large financial institution, working on producing corporate applications for iPhones and iPads. Naturally, due to the nature of the content/information being handled, security was an important part of my team’s work.

However, the use of a security checklist as part of a larger SSAI, as suggested by Gilliam et al, was not the approach that was taken, at least, not completely.

Instead, developers were left to focus on building app functionality on top of in-house APIs, already developed, that were known to handle data securely. This saved far more time than repeating the security work within each separate app's (or program's) life cycle, as Gilliam suggests.

This approach is more efficient, as it frees developers to build functionality rather than work through a security checklist. The accuracy of checklist results is also doubtful: items may be ticked without thorough investigation when deadlines loom. This is even worse than having insecure software, because management believes it is secure!

Get ready to scrum

The rise of agile development practices has come about through the realisation that the waterfall development model is fundamentally broken. [3] This means the involved “end-to-end process” suggested by Gilliam would not be well suited to the current environment.

I experienced this first-hand during my job, as my team were developing in an agile-like manner. I cannot see how such a security checklist, as part of an SSAI, could fit into an agile development style, except perhaps through consistent daily or weekly use.

Used in that way, I believe, it becomes a hindrance to development, and developers will likely forget (or not bother) to carry it out, leaving it until the end of development, at which point it is little better than the current penetrate-and-patch approach.

Don’t worry, someone else will do it

Don’t get me wrong: I believe software should definitely be tested for security before release. However, I don’t think this should be solely the developer’s task; it should instead fall to an external party.

This belief is founded upon my time at the company, where, before the release of an app, an external party was brought in to test the code for security faults and vulnerabilities. They carried out an intensive week of testing which, to my mind, is a much more viable way of validating a program’s security. These teams were specialists in security vulnerabilities and, much like the SSAI, had specific tools (test harnesses) that probed the software.

Feedback from the tests would be relayed to the development team and changes would be implemented in the program. If the software proved far too insecure, the external party would be brought in again to re-run the tests after major changes had been made.

If this had been done in-house, tools realising the functionality of the SSAI would have had to be brought in and run by the developers of the software being tested. This approach would probably have proved more costly, in both money and hours, than bringing in an external company.

Don’t look at me, I’m the new guy!

Anyone who joins the team on a temporary basis (contractors, for example) would need to be brought up to speed on a large amount of security procedure if it were heavily embedded in the software life cycle. This takes away valuable time that could otherwise be spent making use of the programmer’s capabilities.

I felt during my job that I didn’t need to worry about the security of how I was coding, which I would have had to do if the SSC had been in place. I would have been fearful that every line I wrote was incorrect, having never dealt with secure programming before; in reality, I was much more relaxed and able to program to the best of my ability.

Smashing stacks

This year I have been taking the secure programming course, which aims to encourage us to build new software securely, using techniques and tools to find and avoid the flaws present in existing software. [4]

Normally this is achieved through common sense, i.e. by not reproducing code known to be insecure, rather than through the formal approach described by Gilliam et al.

I think this formal approach is perhaps an idealised view of what should be happening; in fact, for the majority of software life cycles, teams are more concerned with getting the bulk of the work done before focusing on how secure the product is.

But look, I’m secure!?

The security rating that could be provided by using an SSAI with an SSC could be very useful, as it would allow users of software to gauge how secure any data they input is and enable them to compare the security of similar products.

However, the consequences of this rating might not have the desired outcome. This is similar to a problem seen in the SAPM lectures [5], where companies would add features to their software purely to tick boxes, making it seem they had the better product. In reality, the features weren’t wanted by the users and existed only to make the software appear better than its rivals because it “ticked more boxes”.

Why does it matter?

But should we really care about software security in the software life cycle? I say yes, very much so.

As Gilliam et al. point out, several studies show that neglecting security in the software life cycle can have very negative impacts, both financially and in terms of image. [2]

They recommend integrating security into the software’s life cycle to counteract this, but I disagree with them about how much involvement it should have at each stage.

Their endorsement of this integration as an “end-to-end process” is one that is not followed even by an organisation heavily involved with secure programs, is ill suited to the agile development style that is rising in popularity, and is an outdated, idealised view of how security can be integrated.

From my experience, I’ve concluded that software security is best handled by external companies, who attack the software in order to identify weaknesses (ethical hacking). These can then be fixed with minimal effort (hopefully) and without the developers having to become experts in security tools or exploit hunting.

In essence, leave security to the professionals.


[1] Gary McGraw & John Viega, “Introduction to Software Security: Penetrate and Patch Is Bad”, November 2001, [Accessed on: 4th February 2014].

[2] David P. Gilliam, Thomas L. Wolfe, Josef S. Sherif & Matt Bishop, “Software Security Checklist for the Software Life Cycle”, 2003, [Accessed on: 4th February 2014].

[3] Mark Baker, “Waterfall’s Demise and Agile’s Rise”, May 2012, [Accessed on: 5th February 2014].

[4] David Aspinall, “Secure Programming Lecture 1: Introduction”, January 2014, [Accessed on: 6th February 2014].

[5] Allan Clark, “Introduction”, January 2014, [Accessed on: 6th February 2014].
