Formal verification in large-scale software: Worth pondering?

Introduction

Testing is an integral part of the software life-cycle, and one in which most professional companies invest heavily. There is an endless debate over which conventional testing methodology is better, yet all of them share one common characteristic: they are good at discovering system misbehaviors but incapable of ensuring correctness. Formal verification gives the programmer the tools to ensure (at various levels of rigor) that his program meets the imposed specifications. Of course, this ability comes at a cost: formal methods can be either very computationally expensive or highly non-automated, and are thus not an option for the majority of software companies, which value productivity over (the highest) quality. In this article we argue that formal verification still has a place in the industrial development cycle of large-scale software despite its computational bottlenecks. As we are going to show, formal verification is essential for those parts of the software where it is hard for the programmer to predict all possible cases and where the danger of unexpected failure is therefore evident.

Formal verification and its limitations

Formal verification methods are a set of mathematically based techniques, drawing mainly on logical calculi and automata theory, that provide a framework in which the specifications of a (software or hardware) system can be expressed in a notation with precisely defined semantics. The validity of these specifications with respect to the system can then be established by inference. Formal verification methods can be divided into two main approaches [1]:

Proof-based verification: In a proof-based approach, the system is represented as a set of logical formulas S and the specification as another formula φ (usually in the same logic). Verification is the process of constructing a proof that S ⊢ φ (i.e. that φ is provable from S). This approach can deal with infinite state spaces but, due to undecidability, requires human guidance for non-trivial proofs. A toy example of discharging such a proof obligation automatically is sketched after these two approaches.

Model-based verification: Here, the system is represented by a model M (a transition system) and the specification is verified against that model with the use of a temporal logic. Temporal logic is a special kind of logic in which the truth value of a formula is not fixed but changes as the system moves from one state to another. For a given specification φ expressed in temporal logic, the model is checked for semantic entailment of φ (M ⊨ φ). The verification process is fully automated because we are dealing with finite-state models; a toy safety check in this style is sketched below.
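To make the proof-based approach a little more concrete, the sketch below (our own toy illustration, not an example from [1]) discharges a single, trivially simple proof obligation — "for every integer x, if x > 0 then x + 1 > 1" — by asking the Z3 solver, through its Python bindings, to show that the negation of the obligation is unsatisfiable. Industrial proof obligations are vastly larger and often need exactly the human guidance discussed later.

    # A toy proof obligation, discharged automatically with the Z3 SMT solver:
    # "for every integer x, if x > 0 then x + 1 > 1"
    # (our own illustration; requires the z3-solver package)
    from z3 import Int, Implies, Not, Solver, unsat

    x = Int("x")
    obligation = Implies(x > 0, x + 1 > 1)

    s = Solver()
    s.add(Not(obligation))      # assert the negation of the obligation
    result = s.check()
    if result == unsat:         # negation unsatisfiable => obligation holds for all x
        print("proof obligation discharged")
    else:
        print("not proved automatically:", result)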
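In the same spirit, the essence of model-based verification for a simple safety property ("a bad state is never reached", G ¬bad in temporal-logic notation) can be illustrated by exhaustively exploring a finite transition system. The tiny model below is entirely invented for illustration; real model checkers rely on far more sophisticated state-space representations.

    # Explicit-state check of the safety property G(not bad) on a toy transition system.
    # The model (states, transitions, bad states) is invented purely for illustration.
    from collections import deque

    transitions = {                 # state -> successor states
        "idle":     ["trying"],
        "trying":   ["critical", "idle"],
        "critical": ["idle"],
    }
    initial = {"idle"}
    bad = {"crashed"}               # states that must never be reachable

    def check_safety(transitions, initial, bad):
        seen, queue = set(initial), deque(initial)
        while queue:
            state = queue.popleft()
            if state in bad:
                return False        # property violated: a bad state is reachable
            for succ in transitions.get(state, []):
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
        return True                 # exhaustive search found no bad state

    print("M |= G(not bad):", check_safety(transitions, initial, bad))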

Although recent studies [2] confirm the usefulness of formal methods in industry, neither approach is widely adopted, because of scalability problems. Even for small programs, proof-based methods require a large number of proof obligations to be discharged, many of which are too complicated for automated theorem provers to handle. For these proofs human guidance is essential (at least for breaking the proof into simpler sub-goals), which is costly for the company. Similarly, the size of the transition system (the model M) grows exponentially with the number of variables in the system, so for very large systems the number of states to be checked becomes intractable. This is also known as the "state explosion problem" [3]. For these reasons, the full-scale application of formal methods to industrial projects is infeasible.
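To put a rough number on this growth (our own back-of-the-envelope figures, not taken from [3]): a model whose state is described by n independent boolean variables has up to 2^n states, so 20 variables already give about 10^6 states, 50 variables about 10^15, and 100 variables about 10^30; interleaving several concurrent processes multiplies the count further.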

Apply it where it matters

Apart from critical systems (e.g. medical software/hardware, autopilot software or nuclear reactor management software), where full-scale formal verification is the only option, it is our opinion that formal verification can be beneficial when applied to non-critical systems, despite the extra cost. The idea is simple: formal verification should be applied only to the most error-prone parts of the software, leaving the rest for conventional testing techniques. Of course, the key is to correctly identify which parts of the project are the most error-prone. To answer that question we first have to think about the people who actually write the software. No matter how experienced a programmer is, there is an upper limit to how much information his mind can process simultaneously. If this limit is exceeded, the probability of introducing a bug increases. For example, a programmer is unlikely to make an error when writing output commands, but the opposite is true when synchronizing threads. Given all this, we think that software complexity metrics [4] (such as cyclomatic/Halstead complexity, coupling, cohesion, etc.) should be used to identify which software segments require deeper analysis, and formal methods should be applied only to those. There, formal methods can be invaluable, because they are able to reveal system inconsistencies and ambiguities that would otherwise be very difficult (or even impossible) to discover (e.g. a counter-example that contradicts a programmer's assumption about the system). Finally, the insight gained from applying formal methods may lead to some refactoring which, in turn, could increase the system's efficiency or extensibility.
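As a concrete (and deliberately crude) illustration of how such metrics can flag candidates for deeper analysis, the sketch below approximates McCabe's cyclomatic complexity of Python functions by counting branching constructs in their abstract syntax trees. The threshold of 10 is the commonly quoted rule of thumb; the file name and the flagging policy are our own invention.

    # Rough cyclomatic-complexity estimate: 1 + number of branching constructs.
    # Functions scoring above a threshold are flagged as candidates for deeper analysis.
    import ast

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp, ast.comprehension)

    def cyclomatic_complexity(func_node: ast.AST) -> int:
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func_node))

    def flag_complex_functions(source: str, threshold: int = 10):
        """Yield (name, score) for functions whose estimated complexity exceeds the threshold."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                score = cyclomatic_complexity(node)
                if score > threshold:
                    yield node.name, score

    if __name__ == "__main__":
        with open("scheduler.py") as f:       # hypothetical module under review
            for name, score in flag_complex_functions(f.read()):
                print(f"{name}: complexity {score} -> candidate for formal analysis")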

Conclusion

In this article we presented the two main approaches to formal verification, their scalability problems and the way industry could include them in the software life-cycle. Complexity in code is the major factor behind difficult-to-detect bugs, so the most powerful yet expensive verification techniques should be applied only to code segments where complexity metrics indicate potential danger.

References

[1] M. Huth and M. Ryan, Logic in Computer Science: Modelling and Reasoning about Systems. Cambridge University Press, 2004.
[2] J. Woodcock, P. G. Larsen, J. Bicarregui, and J. Fitzgerald, "Formal methods: Practice and experience," ACM Computing Surveys, vol. 41, no. 4, p. 19, 2009.
[3] A. Valmari, "The state explosion problem," in Lectures on Petri Nets I: Basic Models. Springer, 1998, pp. 429-528.
[4] A. Clark, "Measuring software," University of Edinburgh, 2014.

Response article: Being large and agile

Introduction

This post is a response to the article entitled "Agile Methodologies in Large-Scale Projects: A Recipe for Disaster" posted by the user s1044415 on 14/02/2014 [1]. Starting from the agile manifesto [2], the author questions the feasibility of agile methods in large-scale software. More specifically, for each of the so-called "agile values" presented in the manifesto, the author provides arguments that make agile methodologies look problematic when applied to real industry scenarios and that support traditional heavyweight development practices. Here, we argue that agile software development is more than a bunch of guidelines and that it has its place (to a different extent) in every large-scale project.

The client wants software, not documents

The first step towards becoming agile is to realize that, in the end, the customer wants an efficient piece of software which does what it is supposed to do. No one (except perhaps archivists) finds diagrams and extensive documentation particularly useful as long as the software works correctly and efficiently. There is nothing unprofessional about not providing extensive documentation. On the contrary, it is the definition of professionalism to provide the client with a prototype as soon as possible and receive early feedback which could have a strong impact on future development decisions (this is one of the core agile recommendations). The fact that many old-school businessmen feel secure when surrounded by diagrams is a result of habit and does not by itself constitute an argument against being agile.

At this point, it is important to emphasize that the power of agile practices derives from the fact that, unlike heavyweight processes, they impose no strict rules to be followed. A direct consequence of this is that maintenance documentation is not prohibited but encouraged. Maintenance documentation records the architectural patterns after the completion of a module and provides information about further expansion capabilities, but it does not function as a guide during construction. It is also much smaller in size (less effort for the developers) because it contains one final version rather than multiple ones. Finally, the author provides an example in which extensive documentation is exploited as proof in the hands of a government in the case of a project failure. Again, revised UML diagrams are unlikely to persuade other parties or individuals that their money was not wasted. On the other hand, clearly defined project aims combined with honest reasons for the failure may have better luck. None of the above is discouraged by the agile standard.

Do processes and tools really work?

The author claims that the reason processes and tools are used in large-scale software is that they work. Let's pause for a minute and think about what we mean when we say that something "works". If "works" means that, in the end, the client paid for a piece of software that became useless after a short period, then yes, these processes and tools actually work. But if the objective is to provide the client with a piece of software that he can depend on in the long term, processes and tools alone are not enough. The reason is rather simple: every project is unique in terms of its aims and (possibly) its combination of technologies. These peculiarities demand innovative approaches and cannot be captured in recipes. Software engineering is not cooking. At the same time, reusing an existing block of code is not necessarily non-innovative. For various reasons the development team may decide that some repository code can be utilized in order to accelerate development without sacrificing code quality. The elegant combination of existing code still requires innovative minds, since it is by no means a trivial task. Moreover, the author claims that innovation and originality bring chaos. In our opinion, chaos is brought by practices which promote isolation and competition among team members (a common characteristic of the traditional methodologies). The agile recommendations consider communication a crucial factor for success. Innovation comes through continuous collaboration among all team members, which eliminates any danger of chaos.

Client as a co-developer

In his article, the author assumes that the client knows exactly what he wants from the very beginning. This could be the case if the client is himself a software engineer, but in most situations it is not. Changing requirements are among the most common causes of project failure. By agreeing on a contract from the very beginning, the developers commit themselves to a target that may be infeasible or not what the client actually has in mind. The human mind absorbs far more information from something it can observe than from a description. The agile standard exploits this property by recommending that a prototype be constructed as soon as possible in order to receive initial feedback from the client, followed by a continuous flow of releases through which more feedback can be acquired. In this way the developers can identify possible misconceptions in time and change their plans accordingly. Continuous interaction with the client does not expose the developers to any danger of exploitation. This is because there is a distinct line, which an experienced programmer is in a position to identify, between the core functionality of a project and additional features. After all, if the client is willing to wait and pay more for some added functionality, then so be it! Finally, the author argues that a clearly defined plan increases the chances of a good estimate of time and budget. We strongly disagree: an early prototype offers developers invaluable insight into the project's peculiarities (because it exposes them directly to the implementation rather than to discussions over diagrams), and thus a much more accurate estimate becomes possible.

The one who adapts survives

Given the current fast-changing business environment, it is our opinion that following a strict plan from the very beginning is simply no longer an option in the software industry. Things change rapidly, and the client's requirements change with them. If a software company wants to be successful (by providing its clients with competitive software), it should welcome late changes in requirements. This is the main reason that agile methods are becoming more and more popular. We agree with the author that information flow in a large-scale project is a very important factor for success and requires rigorous procedures if confusion is to be avoided. But this is exactly where the "loose" nature of the agile recommendations shines. No one says that, in order to be agile, one must follow all the principles by the book. These principles are intentionally vague because their goal is to provide a high-level way of thinking, not a look-up table for developers. Project managers (yes, there is no rule which forbids managers!) can use hybrid approaches which, on the one hand, are conventional in terms of communication among separate teams but, on the other hand, exploit all the benefits the agile standard has to offer in every other aspect of the software development process. The manager carries the weight of defining the threshold where agile stops and conventional procedures start.

Conclusion

In this article we tried to give justified counter-arguments in response to [1] regarding the application of agile methods in industry. The agile recommendations have much to offer to large-scale project development, and any limitations can be overcome with hybrid approaches. The intentional vagueness of the agile principles as presented in [2] is the key that makes them applicable to all software projects.

References

[1] s1044415, "Agile methodologies in large-scale projects: A recipe for disaster," blog post, 14/02/2014. https://blog.inf.ed.ac.uk/sapm/2014/02/14/agile-methodologies-in-large-scale-projects-a-recipe-for-disaster/
[2] The Agile Manifesto. http://agilemanifesto.org/
[3] B. Boehm, "Get ready for agile methods, with care," IEEE Computer, vol. 35, no. 1, pp. 64-69, 2002.
[4] A. Clark, "Software development methodologies," University of Edinburgh, 2014.

(Re)Estimation: A brave man’s task

So the deal is closed, the client has just ordered a large-scale software package, and the company directors summon the project manager, asking him for an estimate of the time and human resources required for the project's successful implementation. In addition to the numerous variables that must be taken into account for the estimate, the manager must come up with an answer that abides by the resource constraints imposed by upper-level management, constraints that stem from the company's policy and/or specific commitments to the client. Based on the work of Frederick P. Brooks Jr. [1], in this article we present a fundamental misconception of managers regarding cost estimation, and we argue that there are two circumstances in which a good manager should be brave enough to dispute the imposed constraints. Finally, we discuss the advances in software engineering research concerning resource estimation which can help the manager support his opinion.

Pressured by their superiors, many managers fall into the trap of treating manpower and time as interchangeable (hence the title of [1]: "The Mythical Man-Month"). In the software development process there is not necessarily a one-to-one proportional relationship between human resources and development time, due to sequential bottlenecks. No matter how many programmers are assigned to a project, dependencies between sub-tasks can prevent the development time from shrinking (at least in proportion to the added manpower). One could argue that, with proper planning, the workforce could initially concentrate its efforts on the core components of the system and then work on extensions that are independent. This is wrong because of another factor that affects productivity: coordination. The number of communication channels among programmers working on the same task grows quadratically as more people are added to the group, and a large amount of time is wasted trying to resolve disagreements and ensure that all members follow the same strategy when developing. So a good manager should have the courage to present an initial time estimate that violates the one imposed by the board if he foresees that the development process is infeasible within the time restriction, regardless of the manpower available. By doing so, he greatly reduces the risk of future changes to the schedule.
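As a quick illustration of this coordination cost (our own arithmetic, restating the well-known pairwise-channel count rather than quoting a formula from [1]): a team of n people has n(n-1)/2 potential communication channels, so 5 developers share 10 channels, 10 developers share 45 and 20 developers share 190 — more than four times as many channels for only twice the workforce.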

But what if the manager hesitated to disagree with his superiors and accepted the requirements? What is he supposed to do now, when the first milestone has not been achieved and the irrationality of the initial estimate is becoming more and more apparent? This is another chance for him to show his mettle and propose a radically revised estimate. Ideally, the new estimate should differ substantially from the initial one, for two reasons. Firstly, there must be enough time for all the development stages, including possible implementation changes in case of future test failures. Secondly, it is essential that no further rescheduling takes place. Making continuous changes to the schedule is frustrating for the board, the customer and, more importantly, the developers. This re-estimation can be effectively combined with a reduction of scope, where the development team focuses only on the essential functionality of the system and postpones any extra features to later releases. The easy solution of adding programmers to the project can be disastrous for the same reasons as before, plus one more: the newcomers need time to get up to speed and understand the concepts of the project. Not only will these newcomers be unproductive for some time (typically weeks), but the existing staff will also bear the burden of their training. As the writer states in [1]: "adding manpower to a late software project makes it later" (this is known as Brooks's law).

In the early days of professional software engineering, project managers were in a far worse position than today when it came to debates over resource estimation. Back then there were no standard cost estimation methods, and a manager had only his experience and intuition with which to counter the non-technical, purely managerial arguments of his superiors. The year 1981 is considered a landmark for software project estimation because of the introduction of COCOMO (the COnstructive COst MOdel) in the book "Software Engineering Economics" by Barry Boehm [2]. It was the first time that software engineering was approached systematically from an economic perspective. Since then a range of models have been developed for resource estimation, including COCOMO II [3], an updated version of COCOMO designed for modern projects. Despite the progress that has been made in the field, these models are not a panacea when it comes to resource estimation. They are powerful tools in the manager's arsenal but, given the peculiarities of each individual project, human judgment is still a crucial factor in estimation. The project manager still needs to be brave.
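As an illustration of what such models offer, the sketch below implements the effort and schedule equations of Basic COCOMO using the commonly quoted coefficients for its three project modes; the 100 KLOC project size is purely a made-up example, and real estimates would of course be calibrated to the organisation's own data.

    # Basic COCOMO: effort and schedule from estimated size in KLOC.
    # The coefficients are the commonly quoted Basic COCOMO values for the three
    # project modes; the 100 KLOC example size below is purely illustrative.

    MODES = {
        # mode: (a, b, c, d) with effort = a * KLOC**b (person-months)
        #       and schedule = c * effort**d (months)
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        a, b, c, d = MODES[mode]
        effort = a * kloc ** b       # person-months
        schedule = c * effort ** d   # elapsed months
        return effort, schedule, effort / schedule   # avg. staffing level

    for mode in MODES:
        effort, months, staff = basic_cocomo(100, mode)   # hypothetical 100 KLOC project
        print(f"{mode:13s}: {effort:6.1f} person-months over "
              f"{months:4.1f} months (~{staff:.1f} people on average)")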

In this article we demonstrated how irrational requirements from non-technical personnel, combined with a fundamental misconception on the part of the project manager, can affect the software development cycle. We presented the reasons why a manager should be realistic in his resource estimates, even if that means conflict with his superiors, and noted that nowadays he can partially support his position with commonly accepted tools.

[1] F. P. Brooks Jr., The Mythical Man-Month. Reading, MA: Addison-Wesley, 1975 (anniversary ed., 1995).
[2] B. W. Boehm, "Software engineering economics," IEEE Transactions on Software Engineering, vol. SE-10, no. 1, pp. 4-21, 1984.
[3] B. W. Boehm, R. Madachy, and B. Steece, Software Cost Estimation with COCOMO II. Prentice Hall PTR, 2000.
[4] A. B. Pyster and R. H. Thayer, "Guest editors' introduction: Software engineering project management 20 years later," IEEE Software, vol. 22, no. 5, pp. 18-21, 2005.