# Formal verification in large-scale software: worth pondering?

Introduction

Testing is an integral part of the software life-cycle, one in which most professional companies invest heavily. There is an endless debate over which conventional testing methodology is best, yet all of them share one common characteristic: they are good at discovering system misbehaviors but incapable of guaranteeing correctness. Formal verification gives the programmer tools to ensure, at various levels of rigor, that a program meets its specification. Of course, this ability comes at a cost: formal methods are either very computationally expensive or highly non-automated, and therefore not an option for the majority of software companies, which value productivity over the highest attainable quality. In this article we argue that formal verification still has a place in the industrial development cycle of large-scale software despite its computational bottlenecks. As we will show, formal verification is essential for those parts of the software where it is hard for the programmer to foresee all possible cases and the danger of unexpected failure is therefore evident.

Formal verification and its limitations

Formal verification methods are a set of mathematically based techniques, drawing mainly on logical calculi and automata theory, that provide a framework in which a (software or hardware) system’s specification can be expressed in a notation with precisely defined semantics. The validity of such a specification can then be established by inference. Formal verification methods can be divided into two main approaches [1]:

Proof-based verification: In a proof-based approach, the system is represented as a set of logical formulas S and the specification as another formula φ (usually in the same logic). Verification is the process of constructing a proof that S ⊢ φ (i.e. that φ is provable from S). This approach can deal with infinite state spaces but, due to undecidability, requires human guidance for non-trivial proofs.
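As a toy illustration of the proof-based style, consider the following sketch in Lean 4 (the function name `double` and theorem name `double_even` are our own inventions, and we assume the `omega` arithmetic tactic is available). The "system" S is a single function and the specification φ is the theorem that its result is always even; constructing the proof term establishes S ⊢ φ for every input, something no finite test suite can do:

```lean
-- Toy "system" S: a function that doubles a natural number.
def double (n : Nat) : Nat := n + n

-- Specification φ: the result of double is always even.
-- The proof term below discharges S ⊢ φ for all inputs at once;
-- the witness for k is n itself, and linear arithmetic closes the goal.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by unfold double; omega⟩
```

For a one-line proof the guidance is trivial, but as the text notes, realistic proof obligations quickly demand non-trivial human intervention of exactly this kind.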

Model-based verification: Here the system is represented by a model M (a transition system) and the specification is verified against that model using a temporal logic. Temporal logic is a logic in which the truth value of a formula is not constant but changes as the system moves from one state to another. For a given specification φ expressed in temporal logic, the model is checked for semantic entailment of φ (M ⊨ φ). The verification process is fully automated because the models are finite-state.
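The core of the model-based approach can be sketched in a few lines of Python. The checker below (a minimal sketch, not a real model checker; the names `check_invariant`, `initial`, `successors`, and `prop` are our own) verifies a safety property of the form "AG prop" — prop holds in every reachable state — by exhaustive breadth-first exploration, returning a counterexample trace when the property fails:

```python
from collections import deque

def check_invariant(initial, successors, prop):
    """Explicit-state check of the safety property 'AG prop':
    does prop hold in every state reachable from the initial states?
    Returns (True, None) or (False, counterexample_path)."""
    queue = deque((s, (s,)) for s in initial)
    seen = set(initial)
    while queue:
        state, path = queue.popleft()
        if not prop(state):
            return False, path          # shortest counterexample trace
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + (nxt,)))
    return True, None

# Toy model: a counter modulo 8 that is supposed never to reach 5.
ok, trace = check_invariant(
    initial=[0],
    successors=lambda s: [(s + 1) % 8],
    prop=lambda s: s != 5,
)
print(ok, trace)   # False (0, 1, 2, 3, 4, 5) — the counterexample run
```

The counterexample trace is exactly the kind of concrete evidence of a misbehavior that conventional testing struggles to produce systematically.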

Although recent studies [2] confirm the usefulness of formal methods in industry, none of them is widely adopted, largely because of scalability problems. Even for small programs, proof-based methods generate a large number of proof obligations to be discharged, many of them too complicated for automated theorem provers. For these proofs, human guidance is essential (at least for breaking the proof into simpler sub-goals), which is costly for a company. Similarly, the size of the transition system (the model M) grows exponentially with the number of variables in the system, so for very large systems the number of states to be checked becomes intractable; this is also known as the “state explosion problem” [3]. These reasons make the full-scale application of formal methods to industrial projects infeasible.
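The arithmetic behind the state explosion is simple to see: the state space is the Cartesian product of the variables’ domains, so its size is the product of their sizes. A quick illustration (the helper name `state_count` is ours):

```python
# A model's state space is the Cartesian product of its variables'
# domains, so the number of states is the product of the domain sizes.
def state_count(domain_sizes):
    n = 1
    for d in domain_sizes:
        n *= d
    return n

print(state_count([2] * 10))   # 10 boolean variables -> 1024 states
print(state_count([2] * 64))   # 64 boolean variables -> 2**64 states,
                               # far beyond explicit enumeration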

Apply it where it matters

Apart from critical systems (e.g. medical software/hardware, autopilot software or nuclear reactor management software), where full-scale formal verification is the only option, it is our opinion that formal verification can be beneficial even in non-critical systems despite the extra cost. The idea is simple: apply formal verification only to the most error-prone parts of the software and leave the rest to conventional testing techniques. The key, of course, is to correctly identify which parts of the project are the most error-prone. To answer that question we first have to think about the people who actually write the software. No matter how experienced a programmer is, there is an upper bound on how much information the human mind can process simultaneously; when that bound is exceeded, the probability of introducing a bug increases. For example, a programmer is unlikely to make an error when writing output commands, but the opposite is true when synchronizing threads. Given all this, we think that software complexity metrics [4] (such as cyclomatic and Halstead complexity, coupling, cohesion, etc.) should be used to identify which software segments require deeper analysis, and formal methods applied only to them. There, formal methods can be invaluable, because they reveal inconsistencies and ambiguities that would otherwise be very difficult, or even impossible, to discover (e.g. a counterexample that contradicts a programmer’s assumption about the system). Finally, the insight gained from applying formal methods may lead to refactoring which, in turn, could improve the system’s efficiency or extensibility.
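To make the triage idea concrete, here is a rough sketch of one such metric, cyclomatic complexity, computed over Python source with the standard `ast` module. The helper `cyclomatic_complexity` is our own simplified approximation (1 plus the number of branch points), not a standard tool; real metric suites are considerably more careful:

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic-complexity estimate: 1 plus the number of
    decision points (branches, loops, boolean operators) in the code."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.IfExp, ast.For, ast.While,
                    ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

# A trivial output routine versus a small synchronization-flavored loop.
simple = "def emit(x):\n    print(x)\n"
tangled = (
    "def drain(a, b, lock):\n"
    "    while a or b:\n"        # loop + boolean operator: 2 decision points
    "        if lock:\n"         # branch: 1 more
    "            a -= 1\n"
    "        else:\n"
    "            b -= 1\n"
)
print(cyclomatic_complexity(simple))    # 1: leave it to conventional testing
print(cyclomatic_complexity(tangled))   # 4: a candidate for deeper analysis
```

In the triage scheme proposed above, segments scoring high on such metrics would be the ones handed to formal verification, while low-scoring code stays with conventional testing.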

Conclusion

In this article we presented the two main approaches to formal verification, their scalability problems, and how industry should include them in the software life-cycle. Code complexity is the major factor behind difficult-to-detect bugs, so the most powerful yet expensive verification techniques should be applied only to code segments where complexity metrics indicate potential danger.

References

[1] M. Huth and M. Ryan, Logic in Computer Science: Modelling and Reasoning about Systems. Cambridge University Press, 2004.
[2] J. Woodcock, P. G. Larsen, J. Bicarregui, and J. Fitzgerald, “Formal methods: Practice and experience,” ACM Computing Surveys (CSUR), vol. 41, no. 4, p. 19, 2009.
[3] A. Valmari, “The state explosion problem,” in Lectures on Petri Nets I: Basic Models. Springer, 1998, pp. 429–528.
[4] A. Clark, Measuring Software. University of Edinburgh, 2014.