Continuous Delivery: An Easy Must-Have for Agile Development

Introduction

Everybody working in software development has heard them when software quality assurance comes up: terms that begin with “Continuous” and end with “Integration”, “Build”, “Testing”, “Delivery” or “Inspection”, to name just a few. The differences between these terms are sometimes hard to tell, and their meanings vary depending on who uses them. This post discusses how easily Continuous Delivery can be implemented.

For clarification, Continuous Delivery is defined as described by Humble and Farley in their book “Continuous Delivery” [1]. This highly recommended book describes a variety of techniques (covering all of the other terms mentioned in the previous paragraph) for continuously assuring software quality. Adopting these techniques requires neither much effort nor much experience and should be done in every software project. Especially in large-scale software projects, Continuous Delivery helps to maintain high software quality.

Errors First Discovered by the Customer

In a software project with many engineers working on the same code base, unexpected side effects of source code changes are very likely to result in erroneous software. If there are automated unit tests, most of these errors are detected automatically. Unfortunately, however, some unexpected run-time side effects only occur when the software is running on a particular operating system. In a normal development process, such errors are detected at the worst point possible: when the customer deploys or uses the software. This results in high expenses for fixing the issue urgently.

Continuous Delivery was developed to prevent those kinds of errors. As Carl Caum from PuppetLabs puts it in a nutshell, Continuous Delivery does not mean that a software product is deployed continuously, but that it is proven to be ready for deployment at any time [2]. As described by Humble and Molesky in [3], Continuous Delivery introduces automated deployment tests to achieve this goal of deployment readiness at any time. This post focuses on those deployment tests, as they are the core of Continuous Delivery.

Implementing and Automating Continuous Delivery

To prove that software works in production, it needs to be deployed on a test system. This section explains how to implement such automated deployment tests.

Firstly, the introduction of a so-called DevOps culture is useful. This means closer collaboration between software developers and operations staff [3]. Each developer should understand the basic operations tasks, and vice versa, in order to build sophisticated deployments. Even though [3] describes this step as necessary, from my point of view such a culture is advantageous for Continuous Delivery but not mandatory for succeeding: automated deployment tests can be developed without the help of operations, although it is certainly more difficult. More detailed information about DevOps can be found, for example, in the book “DevOps for Developers” by Michael Hüttermann [4].

Secondly, as explained in a blog post by Martin Fowler [5], it is crucial to automate everything within the process of delivering software. The following example shows a simplified, ideal Continuous Delivery process:

  1. Developer John modifies product source code
  2. Test deployment is triggered automatically due to a change in the version control system
  3. Deployment is tested automatically, giving e-mail feedback to John that his source code breaks something in production
  4. John realizes he forgot to check in one file and fixes the error promptly
  5. Steps 2 and 3 repeat; this time John does not receive an e-mail, as the deployment tests no longer find any misbehaviour in the product.

For example, such a process can be automated easily with the software Jenkins [6] and its Deployment Pipeline Plugin. Detailed instructions for such a setup can be found in the blog post [7].
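To make these automated deployment tests more concrete, here is a minimal sketch of the kind of smoke test such a pipeline could run against the freshly deployed test system. It is not taken from [7]; the health-check URL and the expected behaviour are assumptions for illustration, and in a Jenkins setup this script would simply be one build step whose exit code decides whether the pipeline stage turns green or red.

    # Minimal deployment smoke test (sketch): the deployment counts as working
    # only if the freshly deployed application answers on its health-check URL.
    # The URL below is an assumption for illustration purposes.
    import sys
    import urllib.request

    HEALTH_URL = "http://test-server.example.com:8080/health"

    def deployment_is_healthy(url=HEALTH_URL, timeout=10):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        if deployment_is_healthy():
            print("Deployment test passed")
            sys.exit(0)   # Jenkins marks the build as successful
        print("Deployment test failed")
        sys.exit(1)       # non-zero exit: Jenkins fails the build and notifies the committer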

However, such a continuous process is not a replacement for other testing (unit testing, etc.) but an additional layer of software quality assurance.

Steven Smith states in his blog post [8] that Continuous Delivery requires radical organisational changes and is therefore difficult to introduce to a company. I partly disagree, because it depends on the type of company. If a company uses old-fashioned, waterfall-like development methods, Smith is right on that point. For an agile software company, however, Continuous Delivery is little more than additional automated testing. In this case it does not require people to change their habits, as the developers are already used to continuous testing methods. The only additional work is maintaining deployment scripts and writing deployment-specific tests.

Configuration Management Systems and Scripting

In order to perform deployment tests, scripts are needed to automate them. These scripts can be written in any scripting language, for example in Bash (shell scripts). However, there are more sophisticated approaches using so-called Configuration Management Systems such as Puppet [9] or Chef [10]. According to Adam Jacob’s contribution to the book “Web Operations”, in the section “Infrastructure as Code” [11], using a Configuration Management System’s scripting language has the following advantages:

Firstly, such deployment scripts are declarative: the programmer only describes what the system should look like after executing the script, without having to describe in detail how this should be achieved. Secondly, the scripts are idempotent, so they only apply the modifications to the system that are actually necessary, and executing the same script on the same host always leads to the same state, regardless of how often it is run [11].
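To make the declarative and idempotent properties more tangible, here is a rough sketch in Python (rather than in Puppet’s or Chef’s own language, so purely illustrative): the desired state is declared as data, and changes are applied only if the host does not already match it, so the script can be run any number of times with the same result. The file path and content are made up for the example.

    # Sketch of an idempotent, declarative deployment step: declare the desired
    # state (this file exists with this content) and only modify the system if
    # it does not already match. Path and content are illustrative assumptions.
    from pathlib import Path

    DESIRED_STATE = {
        "path": Path("/etc/myapp/app.conf"),
        "content": "port=8080\nlog_level=info\n",
    }

    def apply(state):
        path, content = state["path"], state["content"]
        if path.exists() and path.read_text() == content:
            print(f"{path}: already in the desired state, nothing to do")
            return
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
        print(f"{path}: desired state applied")

    if __name__ == "__main__":
        apply(DESIRED_STATE)   # running this once or ten times leaves the host in the same state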

For these reasons, the scripting facilities of Configuration Management Systems are superior to plain Bash scripting: the resulting scripts are more readable, more maintainable and less complex than equivalent shell scripts.

Conclusion

In my experience in the software business, it is easy to introduce Continuous Delivery step by step in a company that already thinks in an agile way. The main things to focus on are the following: firstly, such an implementation should be fully automated and integrated with the version control system. Secondly, a Configuration Management System is highly recommended because it makes deployment scripting easier, and such scripts are more maintainable, which saves resources.

The goals achieved by implementing Continuous Delivery are twofold: firstly, the release process is optimised, making it possible to release almost automatically. Secondly, developers get immediate feedback when their source code does not work in a production-like environment.

In conclusion, Continuous Delivery leads to substantially better software and can be introduced into an agile company without much effort.

References

[1] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Pearson Education, 2010.
[2] C. Caum, “Continuous Delivery Vs. Continuous Deployment: What’s the Diff?,” 2013. [Online]. Available: http://puppetlabs.com/blog/continuous-delivery-vs-continuous-deployment-whats-diff [Accessed 2/2/2014].
[3] J. Humble and J. Molesky, “Why Enterprises Must Adopt Devops to Enable Continuous Delivery,” Cutter IT Journal, vol. 24, no. 8, p. 6, 2011.
[4] M. Hüttermann, DevOps for Developers, Apress, 2012.
[5] M. Fowler, “Continuous Delivery,” 2013. [Online]. Available: http://martinfowler.com/bliki/ContinuousDelivery.html [Accessed 2/2/2014].
[6] “Jenkins CI,” 2014. [Online]. Available: http://jenkins-ci.org/ [Accessed 2/2/2014].
[7] “Continuous Delivery Part 2: Implementing a Deployment Pipeline with Jenkins « Agitech Limited,” 2013. [Online]. Available: http://www.agitech.co.uk/implementing-a-deployment-pipeline-with-jenkins/ [Accessed 2/2/2014].
[8] S. Smith, “Always Agile · Build Continuous Delivery In,” 2013. [Online]. Available: http://www.stephen-smith.co.uk/build-continuous-delivery-in/ [Accessed 3/2/2014].
[9] “What is Puppet? | Puppet Labs,” 2014. [Online]. Available: http://puppetlabs.com/puppet/what-is-puppet [Accessed 2/2/2014].
[10] “Chef,” 2014. [Online]. Available: http://www.getchef.com/chef/ [Accessed 2/2/2014].
[11] A. Jacob, “Infrastructure as Code,” in Web Operations: Keeping the Data On Time, O’Reilly Media, 2010.

The Role of the Tester’s Knowledge in Exploratory Software Testing, by s1340691 Eshwar Ariyanathan

Key Terms:

Exploratory testing (ET): an approach to software testing in which test design, test execution and learning take place simultaneously.

Grounded Theory: a research method in which new findings emerge from the systematic analysis of data.

Verification & Validation: validation means checking whether the product meets the customer’s needs; verification means checking whether the product complies with its specification.

Test Oracle: a mechanism used to distinguish correct from incorrect results during software testing.
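As a small illustration of the oracle idea (my own example, not from the paper), an oracle can be expressed as a property the output must satisfy rather than a fixed expected value; in exploratory testing, the tester’s own knowledge plays this role instead of code:

    # A simple programmatic test oracle (illustrative example): it decides
    # whether the output of a sorting routine is correct without an explicit
    # expected-output listing, by checking properties of the result.
    def sort_oracle(inputs, output):
        is_ordered = all(a <= b for a, b in zip(output, output[1:]))
        same_elements = sorted(inputs) == sorted(output)
        return is_ordered and same_elements

    print(sort_oracle([3, 1, 2], [1, 2, 3]))  # True  -> verdict: correct
    print(sort_oracle([3, 1, 2], [1, 3, 2]))  # False -> verdict: failure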

Paper: Juha Itkonen, Mika V. Mantyla and Casper Lassenius, “The Role of the Tester’s Knowledge in Exploratory Software Testing,” IEEE Transactions on Software Engineering, IEEE Computer Society, 11 Sept. 2012.

PREVIOUS WORK:

The previous work shows that Exploratory Testing (ET) is widely used in the software industry, and there is growing evidence that industry testers see value in it. This increasing interest among testers paves the way for research questions in this domain.

RESEARCH QUESTIONS:

There is little clarity about how exploratory testing works and why it is used. The paper therefore asks:

What types of knowledge are used in exploratory testing?

How do testers apply that knowledge for testing purposes?

What types of failures are detected by exploratory testing?

RESEARCH WORK:

A study was conducted in an industrial setting in which 12 testing sessions were video-recorded. The participating testers were asked to think aloud while performing functional testing, and the researcher occasionally asked them to clarify what they were doing. Each testing session was followed by a 30-minute interview to discuss the results.

Grounded theory was applied to analyse what the testers thought and what types of knowledge they used.

The paper discusses how the testers found failures and errors using their personal knowledge, without writing test case descriptions.

The knowledge used in the process was classified into domain knowledge, system knowledge and general software engineering knowledge. It was found that the testers used their knowledge as a test oracle to verify the correctness of results, and as a guide for selecting the objects of test design.

A substantial number of failures, called windfall failures, were found outside the focus area of testing through exploratory investigation.

The paper concludes that the approach used by exploratory testers clearly differs from the way the test-case-based paradigm works.

RESULTS:

The experiments produced the following results:

* The testers spotted errors in the code based on their personal experience and knowledge, without writing test case descriptions.

* Personal knowledge was characterised as the combination of system knowledge, domain knowledge and general software engineering knowledge.

* The testers applied this knowledge both as a test oracle and as a guide for test design.

* The failures found in the test process were often found incidentally, i.e. outside the area that was currently being tested.

* Failures were classified according to the inputs or conditions that interact with them.

* Failures related to domain knowledge were straightforward to provoke.

* Failures related to system knowledge and general software engineering knowledge were more difficult to provoke.

POINTS OF AGREEMENT:

* I agree with the results because this research work took a considerable amount of time, was carried out in collaboration with industry, and employed only expert testers.

* The research used a substantial number of testing sessions (12) before drawing conclusions, so I agree with the results produced.

* 20% of the failures found were windfall failures, i.e. they were found incidentally through the knowledge of the testers. I agree that exploratory testing helps to identify failures in the code and provides a new approach to finding defects.

* 45% of the inputs or conditions used in the process were found to provoke failures. I accept this figure because a considerable amount of time and effort went into producing the results.

POINTS OF DISAGREEMENT:

* Although the experiment found that exploratory testers were effective at finding defects or failures by means of their knowledge, the term “knowledge used in testing” needs to be defined more clearly. The important questions are: is it possible for only experts to do this type of testing? Is effective testing possible only if the tester has previous work experience? The process by which a novice could acquire this knowledge is not clearly described.

* Because the term “knowledge” and the process of attaining it are not clearly defined by the researchers, I still wonder whether this approach to testing is technically feasible to implement.

* Moreover, exploratory testing cannot be used in industry as a replacement for existing testing approaches, as it would be very costly and requires experts with many years of experience.

* The experimental observations show that nearly 20% of defects or failures were found incidentally, which is taken as proof that exploratory testing is useful. My argument is that no testing approach other than exploratory testing was used in the experiment, so naturally, with experts involved, 20% of the defects were found this way. If exploratory testing were compared with another testing approach in the same study, we could identify a clearly superior method of testing.

* It should also be noted that, although exploratory testers have more knowledge, whether exploratory testing can function independently as a method of testing is a serious question, owing to its costly nature (the need for experts, the time frame, etc.).

CONCLUSION:

The research was carried out with experts and under controlled conditions, but it fails to answer the questions of what the knowledge of exploratory testers actually is and how it can be acquired.

Further research could compare exploratory testing with prevalent methods of testing in order to identify its effectiveness.


Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development, by s1340691 Eshwar Ariyanathan

Introduction to key terms:

1) Test-Driven Development (TDD): an agile software development methodology in which the test cases are written before the actual code. This enables the developer to refactor the code and find bugs at an early stage of the development process. (A small sketch of the test-first cycle follows these definitions.)

2) Code Inspection (CI): a software inspection process in which experts analyse the source code to look for bugs or errors, after which the developers rework the code to correct them.
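To illustrate the test-first cycle from definition 1), here is a minimal sketch. It is written in Python with unittest purely for brevity (the experiment described below used Java and JUnit), and the spam-filter function is made up: the tests are written first, fail, and then just enough code is written to make them pass.

    # Step 1 (red): write the tests before the implementation exists.
    import unittest

    class TestSpamFilter(unittest.TestCase):
        def test_message_with_spam_words_is_flagged(self):
            self.assertTrue(is_spam("Buy cheap pills now!!!"))

        def test_normal_message_is_not_flagged(self):
            self.assertFalse(is_spam("Meeting moved to 3pm"))

    # Step 2 (green): write the simplest implementation that makes the tests pass.
    SPAM_WORDS = {"buy", "cheap", "pills"}

    def is_spam(message):
        words = set(message.lower().split())
        return bool(words & SPAM_WORDS)

    if __name__ == "__main__":
        unittest.main()   # step 3: with passing tests in place, the code can be refactored safely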

Authors and paper: J. W. Wilkerson (Sam & Irene Black School of Business, Pennsylvania State University, Erie, PA, USA), J. F. Nunamaker, Jr. and R. Mercer, “Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development,” IEEE Transactions on Software Engineering, vol. 38, no. 3, pp. 547-560, May-June 2012, doi:10.1109/TSE.2011.46.

PREVIOUS WORK:

1) Previous research shows that code inspection has been studied in many scenarios and industrial applications and has produced strong results as a defect-reduction practice.

2) Researchers working on agile methodologies claim that Test-Driven Development is the better testing approach.

RESEARCH QUESTION:

To find out which of Test-Driven Development (TDD) and Code Inspection (CI) is the more effective approach to reducing defects.

SUMMARY OF THE RESEARCH DONE:

The experiment was based on a programming assignment, a spam filter written in Java, which was done by 40 undergraduate students. The students were divided into four groups: a TDD group, a CI group, a TDD+CI group and a control group that used neither TDD nor CI.

Only 29 students were included in the final analysis; the others dropped out before the assignment was over.

Code inspections were performed by the same students using an online collaborative tool, and the students were trained in writing tests with JUnit before the experiment was conducted.

The total numbers of defects found by the TDD group and the CI group were compared and analysed to produce the experimental results.

RESULTS PRODUCED FROM EXPERIMENTS:

* Code inspection was better at reducing defects than test-driven development.

* Combining code inspection with TDD produced better results, but not statistically significantly so.

* Code inspection was slower than the TDD approach at finding defects.

* TDD has to be clearly defined, as the procedure is applied with many day-to-day variations.

DISAGREEING ARGUMENTS:

I have several points of disagreement regarding both the experimental results and the method employed for the experiment.

Firstly, the experiment concludes that code inspection is better than TDD in terms of defect reduction. I do not agree with this point, because the conclusion rests on a single programming assignment of fewer than 600 lines of code, and there is insufficient justification for why code inspection is better.

Secondly, the experiment was done with university students of varying levels of ability. How can we accept a result produced by people who are neither experts in the field nor experienced enough to ensure the validity of the experiment?

Thirdly, the experiment lasted only about a week, which is very short by research standards. When the time taken for an experiment is so short, jumping to conclusions is not acceptable.

My next point is that, regarding test-driven development, the students who performed the experiment were beginners. They were only given some tutorials and lectures on JUnit before the experiment was performed; clearly, they were not experts in testing with JUnit.

My next argument is that the results state that code inspection detected 23% of the defects compared to 11% for TDD, but the paper does not say what types of defects were found by the TDD group or by the code inspection group.

The other important point to note is that the students involved in the experiment had varying levels of ability in programming and testing methods, so the deviation in the results might also have arisen because the students doing code inspection were better testers than those who did test-driven development.

So the results of a small experiment, performed by non-experts, in very little time, without background and without a clear set of parameters, cannot be considered valid.

However, this experiment could be a spark for industry to carry out further research in this field and establish which testing approaches are effective.

CONCLUSIONS:

My conclusion is that further research on this question should involve experts in both TDD and code inspection and should be conducted in a controlled manner with a sufficient amount of time.

Future research should also take into account the different parameters involved in testing, and vary the size and complexity of the code, before drawing conclusions about which method of testing is superior.

As far as this research paper is concerned, I agree with neither the approach nor the results obtained.


Using Social Networks to Analyse the Stakeholders of Large-Scale Software Projects

1- Introduction


1-1- Stakeholders?! Mmmm I heard this word before!
Whatever your software project is, you have stakeholders! If there are no stakeholders, then why and for whom would you do it at all?! All the people who are engaged in a software project are called “stakeholders”. Stakeholders are the ones who affect, and are affected by, the actions in the system we are building. Employees, customers and students are all examples of stakeholders. The importance of stakeholders comes from their knowledge of the processes we are working on.
 
1-2- Ok? So What’s the point?
A lot of software projects fail just because people ignore or neglect the role of stakeholders. Based on reports [1], ignoring stakeholders is the most common reason for failures in software projects.  I don’t want to spend the whole time convincing you how important stakeholders are. JUST BELIEVE ME and let’s continue.
 
1-3- So stakeholders are important.. then?
It was once said, “Not all your fingers are the same!”. The problem isn’t really just about ignoring the role of stakeholders, it is about dealing with them all in the same way. They aren’t the same! How can I deal with the manager of the organisation just like I deal with the concierge?! Yes, they both interact with the system, but not in the same manner and not with the same influence. The main idea is about building a social network where nodes are the stakeholders and the links between them are weighted recommendations.
 

2- Using Social Networks to represent stakeholders and their importance: More details about the approach


2-1- Ok! So it is about building a weighted graph? How to put the weights?
Yes! We will represent the stakeholders and their influence by using a graph. Weights will be assigned by allowing each stakeholder to recommend other stakeholders.
 
2-2- I just can’t get the idea! Why to build a social network?
When we build a social network we can simply use all the Graph properties and the social network measures.
 
2-3- Well! Things will be clearer with an example!
Let’s take a simple example! We have 4 stakeholders: Alice is the manager of the system, Bob is a programmer, Carl is a post-graduate student and Dave is an undergraduate student. Every one of them recommends the people with whom he/she interacts. These recommendations take the form:
<Stakeholder, Role, Salience> where:
–         Stakeholder: his/her name
–         Role: the role of the recommended stakeholder
–         Salience: an ordinal number representing the importance of this person (let’s say it’s between 1 and 5, where 5 is the most important).
For example, Alice recommends both Bob and Carl as:
<Bob, programmer, 4>
<Carl, PostGradStudent, 2>
And so on for all the others. After writing such triples for everyone, we build the network.
[Figure: the recommendation network built from the stakeholders’ triples]
2-4- So? We have a cute social network! But how to use it?
Simply, we will prioritise our stakeholders using social network measures such as:
          2-4-1- Betweenness Centrality: it ranks a stakeholder based on its role as a “broker” between various stakeholders. It counts the number of shortest paths in the network that go through this stakeholder. For the previous example, Carl is the most central one as both Alice and Bob use him to go to Dave.
         2-4-2- PageRank: this is Google’s original algorithm for ranking search results. In simple words, the importance of a node (stakeholder) is determined by the importance of the nodes referring to it. In the previous example, Alice and Dave each have an incoming link with the value (5). However, Alice is more important in this measure because she is recommended by Bob, who has a higher rank than Carl, who recommends Dave (since Bob has a higher value of recommendations going into him!). It is the same as on the web, where the importance of a website is really determined by the importance of the websites that point to it!
         2-4-3- Other Measures: I don’t want to turn my blog post into a graph measures lecture! So I won’t list all of the things you can measure, but other ideas are related to the total number of arrows and their values (degree of a node, in-degree, out-degree, etc.). A small sketch of computing the two measures above on our example follows this list.
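For the curious, here is a rough sketch of how the two measures above could be computed for our Alice/Bob/Carl/Dave example using Python and the networkx library (assuming it is installed). The post only gives some of the saliences, so the remaining edge weights are assumptions for illustration:

    # Build the recommendation network and rank the stakeholders.
    # Edges run from the recommender to the recommended stakeholder,
    # with the salience as the edge weight.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("Alice", "Bob", weight=4)   # <Bob, programmer, 4>
    G.add_edge("Alice", "Carl", weight=2)  # <Carl, PostGradStudent, 2>
    G.add_edge("Bob", "Alice", weight=5)   # assumed salience
    G.add_edge("Bob", "Carl", weight=3)    # assumed salience
    G.add_edge("Carl", "Dave", weight=5)   # assumed salience

    # Betweenness centrality: how often a stakeholder lies on shortest paths
    # between others (computed on the unweighted structure, since networkx
    # treats edge weights as distances rather than strengths).
    print(nx.betweenness_centrality(G))

    # PageRank: a stakeholder is important if important stakeholders recommend
    # him or her; the saliences influence how the rank is distributed.
    print(nx.pagerank(G, weight="weight"))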
 
2-5- mmmm.. that’s cool! But which measure to use?
Actually, it depends on the problem we are solving and on the domain. We may use a mixture of these. I don’t want to go into the details of this but it is interesting! Lots of papers are about this. Just google it!
2-6- Has anyone tried this method practically?
YES!! StakeNet is a tool defined and used by people from University College London (UCL) on a LARGE-SCALE SOFTWARE PROJECT with about 30,000 users [2]. The tool was used to identify and prioritise the stakeholders of UCL’s new access control project, and it performed better than the usual methods for determining the importance of stakeholders (more details about the evaluation process can be found in the paper [2]).
 

3- Conclusion


Well, in my view, using a social network to show the stakeholders and their relations is extremely useful. The first reason is that you won’t ignore them, and the second is that you will give them priorities. Another important point is that all the stakeholders get a say in determining other people’s importance. Using this representation for large-scale software projects will be a great thing for all these reasons.
 

– The example and image idea are from reference [2]. I re-drew it in a more “cool” way 😉

 

[References]
1- D. C. Gause and G. M. Weinberg. Exploring Requirements: Quality Before Design. Dorset House Publishing, 1989.
2- Lim, Soo Ling and Quercia, Daniele and Finkelstein, Anthony, StakeNet: using social networks to analyse the stakeholders of large-scale software projects, Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering-Volume 1, ACM, 2010.

(Re)Estimation: A brave man’s task

So the deal is closed: the client has just ordered a large-scale software package, and the company directors summon the project manager, asking him for an estimate of the time and human resources required for the project’s successful implementation. In addition to the numerous variables that must be taken into account, the manager must come up with an answer that abides by the resource constraints imposed by upper-level management, constraints that reflect the company’s policy and/or specific commitments to the client. Based on the work of Frederick P. Brooks, Jr. [1], this article presents a fundamental misconception of managers regarding cost estimation and argues that there are two circumstances in which a good manager should be brave enough to dispute the imposed constraints. Finally, we discuss the advances in software engineering research concerning resource estimation which can help the manager support his opinion.

Being pressured by their superiors, many managers fall into the trap of treating manpower and time as interchangeable (hence the title of [1]: “The Mythical Man-Month”). In software development there is not necessarily a proportional relationship between human resources and development time, because of sequential bottlenecks: no matter how many programmers are assigned to a project, dependencies between sub-tasks can prevent the development time from shrinking in proportion to the added manpower. One could argue that, with proper planning, the workforce could initially concentrate its efforts on the core components of the system and then work on extensions that are independent. This is undermined by another factor that affects productivity: coordination. The number of communication channels among programmers working on the same task grows roughly with the square of the team size, and a large amount of time is wasted trying to resolve disagreements and ensure that all members follow the same strategy when developing. So a good manager should have the courage to present an initial time estimate that violates the one imposed by the board if he foresees that the development process is infeasible within the time restriction, regardless of the manpower available. By doing so, he greatly reduces the risk of future changes to the schedule.
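To put a number on the coordination cost: Brooks observes that if every member of a team must coordinate with every other member, the number of communication channels is n(n-1)/2, so it grows much faster than the headcount. A quick sketch:

    # Pairwise communication channels in a team of n people: n * (n - 1) / 2.
    def channels(n):
        return n * (n - 1) // 2

    for n in (3, 5, 10, 20, 50):
        print(f"{n:>2} people -> {channels(n):>4} communication channels")
    # 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225:
    # doubling the team far more than doubles the coordination overhead.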

But what if the manager hesitated to disagree with his superiors and accepted the requirements? What is he supposed to do now that the first milestone has not been reached and the irrationality of the initial estimate is becoming more and more apparent? This is another chance for him to show his mettle and propose a radically altered estimate. Ideally, the new estimate should differ substantially from the initial one, for two reasons. Firstly, there must be enough time for all the development stages, including possible implementation changes in case of future test failures. Secondly, it is essential that no further rescheduling takes place: making continuous changes to the schedule is frustrating for the board, the customer and, more importantly, the developers. This re-estimation can be effectively combined with a reduction of the task, where the development team focuses only on the essential functionality of the system and postpones any extras to later releases. The easy solution of adding programmers to the project can be disastrous for the same reasons as before, plus one more: the newcomers need time to get into things and understand the concepts of the project. Not only are these newcomers not going to be productive for some time (typically weeks), but the remaining staff will bear the burden of their training. As the author states in [1]: “adding manpower to a late software project makes it later” (this is known as Brooks’s law).

In the early days of professional software engineering, project managers were in a far worse position than today when it comes to the resource estimation debate. Back then, there were no standard cost estimation methods, and a manager had only his experience and intuition to set against the non-technical, purely managerial arguments of his superiors. The year 1981 is considered a landmark for software project estimation because of the introduction of COCOMO (COnstructive COst MOdel) in the book “Software Engineering Economics” by Barry Boehm [2]. It was the first time that software engineering was approached systematically from an economic perspective. Since then, a range of models has been developed for resource estimation, including COCOMO II [3], an updated version of COCOMO designed for modern projects. Despite the progress that has been made in the field, these models are not a panacea. They are powerful tools in the manager’s arsenal but, given the peculiarities of each individual project, human judgment is still a crucial factor in estimation. The project manager still needs to be brave.
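As a flavour of what such a model offers the manager, here is a sketch of the original basic COCOMO formulas (effort = a · KLOC^b person-months, schedule = c · effort^d months) with Boehm’s published coefficients for a small, in-house (“organic”) project; COCOMO II refines this with scale factors and many cost drivers, so this is only the simplest form of the approach:

    # Basic COCOMO, organic mode: effort and schedule estimated from program size.
    # Coefficients are Boehm's values for organic projects; COCOMO II adds
    # scale factors and cost drivers on top of this basic idea.
    def basic_cocomo_organic(kloc):
        effort = 2.4 * kloc ** 1.05        # person-months
        schedule = 2.5 * effort ** 0.38    # months of calendar time
        return effort, schedule

    effort, schedule = basic_cocomo_organic(32)   # e.g. an estimated 32 KLOC project
    print(f"Estimated effort:   {effort:.1f} person-months")
    print(f"Estimated schedule: {schedule:.1f} months")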

In this article we demonstrated how irrational requirements from non-technical personnel, combined with a fundamental misconception held by the project manager, can affect the software development cycle. We presented the reasons why a manager should be realistic about his resource estimates, even if that means conflict with his superiors, and noted that nowadays he can partially support his opinion with commonly accepted tools.

[1]: Brooks, Frederick P. The mythical man-month. Vol. 1995. Reading: Addison-Wesley, 1975.
[2]: Boehm, Barry W. “Software engineering economics.” Software Engineering, IEEE Transactions on 1 (1984): 4-21.
[3]: Boehm, Barry W., Ray Madachy, and Bert Steece. Software Cost Estimation with Cocomo II with Cdrom. Prentice Hall PTR, 2000.
[4]: Pyster, Arthur B., and Richard H. Thayer. “Guest Editors’ Introduction: Software Engineering Project Management 20 Years Later.” Software, IEEE 22.5 (2005): 18-21.

Introduction

This is the class blog for the Software Architecture, Process and Management course session 2013-2014 for Informatics at the University of Edinburgh. Students taking the class will all post three or more blog posts related to large-scale, long-term software projects. The course web page can be found here.

It is hoped that each blog-post will evoke lively, but polite, debate in the comments sections.