The effectiveness of agile practices for embedded software projects according to industry

Why Embedded software?

As ubiquitous computing pushes the Internet of Things revolution forward, embedded software development is coming under more pressure and facing new challenges. We are seeing impressively short time-to-market life cycles, products that pack ever more computing power, and ever-changing demands from customers. This is a scenario we would readily associate with an agile software development method, yet embedded software has been known to be reluctant to embrace this relatively recent approach, preferring the more traditional approaches it inherited from the hardware product life cycle.

What problems does embedded software development face in going agile?

Embedded software faces specific hurdles that limit the application of agile practices: the difficulty (if not outright impossibility) of releasing upgrades or fixes to embedded software, the requirement that each new software release remain compatible with a range of hardware already supplied to customers, and the close ties to the development of the hardware itself. It has been suggested that, due to these limitations, embedded software companies would have to adjust agile practices to better accommodate their needs. One pitfall of this attitude, however, is that only the easy practices get adopted, while those that would really have changed the company are rejected with the excuse that they need customisation [1].

What is the feedback from industry?

In [2], O. Salo and P. Abrahamsson present the results of a survey of 13 embedded software development companies that had shown interest in adopting agile practices. Since the sample is known to be biased in favour of adopting agile methods, the authors present this work not as a survey of the whole industry, but as a means of better understanding the usefulness such companies report from their experience of agile. The survey investigated the actual implementation of two agile methods, Scrum and XP, and their perceived usefulness among project managers and software developers of large, medium and small companies.

Results

One clear point that emerges from the survey is that Extreme Programming (XP) practices were more popular than Scrum: 54% of responses stated that XP practices were systematically, mostly or sometimes applied, as opposed to 27% for Scrum over the same range. Notably, however, knowledge of Scrum practices seems to be less widespread than knowledge of XP, with the percentage of “never” and “I don’t know” responses for Scrum practices much higher than for XP.

Another possible reason for this discrepancy is a structural difference between XP and Scrum: XP is presented as a set of 12 practices that can be implemented somewhat independently of each other, whereas Scrum is a whole “process framework” that is not easily implemented successfully in separate parts, making adoption a more disruptive move. Indeed, by definition Scrum should NOT be implemented in separate parts, as this undermines the whole framework [3].

The feedback regarding the experience was positive for both XP and Scrum, although even here XP received more positive feedback (90%) than Scrum (70%).

One comment by the authors struck me as strange: they claim that the feedback from actual experience proved more positive than the experience predicted by those who had not yet implemented agile techniques, and they use this to argue that agile techniques are better than they are perceived to be before adoption. I find it hard to draw this conclusion, as the comparison could clearly be biased: people with negative expectations of a technique will probably not take it up, inflating the negative predictions. For all we know, these companies may be justified in their belief, since the practice may simply not be effective for their company or project.

Comments about the research

What appears clearly from the research is that companies implement some agile principles more than others. While this may work for XP, it is strange for Scrum, which should be implemented as a whole rather than in parts (the survey split Scrum by its meetings, and adopting just a few of them may not make sense). It may, however, be a clear indication that agile techniques need to be modified to cater for embedded software development, since some principles seem to be more beneficial than others.

Another point to note is that Test-Driven Development was among the least-implemented of the XP practices. I find this surprising, since Test-Driven Development would greatly aid the identification of problems very early in the implementation stage, and has been identified as a means of increasing the reliability of the code developed. I wonder whether this percentage still holds today, five years after the survey was carried out.

I also found the range of risk levels identified by the respondents interesting. Although the authors did not discuss this in detail, I believe the risk assessment of the software may greatly influence the way, and the extent to which, agile software development is implemented. In this survey, most projects involved software whose failure could at worst cause loss of discretionary funds or loss of comfort; only two projects involved potential loss of life or worse. It would be interesting to investigate the correlation between the risk involved and the way a company adopts agile techniques, as I speculate that it would be very hard for companies producing high-risk equipment or embedded software to exchange long-standing development methods, with their stringent stages and controls, for a more flexible and less documented agile approach.

Conclusion?

I believe that, given the advances in software development methods and the rapid increase in the smart devices that surround us, it would be very interesting to carry out this research again now, five years on, and observe current trends, with a clearer indication of the complexity, scale and risk associated with the embedded software. Nevertheless, this paper provides good evidence that projects applying agile techniques to embedded software, a field usually considered conservative, found them useful and reported a positive experience.

References

[1] B. Boehm, “Get ready for agile methods, with care,” Computer, vol. 35, no. 1, pp. 64–69, 2002.

[2] O. Salo and P. Abrahamsson, “Agile methods in European embedded software development organisations: a survey on the actual use and usefulness of extreme programming and scrum,” IET Software, vol. 2, no. 1, pp. 58–64, 2008.

[3] K. Schwaber and J. Sutherland. (2011) The Scrum Guide – The Definitive Guide to Scrum: The Rules of the Game.

Easy facts about LOC used in software measurement

“If you cannot measure it, you cannot manage it.” – Peter Drucker, management consultant

Measurement is an essential step towards effective management. People know this, and they use sets of metrics to measure projects. Compared with elements that are hard to measure, such as qualitative information, they tend to focus on quantitative information that can be measured easily. However, the easily measured elements are not necessarily as important as the harder ones in terms of their contribution to project evaluation.

The same applies to software development. When we measure a software project, we first need a quantifiable metric, and the most intuitive answer is to count the code itself, line by line. In fact, lines of code (LOC) are widely used to measure the size of software and sometimes to evaluate the productivity of developers [1]. Before we use this metric, however, we should take a minute to think carefully about whether it is a good one.

Is it an easy thing to count code lines?

If we use LOC to measure a software project, we first need to count how many lines of code there are. This seems easy: just count the lines of each file or each module. The question is how to count precisely. In a large-scale software project, code is modified as requirements change or bugs are found, and this may happen during the development stage or even after the project is released. Which pieces of code were written in the development stage? Which were added or deleted while dealing with bugs? One might suggest simple arithmetic: count how many lines were changed, how many were newly added, and sum them. But “changed code” arises because developers add new functionality or fix bugs, so it is difficult to count exactly how many lines of code there are, especially for those not intimately familiar with the project.

Another problem with LOC…

There are several methods for calculating how many lines of code there are: counting executable lines only, counting data declarations, counting all non-blank lines, and so on. Different corporations, or even different teams, might not use the same counting rule. It is meaningless to compare the size of projects or the productivity of developers using LOC figures calculated to different standards.

Okay, assume we can count code lines precisely and that everyone uses the same counting method.

With a precise number of lines, companies could use it to evaluate the productivity of a programmer. Banker and Kauffman proposed a formula for computing productivity after their research on software life cycle productivity [2].

Productivity = (Size of application developed) / (Labour consumed during development)

However, in real life developers might be working on two projects at the same time; some of them might be responsible for only half, or a quarter, of a whole project. For these developers, the productivity calculated with the formula above would be much lower than that of colleagues who work full time on one project, but we cannot simply conclude that they are less productive.

Another fact to note is that there are thousands of programming languages. When LOC was first introduced as a metric, the most commonly used languages were FORTRAN, COBOL and assembly language, for which counting lines is straightforward. Later, high-level languages such as Java and C++ appeared, in which a single statement generally corresponds to many machine instructions; a high-level language may therefore implement a function in far fewer lines of code. It is likewise not meaningful to compare the number of lines of code developed using structured languages versus object-oriented techniques [3]. For example, a system could be implemented in 6000 lines of assembly code or 3000 lines of C++ code. Suppose the assembly developer writes 600 lines of code per month while the C++ developer writes 300: the former appears more productive, yet both would finish the system above in the same amount of time. Powerful, expressive languages make their developers look less productive in LOC terms, so we cannot decide which developer has higher productivity simply from these numbers.

What about clients? Is LOC meaningful to them?

Some corporations, especially multinational ones, prefer to outsource different parts of a system, such as design, coding and testing, to different organisations. Clients care about functionality [4]: only the business value brought by the functions is meaningful, and LOC cannot evaluate how valuable a function is. More code does not necessarily mean the function is more robust or the business value larger; on the contrary, it may indicate code redundancy. Besides, standard template libraries, class libraries, software development kits and visual programming tools are widely used today, so many pieces of code are generated automatically; the code base may also contain automatically generated configuration scripts and user profiles. For the client, then, measuring a project by LOC is meaningless.

Conclusion

Every coin has two sides, and so it is with the metrics used in software measurement. On one hand, metrics can be used to control and adjust development processes. On the other, developers may pay more attention to the elements that are easily measured while ignoring elements that are important but hard or impossible to measure. LOC is the most intuitive metric for measuring both the size of a project and the productivity of its developers, but it comes with several problems: LOC is difficult to count consistently, there is no agreed counting standard, and LOC depends on the programming language.

To achieve effective measurement, we should ask ourselves what the final goal of the project is. Is it to implement functions on time and within budget, to deploy the product at scale and serve a large number of clients, or something else? Based on the answer, we need to decide which metrics should measure parts of the project, which should measure the whole project, which serve short-term tracking, and which serve the long-term goal. LOC is definitely not a good one, but what about the others? The world changes rapidly; are other metrics always good ones? Probably not, and these are all things we need to explore.

References

[1] Clapp, J. (1993). Getting started on software metrics. IEEE Software, 10(1), 108-109.

[2] Sudhakar, P. G., Farooq, A., & Patnaik, S. (2012). Measuring productivity of software development teams. Serbian Journal of Management, 7(1), 65-75.

[3] Boegh, J., Depanfilis, S., Kitchenham, B., & Pasquini, A. (1999). A method for software quality planning, control, and evaluation. IEEE Software, 16(2), 69-77.

[4] Jeffery, R., Curtis, B., & Metrics, T. (1997). Status report on software measurement.

Earned Value Management is not a mathematical game

Introduction

During the development of a large software project, the project management team has to constantly track the implementation of its plan and calculate the deviation from its schedule and budget. A good progress-tracking method makes development risk far easier to control. Earned Value Management (EVM) is one such method: it considers the scope, cost and schedule of a project together (Marshall, 2007) [1], and it helps project managers to evaluate project performance and its developing trend.

In general, the EVM principle is suitable for any project in any industry. However, software projects are special: their requirements and design are difficult to pin down, while their budget and schedule cannot be changed frequently. These characteristics increase the complexity of implementing EVM, and as a result the research and application of EVM can appear limited, unclear, abstract and inscrutable (Scheid, 2013) [2]. This article analyses EVM in a relaxed, simple and practical way.

To begin with a story

The following story might help readers understand the basics of EVM.

Assume the requirement was to cut down 800 trees within 10 days, and the available resource was a woodcutter who could fell 10 trees per hour. The project manager drew up a plan: the woodcutter would work 8 hours per day at £‎10 per working hour. In short, 80 trees would be felled every day, the total duration would be 10 days and the cost would be £‎800.

According to the analysis, even the simplest plan requires the consideration of these factors:

  • The number and the ability of employees. (1 woodcutter, 10 trees per hour)
  • The daily working hours. (8 hours per day)
  • The daily situation of project completion. (80 trees per day)
  • The total duration and cost of the project. (10 days, £‎800)

After planning, the project began. The actual situation over the first three days might look like this.
[DAY1] The woodcutter spent 8 hours to cut down 80 trees.

[DAY2] The woodcutter spent 8 hours to cut down 100 trees.
[DAY3] The woodcutter was really tired, he spent 8 hours to cut down 60 trees.

In particular, project tracking is concerned with the following:

  • What is the difference between the planned time and the actual working time?
  • What is the difference between the planned progress and the actual progress?

For example:
[DAY1] The actual progress completely meets the plan.

Completion rate: 100%.
[DAY2] The actual duration is the same as planned, but the task is over-fulfilled.
Completion rate: 125%.
[DAY3] The actual duration is consistent with the plan, but the task is unfinished.
Completion rate: 75%.

Unfortunately, the following progress is not optimistic.
[DAY4] The woodcutter was sick. He only worked 4 hours and cut down 20 trees.
[DAY5] The progress was more terrible. The woodcutter worked 4 hours and cut 10 trees.
[DAY6, 7, 8] The woodcutter applied for sick leave.
[DAY9] The woodcutter returned to work. He worked 8 hours and cut down 80 trees.
[DAY10] The project is seriously behind schedule, so the manager hired two new woodcutters. Three employees worked 12 hours each (4 of them overtime, at double pay) and cut down 360 trees.
[Final Result] 710 trees were cut and the total cost was £880. The project was not completed by the deadline, and it was over budget.

In general, pay can be determined in two different ways.

  • Based on the working time. (£10 per hour)
  • Based on the working progress. (£1 per tree)

Undoubtedly, a wise manager would avoid the first option, since paying purely for time gives little incentive for efficiency, whereas paying by progress directly controls the total cost. Nevertheless, the progress of software projects is hard to measure, so salaries cannot be calculated from it; most software companies pay by the working hour, which is why working time is the salary standard in this story.

Project tracking, then, is a status measurement of the project's milestones, tasks and activities [3]. Managers have to continuously monitor completion status by collecting and analysing statistical data, and this is where the benefits of EVM stand out.

Three Basic Elements: PV, AC, EV

Based on the story, project tracking must focus on three points.

  • What the plan says should have been completed by now.
  • What has actually been spent so far, in time and cost.
  • What has actually been completed.

However, neither the customers nor the boss pays attention to whether employees are sick or whether the programmers work overtime. In general, customers only care about whether the requirements can be delivered on time, while the boss cares most about the benefit the project achieves. For that reason, the three basic elements of EVM use money as their quantitative unit. In other words, PV, AC and EV respectively quantify the three points above [4].

Based on the felling story:
The total budget, known as the Budget At Completion (BAC), is 8 hours * 10 days * £‎10 = £‎800. Four days in, the Planned Value (PV), that is, the budgeted cost of the work scheduled so far, is 8 hours * 4 days * £‎10 = £‎320.
The actual work of the woodcutter over those four days is 8 hours + 8 hours + 8 hours + 4 hours = 28 hours, so the Actual Cost (AC) is 28 hours * £‎10 = £‎280.
Finally, since the task is to cut down 800 trees and only 80 trees + 100 trees + 60 trees + 20 trees = 260 trees have been cut, the completion rate is 260 trees / 800 trees = 32.5% and the Earned Value (EV) is 32.5% * £800 = £260.

In summary, PV reflects the planned work, AC the actual investment, and EV the actual completion of tasks [4]. On the other hand, money is not always a suitable unit of measurement, because every employee has their own salary, which is usually confidential.

There are two ways to solve this issue.

  • Comprehensive cost

Pool the various costs, such as utility bills, site fees and the salaries of all employees, and divide the total equally among the staff directly involved in the project's development.
For example, suppose a company pays the following costs every day: £50 for utilities, £150 for the site, £800 for salaries and £600 for everything else, and 9 programmers and a manager work 8 hours per day on the project. The comprehensive cost is then (£50 + £150 + £800 + £600) / 10 people / 8 hours = £20 per hour.

  • Working hours

Express PV, AC and EV directly in working hours, without converting them into costs.

Moreover, a project comprises several tasks, each with its own PV, AC and EV values. All three start at zero and accumulate as the project progresses.
The following figure shows four different situations of task completion:

[Figure: PV, AC and EV curves for four task-completion situations]

Four Measurement Values: CV, SV, CPI, SPI

These measurement indicators reveal the cost and schedule trends of the project [4].

  • Cost Variance: CV = EV – AC
  • Schedule Variance: SV = EV – PV

A result of zero means progress is exactly consistent with the plan. A positive result indicates that the cost is under budget (CV) or that actual progress is ahead of schedule (SV); a negative result indicates that the cost is over budget (CV) or that progress is behind schedule (SV).

  • Cost Performance Indicator: CPI = EV / AC
  • Schedule Performance Indicator: SPI = EV / PV

A result of one means progress is exactly consistent with the plan. A value greater than 1 indicates that the cost is under budget (CPI) or that actual progress is ahead of schedule (SPI); a value smaller than 1 indicates that the cost is over budget (CPI) or that progress is behind schedule (SPI).

In simple terms, the higher the numbers are, the better the results are.

For example (still based on the felling story):
The PV, AC and EV were worked out in the previous section: four days in, PV = £320, AC = £280 and EV = £260. The four measurement indicators can therefore be calculated:
CV = £‎260 – £‎280 = –£‎20
SV = £‎260 – £‎320 = –£‎60
CPI = £‎260 / £‎280 ≈ 0.93
SPI = £‎260 / £‎320 ≈ 0.81
Therefore, after four days, the project was over budget and behind schedule.

Estimate At Completion (EAC)

Undoubtedly, every stakeholder wants to know the final cost of a project. The EAC predicts the completion cost during the development process.

The estimation formulas of EAC are [4]:
EAC = AC + Future Cost
Future Cost = Unfinished Work / CPI
Unfinished Work = Budget at Completion (BAC) – EV
The formula simplification is:
EAC = AC + (BAC – EV) / CPI = AC + (BAC – EV) / (EV / AC) = BAC * AC / EV

Thus, if the current CPI is maintained, the project's final cost will equal this EAC. To keep the project within budget, managers should therefore aim to keep the CPI at or above 1 [5].

For example (again, the felling story):
After the first four days, AC = £280, EV = £260 and BAC = £800, so the final cost of the project can be estimated: EAC = £800 * £280 / £260 ≈ £923... in fact ≈ £862. Hence, the manager will face the discontent of his boss over the additional cost.

Idealised situation

If the poor manager in the story had known the EVM method, he might have predicted the final situation before day 5 and found an appropriate risk response.
The ending of the story might then have changed.
[DAY5] First, send the woodcutter home to rest. Second, look for new employees. Third, change the project plan and add an incentive (if a worker's daily progress exceeds 80 trees, a £5 bonus is paid for every 10 additional trees).
[DAY6] Two new woodcutters were hired; they worked 8 hours and cut down 240 trees. The additional bonus was £40.
[DAY7] They worked 8 hours and cut down 240 trees. The additional bonus was £40.
[DAY8] They worked 2 hours and cut down 60 trees.
[Final Result] The project was completed and it could be delivered to the customer two days before the deadline. The total cost of the project was £720, which was £80 lower than the budget.

Actual situation

The discussion above suggests that EVM can quantify and visualise project management, which tends to make every beginner extremely excited. Nevertheless, actual results and practical effects rarely live up to these positive ideas.

The common problem situations are as follows.

  • Incorrect project planning or terrible project tracking

Numerous software projects lack both complete records and a detailed plan, which means the three basic values (PV, AC and EV) cannot be computed. In that case, project tracking by EVM can never be achieved.

  • Lacking an effective and simple method to collect the actual data

PV can be produced with a tool such as Microsoft Project, and EV can be calculated by updating the progress record. AC, however, is genuinely difficult to collect, because project members such as programmers cannot easily keep statistics of their actual work.

  • Software projects are special

Because the requirements of software projects are always changing, the project plan cannot be fully confirmed at the beginning, and the PV value must often be adjusted; AC and EV also fluctuate frequently. The measurement values (CV, SV, CPI, SPI and EAC) are therefore uncertain, and the benefit of EVM is not obvious.

  • Additional workload

The project plan will include a development plan, coding plan, testing plan, training plan and so on, each the responsibility of different employees. To track the PV, AC and EV values, managers first have to unify and quantify them all, which undoubtedly brings a large amount of additional work.

  • Require professional knowledge of software engineering

EVM also requires some professional knowledge of software engineering. For instance, the manager needs to know the work breakdown structure (WBS) [7], a hierarchical decomposition of all tasks, and the project main schedule (PMS) [8], usually shown as a Gantt chart relating tasks to progress. However, most software companies focus only on the coding ability of their programmers, which increases the difficulty of software engineering management.

Make Earned Value Management useful and practical

Back to the fundamental question: why use EVM? The discussion above suggests that the higher the measurement values, the better the situation. But is everything necessarily fine when CPI and SPI are around 1? Is it always favourable when CV and SV are greater than zero? In fact, these numbers cannot clarify all the issues. They may look positive and large while the project still runs into serious trouble; for example, if the plan misses a critical task, the PV value becomes meaningless. The real purpose of EVM is to compare the plan with its implementation, so a useful EVM requires a correct project plan [6].

Project success, then, tends to be determined by factors such as the ability and expertise of all members, the orderliness and preciseness of the whole development effort, and the complexity and novelty of every requirement. The essence of EVM is to understand the meaning of PV, AC and EV, rather than to pursue precise values calculated from the mathematical formulas. EVM is not a mathematical game: every calculated value must be supported by objective facts. Otherwise, honestly imprecise results are better than spuriously accurate values, and abstraction is better than quantification.

In conclusion, given the specific nature of software project development, the three basic EVM elements can be summarised and analysed as follows.

  • Planned Value
    Reducing the PV is the most effective route to project success. Since requirements are uncertain and changeable, design and coding progress cannot be fully determined in advance, so every clever idea from the programmers solves problems and decreases the workload. The use of advanced tools, software reuse and the choice of the right programming language also help.
  • Actual Cost
    An efficient working state reduces the AC. It can be improved by several means, such as incentives, skills training, technical meetings and expert advice.
  • Earned Value
    A clear completion standard for every task increases the EV. Managers should break large tasks down into several small ones and avoid long-duration tasks. Furthermore, the completion rate can be simplified to two states: completed (EV = 100%) or not completed (EV = 0%).

To end with the same story

If the manager in the felling story understood the real purpose of EVM, he would consider these issues at the start of the project.

  • How to reduce PV with clever methods. (Use a chainsaw as a new tool.)
  • How to decrease AC by improving work efficiency. (Provide training in chainsaw usage.)
  • How to improve EV by clearly defining the tasks in detail. (Arrange daily tasks.)

With these considerations, the story changes as follows:
[DAY1] Analysed the project requirements and made a plan; bought a chainsaw for £80; hired a woodcutter and spent an additional £20 on two hours of chainsaw training. Daily task: cut down 100 trees in 6 hours. Daily result: task complete.
[DAY2] Daily task: cut down 200 trees in 8 hours. Daily result: task complete.
[DAY3] The customers changed their requirement: 200 additional trees had to be cut down within the project duration. Daily task: cut down 220 trees in 8 hours. Daily result: task complete.
[DAY4] Daily task: cut down 240 trees in 8 hours. Daily result: task not complete; the woodcutter cut down 220 trees.
[DAY5] Daily task: cut down 220 trees in 8 hours. Daily result: task complete.
[DAY6] Daily task: cut down 40 trees in 2 hours, then spend 6 hours testing the wood of the 1000 trees. Daily result: task complete.
[Final Result] Although the requirement had changed, the project was completed in only 6 days, so it could be delivered four days before the deadline. The total cost was £10 * 8 hours * 6 days + £80 + £20 = £580, which was £420 below the revised budget of £1,000 (the original £800 plus £200 for the 200 additional trees at the planned rate of £1 per tree).

Summary

Through the simple, light-hearted felling story that runs throughout this article, EVM has been defined and analysed. In short, EVM is not merely a set of mathematical formulas and values; it plays a distinguished role in project tracking. This article has described EVM in a practical and comprehensive way, in order to give a positive impression to anyone interested in software engineering management. Without doubt, EVM is not complex, boring or useless: it is a method that can effectively track the progress of software development.

Bibliography

[1] Marshall, R.A. (2007), “The Contribution of Earned Value Management to Project Success on Contracted Efforts”, Journal of Contract Management, pp. 21–33, Summer 2007.
[2] Scheid, J. (2013), How Earned Value Management is Limited.
[3] JB Danforth Company, Project Tracking and Control.
[4] Nagrecha, S. (March 2002), “An Introduction to Earned Value Analysis”.
[5] Dummies.com, Earned Value Management Terms and Formulas for Project Managers.
[6] Sulaiman, T. & Smits, H. (2007), Measuring Integrated Progress on Agile Software Development Projects.
[7] Office of Management, Budget and Evaluation (June 2003), Work Breakdown Structure, U.S. Department of Energy.
[8] Glen, M. C. (1995), A Guide to Network Analysis.

Are We Secure In Our Software Life Cycle?

Software security is an oft-forgotten part of the software development life cycle, with the result that it is often left as an afterthought. The usual response is a penetrate-and-patch approach, in which security flaws are fixed by patching the live software as problems occur. This methodology is flawed: it leads to a stream of patches on released software for security holes that could have been resolved earlier, and at much lower cost, before release. [1]

Solution?

Gilliam et al [2] propose a solution to this, arguing that security should be an integral part of development and integrated into the software life cycle.

They advocate using a Software Security Assessment Instrument (SSAI), incorporated formally into the software life cycle, in an attempt to improve the overall security of the software. The SSAI is composed of a variety of components that catalogue, categorise and test the vulnerabilities and exposures that exist in the software, and pick out those that can be exploited.

Specifically, in this article [2] Gilliam discusses the Software Security Checklist (SSC), part of the SSAI, which helps organisations and system engineers integrate security into their software life cycle and allocate the software a risk-level rating, useful for reusable code. The SSC should also provide a security checklist for external release of the software.

Gilliam claims that improving the security of software requires “integrating security in the software life cycle… [as] an end-to-end process…”, and this is something I do not fully agree with. Using an SSAI and SSC at every stage of the development and maintenance life cycle places too heavy a burden on the developer; based on my own beliefs and experiences, I would argue for a less involved process instead.

Experience

During the summer, I had an internship at a large financial institution, working on producing corporate applications for iPhones and iPads. Naturally, due to the nature of the content/information being handled, security was an important part of my team’s work.

However, the use of a security checklist as part of a larger SSAI, as suggested by Gilliam et al, was not the approach that was taken, at least, not completely.

Instead, developers were left to work on the functionality of apps built on in-house APIs, already developed, that were known to handle data securely. This saved a great deal of time compared with carrying out the full process for each separate app's (or program's) life cycle, as suggested.

This approach is more efficient, as it frees developers to build functionality rather than work through a security checklist. The accuracy of checklist results is also doubtful, as items may be ticked without thorough investigation when deadlines loom. That is even worse than having insecure software, because management then believes the software is secure!

Get ready to scrum

The rise of agile development practices has come about through the realisation that the waterfall development model is fundamentally broken [3]. This means that the involved “end-to-end process” suggested by Gilliam is not well suited to the current environment.

I experienced this first-hand during my job, as my team was developing in an agile-like manner. I cannot see how such a security checklist, as part of an SSAI, could fit into an agile development style, except perhaps through consistent use on a daily or weekly basis.

Used in that way, I believe, it becomes a hindrance to development; developers will likely forget, or not bother, to carry it out, leaving it until the end of development, at which point it is little better than the current penetrate-and-patch approach.

Don’t worry, someone else will do it

Don’t get me wrong: I believe software should definitely be tested for security before release. However, I think this should be the task not solely of the developer but of an external party.

This belief is founded on my time at the company, where, before the release of an app, an external party was brought in to test the code for security faults and vulnerabilities. They carried out an intensive week of software testing that, in my mind, is a much more viable way of validating the security of programs. These teams were specialists in security vulnerabilities and, much like the SSAI, had specific tools (test harnesses) that probed the software.

Feedback from the tests would be relayed to the development team and changes would be implemented in the program. If the software proved to be far too unsecure the external party would be brought in again to run tests after major changes had been made to the software.

If this had been done in-house, tools realising the functionality of the SSAI would have had to be brought in and run by the developers of the software under test. That would probably prove more costly, in both money and hours, than bringing in an external company.

Don’t look at me, I’m the new guy!

Anyone who joins the team on a temporary basis (contractors, for example) would need to be brought up to speed on a large amount of security procedure if it were heavily embedded in the software life cycle. This takes away valuable time that could otherwise be spent utilising the programmer’s capabilities.

I felt that during my job I didn’t need to worry about how I was coding in terms of security, which I would have had to if the SSC had been in place. I would be fearful that every line I wrote was incorrect as I hadn’t dealt with secure programming before, whereas, in reality, I was much more relaxed and able to program to the best of my ability.

Smashing stacks

This year I have been taking part in the secure programming course, which aims to encourage us to build new software securely by using techniques and tools to find and avoid flaws found in existing software. [4]

The way this is normally achieved is through common sense, i.e. by not reproducing code that was found to be insecure, rather than through the formal approach described by Gilliam et al.

I think that this formal approach is perhaps an idealised attitude as to what should be happening and, in fact, for the majority of software life cycles, teams are more concerned with getting the bulk of the work done before focussing on how secure the product is.

But look, I’m secure!?

The security rating that could be provided by using an SSAI with an SSC could be very useful, as it would allow users of software to gauge how secure any data they input is and enable them to compare the security of similar products.

However, the consequences of this rating might not have the desired outcome. This is a similar problem to that which was seen before in the SAPM lectures [5], where companies would produce more features for their software in order to tick boxes, making it seem like they had the better product. However, in reality, the features weren’t desired by the users and only existed to make it appear like the software was better than its rivals, as it “ticked more boxes”.

Why does it matter?

But should we really care about software security in the software life cycle? I say yes, very much so.

As pointed out by Gilliam et al, several studies can be found showing that neglecting security in the software life cycle can have very negative impacts, both financially and in terms of image. [2]

They recommend that integrating security into the life cycle of the software can counteract this, but I disagree with them in terms of how much involvement it should have at each stage.

Their endorsement of an “end-to-end process” is not what I saw practised at an organisation heavily involved with secure programs, is poorly suited to the agile development style that is rising in popularity, and reflects an outdated, idealised view of how security can be integrated.

From my experience, I’ve decided that software security is best handled by external companies, who attack the software in order to identify weaknesses (ethical hacking). These can then be sealed / fixed with minimal effort (hopefully) and without the developers having to become experts at using security tools or looking for exploits.

In essence, leave security to the professionals.

References

[1] Gary McGraw & John Viega, ‘Introduction to Software Security: Penetrate and Patch Is Bad’, November 2001, http://www.informit.com/articles/article.aspx?p=23950&seqNum=7 [Accessed on: 4th February 2014].

[2] David P. Gilliam, Thomas L. Wolfe, Josef S. Sherif & Matt Bishop, “Software Security Checklist for the Software Life Cycle”, 2003, http://dl.acm.org/citation.cfm?id=939804 [Accessed on: 4th February 2014].

[3] Mark Baker, “Waterfall’s Demise and Agile’s Rise”, May 2012, http://www.modelmetrics.com/model-viewpoint/waterfalls-demise-agiles-rise/ [Accessed on: 5th February 2014].

[4] David Aspinall, “Secure Programming Lecture 1: Introduction”, January 2014, http://www.inf.ed.ac.uk/teaching/courses/sp/2013/lecs/intro.pdf [Accessed on: 6th February 2014].

[5] Allan Clark, “Introduction”, January 2014, http://www.inf.ed.ac.uk/teaching/courses/sapm/2013-2014/sapm-all.html#/15 [Accessed on: 6th February 2014].

Incentives Poison Extreme Programming Values

Agile methods such as extreme programming outline a set of principles and practices for developing software projects. They are ideal for small projects with volatile requirements as they favour flexibility and working software over rigorous planning and extensive documentation.

Reading over the list of rules for Extreme Programming, I couldn’t help but agree that many of the rules seemed like brilliant ways to ensure the quality and success of a project:

  • Having developers take collective ownership of the codebase
  • Integrating code with the main project build often
  • Writing unit tests before writing the code
  • Making frequent, small releases
  • Refactoring code whenever possible
  • …and many more.

What surprised me however was that on both the Extreme Programming site and the Agile Manifesto page, little is said about how to effectively get people to follow these principles, or about systems which might come into conflict with these principles. It is as if the authors believe that if they just pick the right set of principles, then implementing them in any context will be trivial.

My argument is that even if you intend to adhere to extreme programming’s principles, incentive structures put in place by the organization in which you work can have a strong impact on whether you follow them or not. By relating my personal experience of such external incentives, I hope to convince you of just how powerful an influence they can have, and to make you more aware of how incentives may be motivating your behaviour in the future.

As part of the software development course at the University of Edinburgh, I was placed in a team of 12 other students with the task of creating an autonomous, football-playing robot in a single semester. The robot would use images of the pitch, received via Bluetooth from a web camera, to make tactical gameplay decisions. This was a project with vague requirements that would require a lot of rapid prototyping to succeed, so it seemed a perfect fit for the extreme programming methodology. We agreed to commit fully: pair programming, Trello cards for user stories, stand-up meetings, merciless refactoring, shared code ownership. Everything.

We soon found however that the assessment criteria would pressure us into abandoning many of these principles.

Incompatible, Mandatory Milestones

To ensure that teams were making tangible progress as the semester progressed, the course organizers set fortnightly milestones on which a percentage of each team’s total mark for the course was assessed. This seems like a good incentive to encourage students to work on the project, but two key features of the system led to negative behaviour:

  • The milestones were set by the course organisers at the beginning of the semester and didn’t change as the semester progressed.
  • Regardless of other functionality your team’s robot had, if it couldn’t pass the milestone, then the entire team got zero marks.

This created a conflict between following our Extreme Programming principles and achieving the highest course mark:

In principle we would like to write unit tests for all our code, refactor old modules and ensure overall system quality by following a coding standard.

In reality, if we can’t hack together some code to make our robot intercept an oncoming ball by Friday, we all lose 5% of our mark.

When pushed for time, we would always choose to hack together a façade of functionality to pass the current week’s milestone rather than follow solid development principles. We were rewarded for this behaviour with a higher course mark, but the code used for the milestone was often of such poor quality that it was of no use in the main project’s goal of building a quality football playing robot.

The conclusion we reached from this experience was that milestones are only useful if they correlate with the end goal of producing a quality final product. Being forced to meet arbitrary milestones often means developers have to choose between producing principled, quality code and meeting an arbitrary deadline.

Adding a competition element to frequent releases

Rather than fixed milestones, Extreme Programming advocates small, frequent releases which gradually display new functionality. Our robot football course actually had such a system in the form of fortnightly “friendly matches”.

Every two weeks, all teams on the course would take part in a friendly tournament, in which two opposing teams would use the current versions of their robots to play a game of one-on-one football. This ticks a lot of boxes in the extreme programming framework: it allows teams to demonstrate visible progress to their clients (the course markers) and gives teams a chance to get feedback on their systems by exposing their current design flaws.

Again however, an incentive was added which corrupted things a little. Instead of being consequence-free matches, the position in which you finished in the previous fortnight’s tournament “seeded” you for the next one. In other words, if your robot finished in the top 3, you were allowed to “skip” the first round of the next tournament. Additionally, your robot’s position in the final tournament contributed a significant portion of your course mark, so being able to skip a round in this tournament was a massive reward.

To understand why this incentive is so insidious, it is important to recognise that the value of showcasing frequent iterations under the extreme programming methodology is to expose flaws in your system and get the necessary feedback to improve them. This incentive warps this goal so that the value chiefly lies in beating as many teams as possible. Teams would often accomplish this by explicitly coding gameplay strategies which they knew the other teams couldn’t handle at their current stage of development instead of working on the problems in their final overall system. In this environment, it is optimal to hide your system’s flaws rather than expose them.

Ranking Group Participation

Even without external influence, enforcing extreme programming principles such as pair programming and shared code ownership requires willpower. In a disciplined, trusting environment, it can certainly be done, but in an environment which forces you to consider your participation in the project as a competition against others in your team, it is nearly impossible.

In order to encourage participation from every member of a team, each team was subject to a weekly performance review, in which students were given a score grading their level of contribution; this made up a small percentage of their final course mark. A score of 5 meant you had contributed exceptionally, while a 1 meant your contribution was minimal. The key detail of this incentive system was that not everyone in the team was allowed to receive a high mark: students were ranked from those who contributed most to those who contributed least, and assigned scores accordingly.

This was perhaps the most poisonous incentive of all, as it encouraged us to think of our teammates not as collaborators, but as competitors. As a result, it made numerous extreme programming principles difficult to uphold:

  • Collective ownership of code – “Sharing code” now means others may be able to take credit for part of your contribution. The optimal strategy for a high rank is to take sole ownership of a module, so that you can say in the meeting that you were “entirely responsible for progress on the planning system this week”.
  • Code the unit tests first / Refactor whenever possible – Again, you are forced to consider what sounds better in a meeting: “I added functionality so that the robot can now kick while moving” or “I refactored some code which was already working so that it has a clearer structure”.
  • Integrate often – It now becomes advantageous to sometimes not integrate your work with the current build. This way, you can create the impression of substantial progress by describing all the new code you have just “not yet pushed to the main build”. If you were to integrate your work continually, there would be clear evidence of what you had and had not done.

Communication and trust are vital to upholding many agile practices, so incentives such as this which create an environment of distrust will suffocate such practices.

This system of “Stack Ranking” developers has become notorious in industry because of its use by many high profile companies. Many reports claim that such a system was responsible for the majority of problems at Microsoft over the last decade.

Conclusion

While agreeing to commit to an agile process such as Extreme Programming (or indeed any process) is a positive step towards a successful project, I have shown by relaying my own experiences that the mere choice of principles is not the only factor at play. Teams must be disciplined in adhering to their chosen principles and diligent in identifying structures which pressure them to do otherwise.

I hope I have emphasised just how powerful and poisonous such influences can be, and have given enough examples to encourage you to look out for similar systems which may influence you in the future.

This is a response, by Eshwar Ariyanathan (s1340691), to the article “(Re)Estimation: A brave man’s task” (Panos Stratis, January 27, 2014).

Below are my points of agreement and disagreement with the article.

Points of Agreement:

* I agree that a project manager has to take decisions that abide by company policy and upper management.
* I agree that adding new programmers late in the software life cycle increases complexity and time.
* I agree that project managers should be realistic in estimating the timeframe for the work to be done and in meeting deadlines.
* I agree that the project manager plays a very crucial role in resource allocation, timeframes and the completion of the project.

Points of Disagreement:

* I disagree that involving more programmers in a task makes the total time increase exponentially and that they never reach common ground. In my opinion, when more programmers are involved in a task the work gets split, making it easier to finish on time, and new ideas can emerge from any programmer in the team, which might help finish tasks quickly. For example, if an application has 100 functions and we have 10 programmers, each programmer has to write only 10 functions: the work is split and proceeds quickly.
* If programmers are working towards a common goal, then any argument raised will be for the betterment of the project rather than a source of ambiguity. Arguments raised by programmers during project discussions should therefore be viewed as a positive strategy, not as a waste of time.

The Anaemic Domain Model is no Anti-Pattern, it’s a SOLID design

Design Patterns, Anti-Patterns and the Anaemic Domain Model

In the context of Object-Oriented software engineering, a “design pattern” describes a frequently recurring and effective solution to a commonly encountered problem. The utility of formalising and sharing design patterns is to provide a set of “battle-tested” designs as idiomatic solutions for classes of problems, and to increase the shared vocabulary among software engineers working on collaboratively developed software. The term was coined in the seminal book by Gamma et al [5], which named and described a set of common design patterns. The lexicon of design patterns grew from the initial set specified in the book, as the notion gained in popularity [6], [17].

Following the increasing popularity of design patterns as a concept, the idea of “design anti-patterns” entered popular discourse [7][8]. As implied by the name, an anti-pattern is the opposite of a pattern; while it too describes a recurring solution to a commonly encountered problem, the solution is typically dysfunctional or ineffective, and has negative impacts on the “health” of the software (in terms of maintainability, extensibility, robustness, etc.). Anti-patterns serve a similar purpose to patterns; the description of the anti-pattern might illustrate a typical implementation of the anti-pattern, explain the context it generally occurs in, and show how the implementation results in problems for the software.

A potential problem with the concept of a design anti-pattern is that it might discourage critical thought about the applicability of the pattern. A design that may be inappropriate in some contexts may be a sensible decision in others; a solution might be discarded after being recognised as an anti-pattern, even though it would be a good fit for the problem at hand.

I contend that the Anaemic Domain Model (ADM), described by Martin Fowler [1] and Eric Evans [2], is such an anti-pattern. The ADM is considered by these authors to be a failure to model a solution in an Object-Oriented manner, relying instead on a procedural design to express business logic. This approach is contrasted with the Rich Domain Model (RDM) [1], [20], in which classes representing domain entities encapsulate all business logic and data. While the ADM may certainly be a poor design choice in some systems, it is not obvious that this is the case for all systems. In this blog post I will consider the arguments against the ADM, and contend that in some scenarios the ADM appears to be a reasonable design choice in terms of adherence to the SOLID principles of Object-Oriented design established by Robert Martin [3], [4]. The SOLID principles are guidelines which seek to balance implementation simplicity, scalability, and robustness. Specifically, by contrasting an ADM design with an RDM design for a hypothetical problem, I will attempt to show that the ADM is a better fit for the SOLID principles than the RDM solution. By doing so, I hope to demonstrate a contradiction in the received wisdom regarding this anti-pattern, and consequently to suggest that implementing an ADM is a viable architectural decision.

Why is the Anaemic Domain model considered by some to be an Anti-Pattern?

Fowler [1] and Evans [2] describe an ADM as consisting of a set of behaviour-free classes containing the business data required to model the domain. These classes typically contain little or no validation of the data against business rules; instead, business logic is implemented by a domain service layer. The domain service layer consists of a set of types and functions which process the domain models as dictated by business rules. The argument against this approach is that the data and methods are divorced, violating a fundamental principle of Object-Oriented design by removing the capability of the domain model to enforce its own invariants. In contrast, while an RDM consists of the same set of types containing necessary business data, the domain logic is also entirely resident on these domain entities, expressed as methods. The RDM then aligns well with the related concepts of encapsulation and information hiding; as Michael L. Scott states in [9], “Encapsulation mechanisms enable the programmer to group data and the subroutines that operate on them together in one place, and to hide irrelevant details from the users of an abstraction”.

In an RDM, the domain service layer is either extremely thin or non-existent [20], and all domain rules are implemented via domain models. The contention is that domain entities in an RDM are then entirely capable of enforcing their invariants, and therefore the system is sound from an Object-Oriented design perspective.

However, the capability of a domain entity to enforce local data constraints is only a single property in a set of desirable qualities in a system; while the ADM sacrifices this ability at the granularity of the individual domain entities, it does so in exchange for greater potential flexibility and maintainability of the overall implementation by allowing the domain logic to be implemented in dedicated classes (and exposed via interfaces). These benefits are particularly significant in statically typed languages such as Java and C# (where class behaviour cannot simply be modified at run-time) for improving the testability of the system by introducing “seams” [10], [11] to remove inappropriate coupling.

A Simple Example

Consider the back end of an e-commerce website in which a customer may purchase items, and offer items for sale to other customers across the globe. Purchasing an item reduces the purchaser’s funds. Consider the implementation of how a customer places a purchase order for an item. The domain rules state that the customer can only place an order if they have enough funds, and the item must be available in that customer’s region. In an RDM, a Customer class would represent the domain entity for the customer; it would encapsulate all the attributes for the customer, and present a method such as PurchaseItem(Item item). Like Customer, Item and Order are domain models, representing purchasable items and customer orders for items respectively. The implementation of the Customer (in pseudo-C#) might be something like:

/*** BEGIN RDM CODE ***/

class Customer : DomainEntity // Base class providing CRUD operations
{
    // Private data declared here

    public bool IsItemPurchasable(Item item) 
    {
        bool shippable = item.ShipsToRegion(this.Region);
        return this.Funds >= item.Cost && shippable;
    }

    public void PurchaseItem(Item item)
    {
        if(IsItemPurchasable(item))
        {
            Order order = new Order(this, item);
            order.Create(); // insert the new order record via the Active Record base class
            this.Funds -= item.Cost;
            this.Update(); // persist the customer's reduced funds
        }
    }
}

/*** END RDM CODE ***/

The domain entities here implement the Active Record pattern [17], exposing Create/Read/Update/Delete methods (from a framework/base class) to modify records in the persistence layer (e.g., a database). It can be assumed that the PurchaseItem method is invoked in the context of some externally managed persistence layer transaction (perhaps initiated by the HTTP request handler/controller, which has extracted a Customer and an Item from the request data). The role of the Customer domain entity in this RDM is then to model the business data, implement the business logic operating on that data, construct Order objects for purchases, and interface with the persistence layer via the Active Record methods; the model is Croesus-like in its richness, even in this trivial use case.
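
For concreteness, the hypothetical DomainEntity base class might look something like the following. This is a minimal sketch under assumed conventions (a table-per-entity persistence scheme and an integer primary key); the member names are illustrative, not taken from any particular framework.

/*** BEGIN HYPOTHETICAL BASE CLASS SKETCH ***/

abstract class DomainEntity
{
    // Identifies the entity's row in the persistence layer (assumed convention)
    public int Id { get; protected set; }

    // CRUD operations against some ambient persistence context
    public void Create() { /* INSERT a new row from this entity's fields */ }
    public void Read()   { /* SELECT the row for Id and populate the fields */ }
    public void Update() { /* UPDATE the row for Id with the current field values */ }
    public void Delete() { /* DELETE the row for Id */ }
}

/*** END HYPOTHETICAL BASE CLASS SKETCH ***/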

The following example demonstrates how the same functionality might be expressed using an ADM, in the same hypothetical context:

/*** BEGIN ADM CODE ***/

class Customer { /* Some public properties */ }
class Item { /* Some public properties */ }

class IsItemPurchasableService : IIsItemPurchasableService
{
    IItemShippingRegionService shipsToRegionService;

    // constructor initialises the service reference

    public bool IsItemPurchasable(Customer customer, Item item)
    {
        // the region service needs the customer's region to make its decision
        bool shippable = shipsToRegionService.ShipsToRegion(item, customer.Region);
        return customer.Funds >= item.Cost && shippable;
    }
}

class PurchaseService : IPurchaseService
{
    ICustomerRepository customers;
    IOrderFactory orderFactory;
    IOrderRepository orders;
    IIsItemPurchasableService isItemPurchasableService;

    // constructor initialises references

    public void PurchaseItem(Customer customer, Item item)
    {
        if(isItemPurchasableService.IsItemPurchasable(customer, item))
        {
            Order order = orderFactory.CreateOrder(customer, item);
            orders.Insert(order);
            customer.Funds -= item.Cost;
            customers.Update(customer);
        }
    }
}

/*** END ADM CODE ***/

Contrasting the example with respect to the SOLID Principles

At first glance, the ADM is arguably worse than the RDM; there are more classes involved, and the logic is spread out over two domain services (IPurchaseService and IIsItemPurchasableService) and a set of application services (IOrderFactory, ICustomerRepository and IOrderRepository) rather than resident in the domain model. The domain model classes no longer have behaviour, but instead just model the business data and allow unconstrained mutation (and therefore lose the ability to enforce their invariants!). Given these apparent weaknesses, how can this architecture possibly be better than the altogether more Object-Orientation-compliant RDM?

The reason that the Anaemic Domain Model is the superior choice for this use case follows from consideration of the SOLID principles, and their application to both of the architectures [12]. The ‘S’ refers to the Single Responsibility Principle [13], which suggests that a class should do one thing, and do it well, i.e., a class should implement a single abstraction. The ‘O’ refers to the Open/Closed Principle [14], a similar but subtly different notion that a class should be “open for extension, but closed for modification”; this means that, in so far as possible, classes should be written such that their implementation will not have to change, and that the impact of changes is minimised.

Superficially, the Customer class in the RDM appears to represent the single abstraction of a customer in the domain, but in reality this class is responsible for many things. The Customer class models the business data and the business logic as a single abstraction, even though the logic tends to change with higher frequency than the data. The Customer also constructs and initialises Order objects as a purchase is made, and contains the domain logic to determine if a customer can make an order. By providing CRUD operations through a base class, the Customer domain entity is also bound to the persistence model supported by this base implementation. By enumerating these responsibilities it is clear that even in this trivial example, the RDM Customer entity exhibits a poor separation of concerns.

The ADM, on the other hand, decomposes responsibilities such that each component presents a single abstraction. The domain data is modelled in “plain-old” language data structures [18], while the domain rules and infrastructural concerns (such as persistence and object construction) are encapsulated in their own services (and presented via abstract interfaces). As a consequence, coupling is reduced.

Contrasting the flexibility of the RDM and ADM architectures

Consider scenarios in which the RDM Customer class would have to be modified: a new field might be introduced (or the type of an existing field might need to change), the Order constructor may require an additional argument, the domain logic for purchasing an item may become more complex, or an alternative underlying persistence mechanism might be required which is unsupported by the hypothetical DomainEntity base class.

Alternatively, consider scenarios in which the ADM types must change. The domain entities which are responsible for modelling the business data will only need to be modified in response to a requirements change for the business data. If the domain rules determining whether an item is purchasable become more complex (e.g., an item is specified to only be sold to a customer above a certain “trust rating” threshold), only the implementation of IsItemPurchasableService must change, while in the RDM the Customer class would require changing to reflect this complexity. Should the ADM persistence requirements change, different implementations of the repository [17], [19] interfaces can be provided to the PurchaseService by the higher-level application services without requiring any changes; in the RDM, a base class change would impact all derived domain entities. Should the Order constructor require another argument, the IOrderFactory [5] implementation may be able to accommodate this change without any impact on the PurchaseService. In the ADM each class has a single responsibility and will only require modification if the specific domain rules (or infrastructural requirements) which concern the class are changed.
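
For instance, a persistence change in the ADM might amount to nothing more than supplying a different implementation of an existing repository interface. The following is a sketch under assumed interface members, not a definitive implementation:

/*** BEGIN HYPOTHETICAL REPOSITORY SKETCH ***/

interface IOrderRepository
{
    void Insert(Order order);
}

// Two interchangeable persistence strategies; PurchaseService depends only
// on IOrderRepository, so swapping one for the other requires no change to
// any domain service.
class SqlOrderRepository : IOrderRepository
{
    public void Insert(Order order) { /* write the order to a SQL database */ }
}

class DocumentStoreOrderRepository : IOrderRepository
{
    public void Insert(Order order) { /* write the order to a document store */ }
}

/*** END HYPOTHETICAL REPOSITORY SKETCH ***/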

Now suppose a new business requirement is added to support refunds for purchases with which a customer is unsatisfied. In the RDM, this might be implemented by adding a RefundItem method to the Customer domain entity, on the simplistic argument that domain logic related to the Customer belongs as a member function of the Customer domain entity. However, refunds are largely unrelated to purchases, for which the Customer domain entity is already responsible, further mixing the concerns of this type. It can be observed that in an RDM, domain entity classes can accumulate loosely related business logic and grow in complexity. In an ADM, the refund mechanism could be implemented by introducing a RefundService class, solely concerned with the domain logic for processing refunds. This class can depend on the narrow set of abstractions (i.e., interfaces of other domain and infrastructural services) required to implement its single concern. The new RefundService can be invoked at a high level (in response to some refund request), and this new domain behaviour has been implemented without impacting any of the existing functionality.
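
A sketch of what such a RefundService might look like follows; the refund logic, the IRefundService interface, and the repository members and properties used here are assumptions for illustration:

/*** BEGIN HYPOTHETICAL REFUND SERVICE SKETCH ***/

class RefundService : IRefundService
{
    ICustomerRepository customers;
    IOrderRepository orders;

    // constructor initialises references

    public void RefundOrder(Customer customer, Order order)
    {
        // Domain logic for refunds lives here, isolated from purchasing:
        // return the funds to the customer and persist both changes
        customer.Funds += order.Cost;
        customers.Update(customer);
        order.Refunded = true;
        orders.Update(order);
    }
}

/*** END HYPOTHETICAL REFUND SERVICE SKETCH ***/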

In the example, the ADM solves the problem of bundling unrelated concerns into the same module identified in the RDM by taking advantage of the ‘I’ and ‘D’ in SOLID, namely the Interface Segregation Principle [15] and the Dependency Inversion Principle [16]. These principles state that an interface should present a cohesive set of methods, and that these interfaces should be used to compose the application (i.e., the domain service layer in the ADM). The interface segregation principle tends to result in small narrowly focussed interfaces such as our IItemShippingRegionService and IIsItemPurchasableService, as well as abstract repository interfaces; the dependency inversion principle compels us to depend on these interfaces, to decouple the implementation of a service from the details of the implementation of another.
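
To make the segregated interfaces concrete, they might be declared along the following lines; the signatures (including the Region type for the customer’s region) are assumptions chosen to be consistent with the example above:

/*** BEGIN HYPOTHETICAL INTERFACE SKETCH ***/

interface IItemShippingRegionService
{
    // Does the item ship to the given customer region?
    bool ShipsToRegion(Item item, Region region);
}

interface IIsItemPurchasableService
{
    // Do the domain rules permit this customer to purchase this item?
    bool IsItemPurchasable(Customer customer, Item item);
}

interface IPurchaseService
{
    // Place an order for the item on behalf of the customer
    void PurchaseItem(Customer customer, Item item);
}

/*** END HYPOTHETICAL INTERFACE SKETCH ***/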

The Anaemic Domain Model better supports Automated Testing

As well as more flexible and malleable application composition, adoption of these principles gives the ADM an indirect benefit over the RDM: simpler automated testing. Highly cohesive, loosely coupled components which communicate via abstract interfaces and are composed via dependency injection allow for trivial mocking of dependencies. This means that in the ADM it is simple to construct a test scenario which might be much more complicated to construct in an RDM, so the maintainability of the automated tests is improved; automated testing therefore has a lower cost, and developers will be more inclined to create and maintain tests. To illustrate this, consider the example above, with unit tests to be written for IsItemPurchasableService.

The (current) domain rules for an item being purchasable are that the customer has sufficient funds, and is in a region that the item ships to. Consider writing a test that checks that when a customer has sufficient funds but is not in a shipping region for the item, the item is not purchasable. In the RDM this test might be written by constructing a Customer and an Item, configuring the customer to have more funds than the item costs, configuring the customer’s region to be outside the regions the item ships to, and asserting that the return value of customer.IsItemPurchasable(item) is false. However, the IsItemPurchasable method depends on the implementation details of the ShipsToRegion method of the Item domain entity, so a change to the domain logic in Item might change the result of the test. This is undesirable, as the test should exclusively exercise the logic of the customer’s IsItemPurchasable method; a separate test should cover the specifics of the item’s ShipsToRegion method. Because domain logic is expressed in the domain entity, and the concrete domain entity exposes the interface to that logic, implementations are tightly coupled, the effects of changes cascade, and automated tests become brittle.

The ADM, on the other hand, expresses the IsItemPurchasable domain logic on a dedicated service, which depends on an abstract interface (the ShipsToRegion method of IItemShippingRegionService). A stubbed, mock implementation of IItemShippingRegionService can be provided for this test, which simply always returns false in the ShipsToRegion method. By decoupling the implementations of the domain logic, each module is isolated from the others and is insulated from changes in the implementation of other modules. The practical benefits of this are that a logic change will likely only result in the breakage of tests which were explicitly asserting on the behaviour which has changed, which can be used to validate expectations about the code.
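
As a minimal sketch of what such a test might look like (NUnit-style attributes, an IsItemPurchasableService constructor taking its dependency, and public Funds/Cost properties on the entities are all assumptions consistent with the example):

/*** BEGIN HYPOTHETICAL UNIT TEST SKETCH ***/

class StubItemShippingRegionService : IItemShippingRegionService
{
    // Stub: pretend no item ever ships to the customer's region
    public bool ShipsToRegion(Item item, Region region) { return false; }
}

[TestFixture]
class IsItemPurchasableServiceTests
{
    [Test]
    public void ItemIsNotPurchasableOutsideShippingRegion()
    {
        var service = new IsItemPurchasableService(new StubItemShippingRegionService());
        var customer = new Customer { Funds = 100 };
        var item = new Item { Cost = 10 };

        // The funds check passes (100 >= 10), but the stubbed region check
        // fails, so the item must not be purchasable
        Assert.IsFalse(service.IsItemPurchasable(customer, item));
    }
}

/*** END HYPOTHETICAL UNIT TEST SKETCH ***/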

Refactoring the RDM to apply SOLID tends to result in an ADM

A proponent of the RDM architecture might claim that the hypothetical example provided is not representative of a true RDM. It might be suggested that a well-implemented Rich Domain Model would not mix persistence concerns with the domain entity, instead using Data Transfer Objects (DTOs) [17], [18] to interface with the persistence layer. The inclusion of directly invoking the Order constructor might be viewed as constructing a straw man to attack; of course no domain model implementation would bind itself directly to the constructor of another object; using a factory is just common sense [5]! However, this appears to be an argument for applying the SOLID principles to the application-level infrastructural services, while disregarding the SOLID principles for domain design. As the hypothetical RDM is refactored to apply the SOLID principles, more granular domain entities could be broken out; the Customer domain entity might be split into CustomerPurchase and CustomerRefund domain models. However, these new domain models may still depend on atomic domain rules which may change independently without otherwise affecting the domain entity, and which might be depended on by multiple domain entities; to avoid duplication and coupling, these domain rules could then be further factored out into their own modules and accessed via an abstract interface. The result is that as the hypothetical RDM is refactored to apply the SOLID principles, the architecture tends towards the ADM!

Conclusion

By exploring the implementation of a straightforward example, we have observed that an Anaemic Domain Model better adheres to the SOLID principles than a Rich Domain Model. The benefits of adherence to the SOLID principles in the context of domain design were considered, in terms of loose coupling and high cohesion and the resulting increased flexibility of the architecture; evidence of this flexibility was that testability improved, since stubbed test implementations of dependencies could be provided trivially. When we considered how the benefits of adherence to the SOLID principles might be gained in the RDM, the refactoring tended to result in an architecture resembling an ADM. If adherence to the SOLID principles is a property of well-engineered Object-Oriented programs, and an ADM adheres better to these principles than an RDM, then the ADM cannot be an anti-pattern, and should be considered a viable choice of architecture for domain modelling.

References

[1] Fowler, Martin. Anemic Domain Model. http://www.martinfowler.com/bliki/AnemicDomainModel.html, 2003.

[2] Evans, Eric. Domain-driven design: tackling complexity in the heart of software. Addison-Wesley Professional, 2004.

[3] Martin, Robert C. The Principles of Object-Oriented Design. http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod, 2005.

[4] Martin, Robert C. Design principles and design patterns. Object Mentor, 2000: 1-34.

[5] Gamma, Erich, et al. Design patterns: elements of reusable object-oriented software. Addison Wesley Publishing Company, 1994.

[6] Pree, Wolfgang. Design patterns for object-oriented software development. Addison-Wesley, 1994.

[7] Rising, Linda. The patterns handbook: techniques, strategies, and applications. Vol. 13. Cambridge University Press, 1998.

[8] Budgen, David. Software design. Pearson Education, 2003.

[9] Scott, Michael L. Programming language pragmatics. Morgan Kaufmann, 2000.

[10] Hevery, Miško. Writing Testable Code. http://googletesting.blogspot.co.uk/2008/08/by-miko-hevery-so-you-decided-to.html, Google Testing Blog, 2008.

[11] Osherove, Roy. The Art of Unit Testing: With Examples in .NET. Manning Publications Co., 2009.

[12] Martin, Robert C. Agile software development: principles, patterns, and practices. Prentice Hall PTR, 2003.

[13] Martin, Robert C. SRP: The Single Responsibility Principle. http://www.objectmentor.com/resources/articles/srp.pdf, Object Mentor, 1996.

[14] Martin, Robert C. The Open-Closed Principle. http://www.objectmentor.com/resources/articles/ocp.pdf, Object Mentor, 1996.

[15] Martin, Robert C. The Interface Segregation Principle. http://www.objectmentor.com/resources/articles/isp.pdf, Object Mentor, 1996.

[16] Martin, Robert C. The Dependency Inversion Principle, http://www.objectmentor.com/resources/articles/dip.pdf, Object Mentor, 1996.

[17] Fowler, Martin. Patterns of enterprise application architecture. Addison-Wesley Longman Publishing Co., Inc., 2002.

[18] Fowler, Martin. Data Transfer Object. http://martinfowler.com/eaaCatalog/dataTransferObject.html, Martin Fowler site, 2002.

[19] Fowler, Martin. Repository. http://martinfowler.com/eaaCatalog/repository.html, Martin Fowler site, 2002.

[20] Fowler, Martin. Domain Model. http://martinfowler.com/eaaCatalog/domainModel.html, Martin Fowler site, 2002.

Choosing the right tools for the job.

As a result of software development being a fairly new undertaking, no one methodology has yet been able to claim the throne and be the one methodology to rule them all. In fact, there are many methodologies in use, with new ones appearing frequently enough to cause shifts in the ways we develop software. One can at least discern ’categories’ to which these methodologies belong, agile vs. heavyweight [1], each of which has its distinctive characteristics.

Before presenting an overview of these characteristics, it needs to be stated that this post assumes that a methodology is needed for any kind of large-scale software project. That being said, it seems as if choosing a methodology is as much a stage in the development process as the development of the software itself, but more on that in a while.

The key characteristics 

There are many agile methods to choose from, such as Scrum [2] and Extreme Programming [3]. Boehm [4] points out some characteristics common to them all. He notes that the lack of documentation is made up for by implicit developer knowledge and that, therefore, agile methods may require a group of more experienced developers. Instead of adhering to a set of processes and tools, agile methods emphasize the individuals working on a project, as well as the project itself being amenable to change during its duration. The last characteristic explains the name, as there is inherent agility in agile methods.

Heavyweight methodologies, on the other hand, place emphasis on documentation, standardized processes and capturing the requirements correctly from the get-go. This is often done through the use of modeling languages such as UML. Boehm highlights the fact that such projects emphasize efficiency, predictability and a process with a clear goal, in which one matures a software product for its release [4]. Instead of agility, developers get the discipline inherent in such standardization.

So, which methodology should we opt for? 

Supporters at both ends of the spectrum like to point out the flaws of the other methodology. However, Boehm suggests that one should try to balance agility with discipline, especially in sectors with companies needing both ”rapid value” and ”high assurance” [4]. This statement highlights what should be a top priority for developers, namely, what does the customer need?

Heavyweight methodologies are suitable when companies desire low risk and when rapid development is not the first priority. This could, for instance, apply to ATMs, power plant controllers, ERP systems or other similar large-scale enterprise software. The constraints are, in most cases, clearer, leading to less ambiguity in the requirements. In the first two examples, the constraints can even outline some of the requirements.

Another factor can be the nature of development within a certain sector, where a specific system can function for many years and therefore might not need rapid and continuous development, as opposed to, say, web development. In web development, new technologies and development trends make rapid prototyping and development necessary, since changes are frequent. In the large-scale examples mentioned, the internal and external factors affecting a given project are conceived to change more rarely than in settings that are “plagued” by change.

Agile methods, on the other hand, can work in those change-plagued environments, where high risks are accepted and where developers can work with cutting-edge technologies for rapid prototyping, such as Rails, node.js etc., to develop commercial apps. They might therefore be more suitable for startups and small companies with fast development cycles. Requirements can be vague, or even partly unknown, and then discovered during development. In such environments, small teams can be more productive by not dealing with bureaucracy and standardized processes.

An example could be a new technology allowing for a new type of product. At this stage, or in fact at any stage, the priority is not to perfect a product, but rather to rapidly deploy a functioning product and then re-iterate within a continuous development and deployment cycle, enabling rapid growth of the company in question [5].

Concluding remarks

In essence, I have argued that different methodologies have their time and place and that one should select the right tools for the job, the right methodology for the setting. Boehm may well be right in saying that one can even combine the two approaches in some settings. In any case, one needs to analyze the context; what is being developed and for whom? The choices developers make are not made in a vacuum and thus have to be made with the context taken into consideration.

Developers should be comfortable using different approaches for different projects, and this is true not only for methodologies, but also for programming languages and APIs. In that sense, with regard to the comment made at the beginning of this post, the choice of methods and tools for a project is as much a part of the project as the actual development.

There is a need for pragmatism and objectivity, which is easier said than done, since many will presumably fall back on what they are comfortable with, even if another method is better suited to the task. Therein lies the danger: the zealotry of developers, regardless of which methodology one happens to favor.

So how does one know which methods and tools are the right ones? Well, as in any project, some estimation is necessary. If one is to trust the thoughts presented here, one could make a choice based on an analysis of the context of the project, e.g. constraints, requirements, the risk of potential changes during the course of the project and so on. This of course introduces further problems, such as the difficulty of estimating risk, but thoughts on that are best left to a post of their own.

References

[1] A. Clark. ”Software Development Methodologies”. Lecture slides, University of Edinburgh. Feb 2014 [Online]. Available: http://www.inf.ed.ac.uk/teaching/courses/sapm/2013-2014/sapm-all.html#/Methodologies_Lecture_Start

[2] K. Schwaber, J. Sutherland. ”The Scrum Guide”. Scrum.org. Jul 2013 [Online]. Available: https://www.scrum.org/Portals/0/Documents/Scrum%20Guides/2013/Scrum-Guide.pdf

[3] T. Parr. ”Object-Oriented Software Development”. Lecture notes, University of San Francisco. Jan 2009 [Online]. Available: http://www.cs.usfca.edu/~parrt/course/601/lectures/xp.html

[4] B. Boehm, ”Get Ready for Agile Methods, with Care”, IEEE Computer, vol. 35, no. 1, pp. 64-69, Jan 2002

[5] P. Graham. “Startup = Growth”. paulgraham.com. Sep 2012 [Online]. Available: http://paulgraham.com/growth.html

Continuous Delivery: An Easy Must-Have for Agile Development

Introduction

Everybody working in software development has heard them when software quality assurance comes up: terms that begin with “Continuous” and end with “Integration”, “Build”, “Testing”, “Delivery” or “Inspection”, to name a few examples. The differences between these terms are sometimes hard to tell, and the meanings vary depending on who uses them. In this post, the easy implementation of Continuous Delivery is discussed.

For clarification, Continuous Delivery is defined as described by Humble and Farley in their book “Continuous Delivery” [1]. In this highly recommendable book, a variety of techniques (including all the other terms mentioned in the previous paragraph) for continuously assuring software quality are described. Adopting these techniques does not require much effort or experience and should be done in every software project. Especially in large-scale software projects, this technique helps to maintain high software quality.

Errors First Discovered by the Customer

In a software project with a lot of engineers working on the same code base, unexpected side effects of source code changes are very likely to result in erroneous software. If there are automated unit tests, most of these errors are detected automatically. Unfortunately, however, there are some unexpected run-time side effects that only occur when the software is running on a particular operating system. In a normal development process, such errors are detected at the worst possible point: when the customer deploys or uses the software. This results in high expenses for fixing the issue urgently.

In order to prevent those kinds of errors, Continuous Delivery was developed. As Carl Caum from PuppetLabs describes it in a nutshell, Continuous Delivery does not mean that a software product is deployed continuously, but that it is proven to be ready for deployment at any time [2]. As described by Humble and Molesky [3], Continuous Delivery introduces automated deployment tests to achieve this goal of deployment-readiness at any time. This post focuses on those deployment tests, as they are the core of Continuous Delivery.

Implementing and Automating Continuous Delivery

To prove that software works in production, it needs to be deployed on a test system. This section explains how to implement such automated deployment tests.

Firstly, the introduction of a so-called DevOps culture is useful, meaning closer collaboration between software developers and operations staff [3]. Each developer should understand the basic operations tasks and vice versa, in order to build up sophisticated deployments. Even though [3] describes this step as necessary, from my point of view such a culture can be advantageous for Continuous Delivery but is not mandatory for succeeding: automated deployment tests can be developed without the help of operations, although it is certainly more difficult. More detailed information about DevOps can be found, for example, in the book “DevOps for Developers” by Michael Hüttermann [4].

Secondly, as Martin Fowler explains in a blog post [5], it is crucial to automate everything within the process of delivering software. The following example shows a simplified, ideal Continuous Delivery process:

  1. Developer John modifies product source code
  2. Test deployment is triggered automatically due to a change in the version control system
  3. Deployment is tested automatically, giving e-mail feedback to John that his source code breaks something in production
  4. John realizes he forgot to check in one file and fixes the error promptly
  5. Steps 2 and 3 repeat; this time John does not receive an e-mail, as the deployment tests no longer find any misbehaviour in the product.

Such a process can, for example, be automated completely with the software Jenkins [6] and its Deployment Pipeline Plugin. Detailed instructions for such a setup can be found in the blog post [7].

However, such a continuous process is not a replacement for other testing (unit testing etc.) but an addition to it: an additional layer of software quality assurance.

Steven Smith argues in his blog post [8] that introducing Continuous Delivery in an organisation requires radical organisational changes and is therefore difficult. I partly disagree, because it depends on the type of company. If a company uses old-fashioned, waterfall-like development methods, Smith is right on that point. For a company already developing software in an agile way, however, Continuous Delivery is little more than additional automated testing. It does not require people to change their habits in this case, as the developers are used to continuous testing methods. The only additional work is maintaining deployment scripts and writing deployment-specific tests.

Configuration Management Systems and Scripting

In order to perform deployment tests, scripts are needed for the automation. These scripts can be written in any scripting language, for example in Bash (shell scripts). However, there are more sophisticated approaches using so-called Configuration Management Systems such as Puppet [9] or Chef [10]. According to Adam Jacob’s contribution “Infrastructure as Code” to the book “Web Operations” [11], using a Configuration Management System’s scripting language has the following advantages:

Firstly, such deployment scripts are declarative: the programmer only describes what the system should look like after executing the script, without needing to describe in detail how this is to be achieved. Secondly, the scripts are idempotent: they only apply the modifications to the system that are actually necessary, and executing the same script on the same host always leads to the same state, regardless of how often it is run [11].
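
The flavour of a declarative, idempotent operation can be sketched in ordinary code. This toy illustration (deliberately not Puppet or Chef syntax, whose details differ) only shows the principle:

/*** BEGIN TOY IDEMPOTENCE SKETCH (C#) ***/

using System.IO;

static class ConfigurationSteps
{
    // Declares the desired state ("this directory exists") and only acts
    // when the system does not already match it; running this once or a
    // hundred times leaves the host in the same state.
    public static void EnsureDirectory(string path)
    {
        if (!Directory.Exists(path))
        {
            Directory.CreateDirectory(path);
        }
    }
}

/*** END TOY IDEMPOTENCE SKETCH (C#) ***/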

For these reasons, a Configuration Management System’s scripting facilities are superior to Bash scripting. Furthermore, they provide better readability, better maintainability and lower complexity compared to similar Bash scripts.

Conclusion

According to my experience in the software business, it is easy to introduce Continuous Delivery step by step into an agile-thinking company. The main things to focus on are the following: firstly, the implementation should be fully automated and integrated with the version control system. Secondly, a Configuration Management System is highly recommendable because it makes deployment scripting easier; furthermore, such scripts provide better maintainability, which saves resources.

The goals achieved by the implementation of Continuous Delivery are twofold: firstly, the release process is optimised, making it possible to release almost automatically. Secondly, developers get immediate feedback when the source code does not work in a production-like environment.

In conclusion, Continuous Delivery leads to considerably better software and can be introduced into an agile operating company without much effort.

References

[1] J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Pearson Education, 2010.
[2] C. Caum, “Continuous Delivery Vs. Continuous Deployment: What’s the Diff?,” 2013. [Online]. Available: http://puppetlabs.com/blog/continuous-delivery-vs-continuous-deployment-whats-diff [Accessed 2/2/2014].
[3] J. Humble and J. Molesky, “Why Enterprises Must Adopt Devops to Enable Continuous Delivery,” Cutter IT Journal, vol. 24, no. 8, p. 6, 2011.
[4] M. Hüttermann, DevOps for Developers, Apress, 2012.
[5] M. Fowler, “Continuous Delivery,” 2013. [Online]. Available: http://martinfowler.com/bliki/ContinuousDelivery.html [Accessed 2/2/2014].
[6] “Jenkins CI,” 2014. [Online]. Available: http://jenkins-ci.org/ [Accessed 2/2/2014].
[7] “Continuous Delivery Part 2: Implementing a Deployment Pipeline with Jenkins « Agitech Limited,” 2013. [Online]. Available: http://www.agitech.co.uk/implementing-a-deployment-pipeline-with-jenkins/ [Accessed 2/2/2014].
[8] S. Smith, “Always Agile · Build Continuous Delivery In,” 2013. [Online]. Available: http://www.stephen-smith.co.uk/build-continuous-delivery-in/ [Accessed 3/2/2014].
[9] “What is Puppet? | Puppet Labs,” 2014. [Online]. Available: http://puppetlabs.com/puppet/what-is-puppet [Accessed 2/2/2014].
[10] “Chef,” 2014. [Online]. Available: http://www.getchef.com/chef/ [Accessed 2/2/2014].
[11] A. Jacob, “Infrastructure as Code,” in Web Operations: Keeping the Data On Time, O’Reilly Media, 2010.

The Role of the Tester’s Knowledge in Exploratory Software Testing (by s1340691, Eshwar Ariyanathan)

Key Terms:

Exploratory testing (ET): an approach to software testing in which test design, test execution and learning happen simultaneously.

Grounded Theory: a research method in which theory is derived from the systematic analysis of data.

Verification & Validation: validation means checking that the product meets the customer’s needs; verification means checking that the product complies with its specification.

Test Oracle: a mechanism used to distinguish correct from incorrect results during software testing.

Paper: Juha Itkonen, Mika V. Mäntylä and Casper Lassenius, “The Role of the Tester’s Knowledge in Exploratory Software Testing”, IEEE Transactions on Software Engineering, IEEE Computer Society, 11 Sept. 2012.

PREVIOUS WORK:

Previous work shows that Exploratory Testing (ET) is widely used in the software industry, and there is growing evidence that industry testers see value in it. This increasing interest among testers paves the way for research questions in this domain.

RESEARCH QUESTIONS:

How does exploratory testing work, and why is it used?

What types of knowledge are used in exploratory testing?

How do testers apply that knowledge for testing purposes?

What types of failures does exploratory testing detect?

RESEARCH WORK:

A study was conducted in an industrial setting in which 12 testing sessions were video-recorded. The participating testers thought aloud while performing functional testing, and the researcher occasionally asked them for clarification about the testing process. Each testing session was followed by a 30-minute interview to discuss the results.

Grounded theory was applied to find out what the testers thought and what types of knowledge they utilised.

The paper discusses how the testers found failures by using their personal knowledge, without writing test case descriptions.

The knowledge used in the process was classified as domain knowledge, system knowledge and general software engineering knowledge. It was found that the testers used their knowledge as a test oracle to verify the correctness of results, and as a guide in selecting the objects of test design.

It was also found that a large number of failures, called windfall failures, were discovered outside the focus area of testing, through exploratory investigation.

The conclusion of the paper is that the approach used by exploratory testers clearly differs from the way the test-case-based paradigm works.

RESULTS:

A number of results emerged from this set of experiments:

* The testers were found to spot errors in the code based on their personal experience and knowledge, without writing test case descriptions.

* Personal knowledge was characterised as the combination of system knowledge, domain knowledge and general software engineering knowledge.

* The experiments confirmed that the testers actively applied this knowledge during testing.

* The failures found in the test process were mostly found incidentally, i.e. outside the area that was the focus of testing.

* Failures were classified according to the inputs or conditions that interact with them.

* Failures related to domain knowledge were straightforward to provoke.

* Failures related to system and software engineering knowledge were more difficult to provoke.

POINTS OF AGREEMENT:

* The research took a considerable amount of time and was carried out in industrial collaboration, employing only expert testers, so I agree with the results.

* The research used a substantial number of sessions (12) before drawing conclusions, so I agree with the results produced.

* 20% of the failures found were windfall failures, i.e. they were found incidentally through the testers’ knowledge. I agree that exploratory testing helps identify failures in the code and provides a new approach to finding defects.

* 45% of the inputs or conditions used in the process were found to provoke failures. I accept this figure, because a considerable amount of time and effort went into producing the results.

POINTS OF DISAGREEMENT:

* Although the experiments found that exploratory testers were fruitful in finding defects by means of their knowledge, the term “knowledge used in testing” needs to be defined more clearly. Important open questions remain: is this type of testing possible only for experts?

Is effective testing possible only if the tester has previous work experience?

The paper does not clearly explain how a novice could acquire this knowledge.

* Because neither the knowledge itself nor the process of attaining it is clearly defined by the researchers, I still wonder whether this approach to testing is technically feasible to implement.

* Moreover, exploratory testing cannot be used in industry as a replacement for existing testing approaches, as it would be very costly and requires experts with many years of experience.

* The experimental observations report that nearly 20% of the defects or failures were found incidentally, which is taken to prove that exploratory testing is useful. My argument is that no testing approach other than exploratory testing was used in the experiment, so naturally, with experts involved, these defects were found. Only a comparative study of exploratory testing against another testing approach could establish a clearly superior method.

* It should be noted that although exploratory testers have more knowledge, whether exploratory testing is sufficient to function independently as a testing method, given its costly nature (requiring experts, the time frame, etc.), remains a serious question.

CONCLUSION:

The research was conducted with experts and in a controlled manner, producing the necessary results, but it fails to answer the questions of how to define the knowledge of exploratory testers and how that knowledge can be gained.

Further research could be a comparative study of exploratory testing against prevalent testing methods, to identify its effectiveness.
