Agile and Critical Systems

Introduction

Whilst this author was researching his first article, one point came up time and again: Agile is not suitable for critical systems and is rarely, if ever, used in this area.  Countless experts made the point that the Waterfall methodology, with its emphasis on documentation, upfront planning and analysis, and defined phases, is a much better option.  This made the author curious.  What is it about Agile that makes it a bad option?  Is there anything that can be done to improve its chances of being taken seriously as a methodology for critical systems?

What is a critical system?

Before looking into how Agile could (or could not) be used, it is prudent to define exactly what a ‘critical system’ is.  A critical system, also known as a ‘life-critical’ or ‘safety-critical’ system, is a system in which failure is likely to result in loss of life or environmental damage.  Failure can be considered to include both catastrophic failure of the system and mere malfunctions.  Examples of critical systems include medical devices, nuclear reactors, air traffic control systems and the airbag system in a car.  Critical systems are employed in a wide range of fields, from the examples given in medicine, energy and transport to spaceflight and recreation.

What is meant by ‘Agile’?

‘Agile’ is generally used as a catch-all term for a particular family of software development methodologies.  These methodologies all use the Agile Manifesto as a starting point but interpret and implement its philosophy in differing ways:

We are uncovering better ways of developing software by doing it and helping others do it.  Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Agile methods include Extreme Programming (XP), Scrum and Crystal.  Whilst these methodologies implement the Agile Manifesto in different ways, they share the same underlying values of teamwork, quality, communication and adaptation.

Current practice in the development of critical systems

As a result of the potential consequences of a failure in a critical system, there is a requirement for very high reliability.  This requirement, and the need to demonstrate a high level of confidence, has led to regulatory standards becoming common within industries that utilise critical systems.  For example, the development of software within the aviation industry in Europe must adhere to the ED-12C standard from the European Organisation for Civil Aviation Equipment (EUROCAE) (referred to as DO-178C in the US).  Similarly, IEC 62304, from the International Electrotechnical Commission (IEC), governs the development of medical device software.  However, whilst these standards define strict frameworks for the development process, they are not prescriptive.  The organisations or teams involved in developing critical systems are free to choose their own methodology, so long as the activities and tasks specified by the standards are implemented.

This need to meet standards has led to heavyweight methodologies, such as Waterfall, dominating software development in critical systems.  These processes, with their focus on upfront analysis and design and defined phases, are viewed as better suited to satisfying such standards.  The standards also generally require that records are kept (to ensure traceability) and again, with their emphasis on documentation, heavyweight development processes are a natural choice.

In addition, the single, large implementation at the end of a heavyweight development process allows for safety certification to occur once on the completed critical system.

Why use Agile?

There is not an existing Agile methodology that could be safely used unaltered in the development of critical systems.  However, there is no reason why certain Agile practices could not be selected and incorporated into working practices.

Several of Agile’s principles, and their consequences, would be a natural fit in the development of critical systems.  Quality is the central factor in critical systems, so Agile’s focus on improved quality would only be an asset.

Agile’s focus on developing and testing iteratively means that problems are identified much earlier than would be the case in a heavyweight methodology.  As a result, the risk of defects in the end system, and their potential consequences, is reduced.  It also reduces the risk of human error in a long and complex testing phase at the end of the project.

The Agile practice of continuous integration also benefits critical systems development.  Integration occurs much earlier in the project lifecycle and with smaller, less complex components.  This gives immediate feedback to the developer and allows rapid corrective action to be taken.  Continuous integration also reduces the possibility of reaching the end of a project only to discover a fundamental issue with the software developed.

Any downsides?

Some Agile practices would have a detrimental impact on the development of safety-critical systems and would need to be discarded.

Firstly, documentation is a crucial element of critical systems development, so Agile’s focus on minimal documentation is not a good fit.  The certification system demands that each decision and design is ‘traceable’, which requires extensive documentation.  Documentation is also an important tool when maintaining critical systems, and their maintenance is almost as important as their initial development.

There is also the issue that the development of critical systems can take many years and it is likely that staff will move on and new staff will join the development team.  Again, documentation is crucial in dealing with this.

There remains a need to have a large part of the design completed upfront.  This is required in order to give fixed requirements so that certification and safety analysis can be carried out early in the project.  This would mean that a critical system project that utilised Agile methods would still require the dedicated upfront design period to produce architectural models and functional requirements.

Whilst iterations would be possible and add benefits, they would have to be changed slightly from traditional Agile iterations.  Each iteration would need to produce evidence that its output was fundamentally safe for certification purposes.  However, as the iterations in a critical systems project would not contain much, if any, design or analysis work, adding safety certification to the iteration’s acceptance criteria need not impact on their frequency.

Refactoring is another Agile practice that would not fit naturally with the development of critical systems.  If code in a critical system is refactored, it has the potential to invalidate previous certification or safety analysis.  This would cause extensive rework and would need to be avoided whenever possible.

Conclusion

Overall, despite the added complexity and demands of critical systems development, it appears that Agile methods could be adopted successfully by the industry.  If the practices are chosen carefully, the benefits would be tangible and, despite the concerns, would actually increase the chance of developing a stable and useful critical system.  The real challenge is in choosing the practices to adopt and creating an environment within the organisation that ensures the successful integration of these practices.

Bibliography

Sommerville, I.  (2007).  Software Engineering.  Harlow, England: Addison Wesley

Ge, X., Paige, R. F., and McDermid, J. A. (2010). An iterative approach for development of safety-critical software and safety arguments. In Proceedings of the 2010 Agile Conference, AGILE ’10, pages 35–43, Washington, DC, USA. IEEE Computer Society.

Sidky, A. and Arthur, J. (2007). Determining the applicability of agile practices to mission and life-critical systems. In Proceedings of the 31st IEEE Software Engineering Workshop, SEW ’07, pages 3–12, Washington, DC, USA. IEEE Computer Society.

Heimdahl, M. P. E. (2007). Safety and software intensive systems: Challenges old and new. In 2007 Future of Software Engineering, FOSE ’07, pages 137–152, Washington, DC, USA. IEEE Computer Society.

Lindvall, M., Basili, V. R., Boehm, B. W., Costa, P., Dangle, K., Shull, F., Tesoriero, R., Williams, L. A., and Zelkowitz, M. V. (2002). Empirical findings in agile methods. In Proceedings of the Second XP Universe and First Agile Universe Conference on Extreme Programming and Agile Methods – XP/Agile Universe 2002, pages 197–207, London, UK. Springer-Verlag.

Cawley, O., Wang, X., and Richardson, I. (2010). Lean/agile software development methodologies in regulated environments – state of the art. In Abrahamsson, P. and Oza, N. V., editors, LESS, volume 65 of Lecture Notes in Business Information Processing, pages 31–36. Springer.

Douglass, B.P., and Ekas, L. (2012). Adopting agile methods for safety-critical systems development.  IBM. http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=SA&subtype=WH&htmlfid=RAW14313USEN

Turk, D., France, R., and Rumpe, B. (2002). Limitations of agile software processes. In Proceedings of the Third International Conference on Extreme Programming and Flexible Processes in Software Engineering (XP2002), pages 43–46. Springer-Verlag.

Scripting languages and novice programmers – Response Article

This article is a response to “On scripting languages and rockstar programmers”.

Introduction

The original article describes scripting languages and makes some good points about how their use is an advantage when working with novice programmers. In my opinion, though, scripting languages are more often a dangerous tool in the hands of an inexperienced programmer than low-level languages are. Additionally, I would like to discuss the advantages of compilation over interpretation, as I think it is a very relevant and overlooked dimension of language choice in a project.

Scripting is easier right?

The author states that it is easier for a novice programmer to write efficient code in higher-level scripting languages. However, I find that a programmer needs deep knowledge and understanding of a scripting language before he is able to produce any truly efficient code in one. In such languages a single line of code might raise complexity by an order of magnitude, but a programmer who doesn’t know how each command is implemented under the hood won’t know why his software suddenly became slow. In contrast, lower-level languages, in which each line maps much more directly to machine code, are more straightforward to reason about and thus harder to make such mistakes in.
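To make this concrete, here is a small, hypothetical Python sketch (my own illustration, not from the original article): both approaches compute the same result, but the first hides a linear scan behind an innocent-looking membership test, making the loop quadratic overall.

```python
import random

haystack = [random.random() for _ in range(100_000)]
needles = haystack[:1_000]

# Innocent-looking one-liner: 'in' on a list is a linear scan,
# so this loop is O(n * m) overall and visibly slow.
hits = [x for x in needles if x in haystack]

# Knowing how 'in' works under the hood, a set gives O(1) lookups.
haystack_set = set(haystack)
hits_fast = [x for x in needles if x in haystack_set]
```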

In the article it is also argued that it is hard to write in a low-level language because programmers need to have experience with manual memory allocation and pointers. This is indeed the case with C, but nowadays C is not the only option for this kind of work. Many modern general-purpose languages are quite different: Java and C#, for example, take care of memory allocation and garbage collection automatically.

Scripting and not interpreted, how?

It is stated by the author that scripting languages are interpreted. While usually this is the case, the truth is that languages themselves can be both compiled and interpreted. A great example of this is Java [1]. We usually think of Java as a compiled language, but it can also be interpreted through the use of bsh (BeanShell). In fact, Java isn’t actually a compiled language in the same sense that C or C++ are: it is compiled into what is called bytecode and then interpreted by a JVM, which can do just-in-time (JIT) compilation to the native machine language. Many modern compiled languages are not completely machine-code based, and most interpreted languages are actually compiled into bytecode forms before execution. My point here is that the landscape of programming languages has evolved to such an extent that the compiled/interpreted categorisation of a language starts to become irrelevant. That being said, there is a valid question of whether compilation or interpretation, in general, is more suitable for a large-scale task, and I believe this is a very relevant extension to the language choice debate.
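Python, usually labelled an interpreted language, illustrates this nicely: CPython compiles a function to bytecode before the interpreter ever executes it, and the standard-library dis module will print that bytecode. A minimal sketch:

```python
import dis

def greet(name):
    return "Hello, " + name

# CPython has already compiled this function to bytecode;
# dis prints the instructions the interpreter will execute.
dis.dis(greet)
```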

But how is this interpretation – compilation thing relevant?

I do believe that the topic of compilation versus interpretation is important, and especially when considering large-scale projects I believe there are significant advantages to compiled code over interpreted. Firstly, native applications produced through compilation are more secure, as it is usually very difficult to recover source code from an executable [2]. This is a weakness of interpretation that one must take into account. Secondly, by using languages that are compiled we pay the cost of compilation only once and in turn get a fast and efficient executable. On the other hand, interpretation comes with a high execution cost [2], because the program needs to be parsed and interpreted every time it is run. Another disadvantage is that in large, complex projects identical code is likely to exist and will have to be interpreted and optimised repeatedly if an interpreted language is used. This might not make much difference in a small project, but might be what makes or breaks a product in a big one.

Conclusion

To conclude, there are cases where scripting languages are a better choice and other cases where system-level programming is preferable. Since we are discussing large-scale projects, however, I believe there are more advantages to be gained from lower-level languages and compilation than from scripting languages and interpretation. While it is true that, as languages evolve, the differences between the two models have become smaller, I find that it is still safer to use a low-level compiled language, even when having to deal with novice programmers on a given team. Moreover, in cases where a high-level language must be used, using compilation should remain a priority. When the level of the team is high, a combination of the two approaches would likely produce the best results and eliminate the disadvantages [3] of either method.

References

[1] http://stackoverflow.com/questions/3265357/compiled-vs-interpreted-languages

[2] http://www.codeproject.com/Articles/696764/Differences-between-compiled-and-Interpreted-Langu

[3] http://publib.boulder.ibm.com/infocenter/zos/basics/index.jsp?topic=/com.ibm.zos.zappldev/zappldev_85.htm

Response to “Hack and Slash” by s1371464

Introduction

This article is a response to Hack and Slash by s1371464.

The author of the article states that, based on his or her experience, it’s possible to save a late project by reducing the number of developers in the team, and claims that this is a preposterous but real proposition. I agree with the author’s proposition, but would like to clarify and explain the statements made, showing why the proposition is not preposterous at all.

Brooks’s law

The author states that the reasons for the project failing to meet its deadline were:

  • too many developers assigned to the project;
  • a large code base consisting of heavyweight components.

Both of these are simply the consequences of Brooks’s law, as presented by Frederick Brooks in the book “The Mythical Man-Month”. Brooks’s law can be summarised as: “adding manpower to a late software project makes it later” – that, although the work to be done can be split among the new developers, the complexity of coordinating and merging all of the work will slow the progress of the whole project down.

Too much manpower

This simple law also applies to the situation detailed in the original article. The planning and estimation for the project were done inadequately: there was too much manpower assigned to the project from the beginning. The law is supposed to be applied to projects which are late, and obviously the project mentioned in the author’s article could not be late on the first day of its existence, but over time the unneeded complexity of coordinating work among a large number of developers clearly caused the project to miss its deadline.

The main factor behind Brooks’s law is communication overhead. With more people working on the same project, it takes more time to figure out where your place in the project is and what others are working on. Clearly, this was slowing the project down, as cutting down the team greatly improved the outcome.
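Brooks quantifies this overhead: a team of n people has n(n-1)/2 potential communication channels, so coordination cost grows quadratically with team size. A quick illustrative sketch in Python:

```python
def channels(n):
    # Pairwise communication paths in a team of n people: n choose 2.
    return n * (n - 1) // 2

for n in (5, 10, 20, 50):
    print(f"{n} developers -> {channels(n)} channels")
# 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225
```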

Code base size

The other aspect discussed in the article that caused the project to miss its deadline was flaws in the project’s architecture: the software consisted of heavyweight components and the code base of the project was too large. This is also a consequence of communication overhead. As maintaining many inter-relationships across a large team is hard, developers tend to cluster into smaller groups which end up responsible for only a specific component of the project. Developers are well aware of what everybody is working on inside that cluster, and they believe they do not need to care about what is going on outside their own circle. Unless there are some pre-agreed notions about the architecture of the project, each of these developer clusters ends up building a very heavyweight component, to the extent that different teams sometimes end up with different implementations of the same functionality.

Conclusion

The problems that the author encountered are clearly the consequences of Brooks’s law. Projects with too many people assigned have a large communication overhead, which results in missed deadlines along with a flawed architecture. Since Brooks’s law is a well-known software development principle, I do not consider the author’s proposition to be preposterous, but just a simple observation.

Response to “Why Pair Programming Is The Best Development Practice?”

This is a response article to “Why Pair Programming Is The Best Development Practice?”.

Introduction

The author [1] did a very good job of describing pair programming and demonstrating its advantages and some disadvantages. However, the original article [1] was too optimistic about the benefits of pair programming, without taking into account some important problems that may arise.

In the following paragraphs we will discuss the advantages and disadvantages of pair programming, followed by a conclusion at the end. Most advantages are already mentioned in the original article, but in this response article they will be summarised to underline the most important arguments for pair programming. The main contribution of this response article is the discussion of disadvantages. Many negative aspects of pair programming are not mentioned in the original article, so there was a need to provide some extra arguments against pair programming in order to give a broader view of this programming technique.

Discussion about advantages

Most pair programming advantages are already mentioned in the article [1]. Pair programming can be a very good way to reduce errors and bugs in code, as two pairs of eyes are always more efficient than one. Moreover, two programmers can solve a demanding problem more easily [3], especially if they combine different skills and programming techniques. Teamwork ability, communication and cooperation skills are also improved by pair programming. Pair members learn how to work together under any circumstances and to find solutions to problems by consensus. This is very important, especially for large-scale projects where teamwork and cooperation between group members are essential; pair programmers already have an advantage when working on such projects, as they can handle the situation more easily. Finally, pair programming is an excellent way of exchanging knowledge and tutoring inexperienced programmers. A young programmer can benefit from programming alongside an experienced one, improving his skills much faster.

Discussion about disadvantages

Pair programming may be advantageous for many people, but not for everyone. Many people are more productive programming on their own than working in pairs, because they do not like to think out loud, or they just need some time to read code and think by themselves. And for those who claim that communication skills can be improved and that everyone can become familiar with pair programming, the answer is simple: not everyone can have excellent communication skills, in the same way that not everyone can become an excellent programmer. Therefore, not everyone can be equally productive when working solo and when working in a pair.

Moreover, pair programming requires excellent synchronisation between pair members, even in the simplest things. Both pair members have to start and stop working at exactly the same time; they should take breaks together and they should take their days off or holidays together [2]. So, what is going to happen if one of the two has to be absent for a few days? If the other member keeps working without his pair, then that kind of programming will not be very… pair. On the other hand, supposing another programmer comes in as a replacement, it will take him some time to adapt to the project and become familiar with his new pair. And what if there are no programmers available to replace the absent one [3], or all of them are working with their own pairs? So, in some cases pair programming is ineffective, as a number of problems may appear.

Conclusion

In conclusion, pair programming is something that everyone should try. Some may find it extremely interesting or helpful; others may hate it. This is absolutely reasonable, as every person has a unique personality and a unique coding style. However, the reason for writing this response article was to point out some negative aspects of pair programming that were not covered in the original article. In my opinion, the author has overrated the benefits of pair programming, while he/she has not taken into account some problems that may arise. I personally believe that pair programming can be very useful in some circumstances, such as when tutoring an inexperienced programmer or when writing a crucial part of a program where more than one person is necessary in order to find the optimal solution. But in most cases coding is a lonely job…

References

[1]. https://blog.inf.ed.ac.uk/sapm/2014/02/17/why-pair-programming/

[2]. http://mwilden.blogspot.co.uk/2009/11/why-i-dont-like-pair-programming-and.html

[3]. Cockburn, A. and Williams, L. (2000). The Costs and Benefits of Pair Programming. In Extreme Programming Examined. ISBN 0-201-71040

Response to Article: “Earned Value Management is not a mathematical game” by s1335336

This is a response article to “Earned Value Management is not a mathematical game” by s1335336 (https://blog.inf.ed.ac.uk/sapm/2014/02/11/earned-value-management-is-not-a-mathematical-game/)

Paper Summary

In the article, firstly, the basic concept of Earned Value Management (EVM) is introduced. Secondly, a practical project case is presented in an interesting way, to give a more comprehensive explanation of the use of EVM in a real-life situation and of how EVM benefits the whole project management process. Thirdly, factors that create negative impressions of EVM are mentioned. Fourthly, considerations of how to improve EVM performance in terms of its three basic elements – Planned Value (PV), Actual Cost (AC), and Earned Value (EV) – are discussed.

Merits and Drawbacks of the Article

I agree with the author that EVM brings substantial benefits to the project management process. As we know, EVM provides an estimate of whether the cost and performance of a project are acceptable, by comparing and calculating PV, AC, and EV. Of course, many other methods can also be used to measure project performance and progress, such as Function Point Analysis and other cost estimation methods. However, many subjective human judgment errors can be made when using those “other methods”, leading to inaccurate estimates of progress and cost. EVM measures project performance and progress in an objective manner, providing a more reliable estimate. Using PV, AC, and EV, the project progress can be expressed in a mathematical way; numbers are the evidence for every conclusion.
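For readers unfamiliar with these calculations, the standard EVM indicators are simple functions of the three base values. Below is a minimal Python sketch; the project figures are invented purely for illustration:

```python
def evm_indicators(pv, ev, ac):
    """Standard EVM indicators from Planned Value, Earned Value, Actual Cost."""
    return {
        "cost variance (CV)": ev - ac,      # positive -> under budget
        "schedule variance (SV)": ev - pv,  # positive -> ahead of schedule
        "CPI": ev / ac,                     # cost performance index
        "SPI": ev / pv,                     # schedule performance index
    }

# Hypothetical project snapshot: 100k planned, 80k earned, 95k spent.
print(evm_indicators(pv=100_000, ev=80_000, ac=95_000))
```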

I agree with the author that EVM provides a mathematical way to estimate the cost and performance of projects, but that it is not a mathematical game. Numbers do present a great many facts about the project that one can rely on to make proper changes to the project plan; however, numbers are not everything, and they are only valuable when supported by facts. For example, if an important step of the project is treated carelessly, or the feasibility of the project is too low, the project will still be a disaster even though the mathematical results look healthy.

I also agree with the author that EVM is not one hundred percent successful; many factors can lead to non-ideal results, such as bad project planning and tracking, and a lack of reliable data from the project.

Besides, I think this article can be improved in the following aspects.

On one hand, the article presents how EVM is put to good use in the example. I think a summary should be added to conclude how EVM benefits the whole project. As far as I can see from the article, it can be summarised that EVM gives better insight into a project’s status. By using EVM, the project tracking and control process, as well as the decision-making process, can be better implemented. Whenever the project progress or the actual cost is unacceptably biased, EVM can gracefully help to analyse the situation and come up with solutions to mitigate it and reduce the overall project risk.

On the other hand, I think methods to measure earned value are also worth mentioning. Although the calculations involved in EVM are not complicated, accurately measuring EV is not that easy, and it is the key point of EVM. There are some common measurement methods of EV [1]; a short sketch of one of them follows the list.

  (a) Linear growth measurement: the overall cost is allocated proportionally to the different parts of the project, but within each sub-part the cost is allocated equally. EV is recorded according to the completion percentage.

  (b) 50-50 rule: 50% of the cost is recorded at the beginning of the task; the other 50% is recorded at the end of the task. This method is more suitable when the task has multiple sub-tasks.

  (c) Quantity measurement: the cost is allocated equally to each small part of the task. EV is recorded according to the number of completed parts.

  (d) Node measurement: divide the whole project into multiple nodes and assign each of them an EV. When a node finishes its task, the EV of the task is recorded and added to the overall EV.
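To illustrate how mechanical these rules are once chosen, here is a small Python sketch of the 50-50 rule from (b); the task data is invented for illustration:

```python
def earned_value_50_50(tasks):
    """EV under the 50-50 rule: half of a task's budgeted cost is credited
    when the task starts, the other half only when it finishes."""
    ev = 0.0
    for budget, started, finished in tasks:
        if finished:
            ev += budget
        elif started:
            ev += 0.5 * budget
    return ev

# Three hypothetical sub-tasks: one done, one in progress, one not started.
print(earned_value_50_50([(10_000, True, True),
                          (10_000, True, False),
                          (10_000, False, False)]))  # -> 15000.0
```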

More about EVM

Based on the research I did about EVM, I’d like to mention more about it, to stress some ambiguous points.

First, if a project has a quantified plan of its work, progress and budget, then EVM is a good choice for that project. [1]

Second, EVM can help decide what is to be done in the future work of the project; this is the reason why EVM is not only a tool for control and tracking, but also a tool for planning. [3]

Third, a huge amount of project data needs to be recorded in the EVM process. When the data comes from different teams, different departments, or even different companies, the situation is even harder. As a result, in order to make good use of EVM, teamwork and communication are of great significance. [1]

Fourth, EVM needs to be used together with other project management methods. Related methods, such as Gantt charts and milestone charts, can greatly improve the performance of EVM. [2]

Fifth, trying to redefine the process of EVM is not a good idea. Changes to EVM can affect a great deal of the project’s progress, or even lead the project into disaster. [3]

Conclusion

In conclusion, I appreciate the usefulness of EVM in project tracking and control, as well as the accuracy of its mathematical results. Project risk can be meaningfully reduced by using EVM. What’s more, I have added some extra comments on the measurement methods for EV, which are not as easy as the other EVM calculations. Furthermore, I have mentioned some points worth considering in the EVM process.

References

[1] Fleming, Quentin W., and Joel M. Koppelman. Earned value project management. Project Management Institute, 2000.

[2] Ferguson, J., and Karl Heinz Kissler. Earned value management. No. CERN-AS-2002-010. 2002.

[3] http://blog.sina.com.cn/s/blog_665509880100ixia.html

“Design Patterns from a Junior Developer perspective” response article

This entry is a response to the “Design Patterns from a Junior Developer perspective” blog post written by s0954168 [1].

In the article, the author explains how junior programmers may have difficulty in understanding and using design patterns correctly. Additionally, significant portions of the article are backed by the author’s experience and the attempts that were made to learn and apply these common techniques used to aid programmers in solving a given problem.

Introduction

Overall, I agree with the vast majority of things said in this article. Coming from a similar background, I can appreciate the problems that inexperience brings in fully understanding the benefits of a particular design pattern. However, as is implied by the use of the word “Junior” in the title, I feel that the article has not focused on the major obstacle to truly appreciating these patterns: experience (or the lack thereof).


Origin:

The author talks about the origin of design patterns by mentioning “countless systems implemented in the past” and “somebody has already solved this problem for you”. While this makes perfect sense as a logical explanation, it misses out on the finer detail. When a clever solution to an existing problem can be abstracted out, it can be reused, both to discourage other people from making the same mistakes and, more importantly, to establish a common understanding of the solution. So the formal creation of the pattern comes after someone has already used it and has determined that it is applicable in a general sense to a particular class of problem. The more people there are working in a particular frame of reference, the more solutions may potentially be created. Thus, the popularity of a programming paradigm can influence the number of design patterns available.

What I am trying to say is that design patterns aren’t a complete set of ways to solve a problem. More patterns will be created, especially if other programming paradigms become more popular, because the existing patterns are tailored to existing problems. The danger for a junior developer is the ignorance exposed by assuming that every problem can be solved with an existing technique.

A related problem is that, because of this perceived completeness, inexperienced programmers may try to fit their problem to a pattern, which is the exact opposite of the intended purpose of a design pattern.

I know I am guilty of this. I think most beginner programmers starting out in larger projects, which could benefit from design patterns, are guilty of this too. I love this quote from an answer on Stack Overflow [2], which sums up both the author’s opinion and my own: “Novice programmers don’t use design patterns. They abuse design patterns.”


Design patterns example

In the article, the author gives an example of a single design pattern: the Singleton. For readers to completely appreciate what the author is saying about the complexity of understanding where to use a given pattern, I believe an additional pattern would have been welcome. The Singleton is the simplest one, and adding others like Dependency Injection [5] or the Bridge pattern [6] may have helped cement the case about the difficulty of knowing when and where to use them.
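For readers who have not met it, a minimal sketch of the Singleton in Python (my own illustration, not from the original post) shows how little code the pattern needs, which is partly why it is so often reached for:

```python
class Config:
    """Singleton: every instantiation returns the same shared object."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Config()
b = Config()
assert a is b  # one instance, however many times the class is called
```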


When to use design patterns?

In this section, the author talks about knowing which pattern to use, as well as knowing when not to use a pattern. As the author mentions, using patterns in every possible situation may lead to increased code and architectural complexity: the very thing that use of a pattern is meant to avoid. Contradictorily, the author mentions that refactoring is a “perfect example” of when you might be able to insert a pattern. I say contradictorily because when refactoring, the goal is generally to clean up a particular solution, and adding a pattern may needlessly introduce more complexity. However, this may be a semantic issue; I classify a solution that needs its entire logic altered (which is what introducing a design pattern may end up doing) as rewriting as opposed to refactoring. Nevertheless, adding patterns when the intent is to tidy up a solution seems contradictory, and this is a point where I disagree with the author.


Experience:

The one case that I feel the author has failed to mention, and that I have alluded to earlier, is that experience with these patterns and with solving architectural problems is the key to understanding when to use a design pattern. Of course, the title and the article are aimed at junior developers and therefore less experienced individuals. However, the case is never made explicitly that experience matters, so it may be hard for a reader to see exactly why a junior developer may have more trouble in appreciating the use and abuse of design patterns. Peter Norvig [4] claims in his blog post that, just like in any other discipline, around 10,000 hours of work are required before you truly know the domain. A derived conclusion is that a programmer needs to spend a comparable amount of time on general architecture before being able to recognize the applicability of a design pattern to a given problem.


Conclusion:

Even though the blog post is about how junior developers can make use of design patterns, the article could benefit from talking in greater detail about how design patterns help solve a particular problem. Understanding exactly how a particular design pattern applies to a problem will not only allow a reader to see the difficulty in choosing a pattern, but also why an inexperienced programmer may abuse them. All in all, I agree with the author on grounds of personal experience (I also attempted to use a design pattern for EVERY problem I encountered) and would like to commend the author for writing a post on a topic that every novice programmer should have at the forefront of their thoughts when starting out in large-scale project development.


References:

  1. https://blog.inf.ed.ac.uk/sapm/2014/02/14/design-patterns-from-a-junior-developer-perspective/

  2. http://programmers.stackexchange.com/questions/141854/design-patterns-do-you-use-them

  3. http://stackoverflow.com/questions/978489/how-important-are-design-patterns-really

  4. http://norvig.com/21-days.html

  5. http://en.wikipedia.org/wiki/Dependency_injection

  6. http://en.wikipedia.org/wiki/Bridge_pattern

Response to “Analysis eight secrets of the Software Measurement”

Introduction

This is a response to the article “Analysis eight secrets of the Software Measurement” posted by s1301111 [1].

In this post [1], the author divides his article into seven sections based on the eight secrets mentioned in his referenced article [2]. The author discusses the eight secrets of software measurement by summarizing the referenced article and describing his own experience and opinions. I think the post is a little too long and lacks a main point, so some sections could be cut. This post will mainly discuss the one point in his article which I think is the most important secret of software measurement: “Both establishing and keeping a measurement program are hard”. For the other sections of his article, I’ll choose a few and briefly make my points.

“It’s not about the metrics”

In this section, the main point the author wanted to convey is that measurement is not about the metrics. Given that, the precise definition of metrics is not very important, so it was not necessary to use such a long paragraph to explain “metrics” and even present six principles for establishing good metrics. I agree with his main point: what matters is the final goal of the measurement program, rather than the metrics, which are just a tool.

“Success comes from channeling an organization’s pain into action”

The author’s discussion falls into two parts. The first is that a strong motivation is important for improving and understanding measurement. The second is that understanding the meaning and difficulty of software measurement is a vital step towards achieving the plan. I agree with his point: a strong motivation gives us the strength to take action. However, we also need to consider the difficulty before we act, lest the measurement quickly be found inappropriate.

“Both establishing and keeping a measurement program are hard”

In this section, the author argues that both establishing and keeping a measurement program are very important and difficult. I agree with him, and I’ll focus on this section. When dealing with a measurement program, we need to pay attention to both establishing and keeping it; neither step can be ignored.

1. Establishing

For the establishing step, we need to take several factors into consideration before defining a measurement program. Building an empirical model is necessary, which means we must identify the technical, cultural, organizational, and business issues [3]. These involve both technical and business goals. I think the technical goal is essential and indispensable. Sometimes we do a project at school that will never be taken to market, so the technical goal is the most important thing to consider: we need to analyze the procedure and find resources to define the method. In industry, however, companies often concentrate on the technical goal and ignore the business goal. To successfully establish a measurement program, developers need to first consider the goals in the context of the organization’s underlying business strategy [3], because a measurement program may be forced to stop due to a lack of funding or profit. I think it is similar to starting a business. We may come up with lots of entrepreneurial ideas, from a transnational corporation to a small retail business selling hand-made products. However, before we start, we need to consider whether we can make a profit, how to find funding, who the customers are, and so on. The business goal is a very important factor, and it is genuinely difficult to get started because of the many complicated and unexpected factors involved.

2. Keeping

After successfully establishing the measurement program, it is still difficult to implement and sustain. We need to review the measurement program regularly, to ensure it operates appropriately and to optimize it gradually [3]. If it does not follow the process laid out in the early plan, we need to find the reasons, improve the program, or revise the plan in light of the current state.

Beyond technical measurement, the business side is also an important thing for a company to measure in deciding whether it can continue; we should keep track of it continuously, to avoid great losses. When I was at university, there was a special study room business that aimed to give students a comfortable study environment. Because our study rooms and library closed very early in the evening, students often had no place to study. This company spotted the opportunity and set up a study room offering high-speed WiFi, coffee, quiet rooms, group study rooms and so on. Above all, it was open all day. It was a perfect place for students, and we all went there to enjoy the comfortable environment while studying. However, we never imagined that it would be forced to close after one year with a loss of nearly fifty thousand pounds. Because it provided such good conditions, it cost a great deal to run; yet its customers were mainly students, who could not afford high fees for a study room, so the fees could not be set high and the revenue could not cover the costs. I think it was really a good plan; however, they did not track the state of the business continuously and revise their plan, and by the time they noticed the scale of the loss they had no choice but to close the study room.

Others

For the other measurement secrets the author described in his article, I think he simply agrees with the original article without adding anything. As his article is a little long, I recommend that these sections be cut. To extend the discussion, I’ll now provide an additional secret of software measurement which I think is important.

Software measurement receives a lot of attention from developers, but it is often ignored by practitioners. Worse, practitioners tend to trust the empirical evidence for a measurement without understanding its technical grounding. Developers should therefore communicate closely with practitioners so that they understand the valid uses of a software measurement [4]. This gap should be paid more attention.

Conclusion

In this article, I have mainly discussed one key secret of software measurement: that we should pay attention to both establishing and keeping a measurement program. In addition, I have given my opinion, based on the author’s article, that measurement is not about metrics and that motivation, while very important, must be turned into action. Finally, I have provided an additional secret which I think is also very important: we need to bridge the gap between developers and practitioners.

References

[1] https://blog.inf.ed.ac.uk/sapm/2014/02/14/analysis-eight-secrets-of-the-software-measurement/

[2] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=207238

[3] http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=582974

[4] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=582973

Response to article: “Version Control: Important for Individual Projects?”

Introduction

This is a response to the article “Version Control: Important for Individual Projects?”.

The article tackles a subject that should be of great interest to all the students in this course. Each and every one of us should ask ourselves the question in the title. Hopefully all of us who have already finished the implementation part of our thesis project have used version control. The ones that will be doing their project next year should start thinking about the advantages of using version control.

In this article I will set out both my agreement and disagreement with the original article. Where necessary I will support my statements with examples from my own experience.

I have recently finished an industrial placement at a software design company. This placement formed my master’s thesis project, and I was thus the only developer. The company I worked for used version control for every single project and advised me to do the same with mine. After this experience, I strongly believe that version control should be used in any software project, including those with only a single developer. Indeed, as Saikat Basu says in his article [1], version control should be used in “anything that has to do with typing an alphabet on a document”.

Discussion

I agree with the author with respect to the fact that version control is “essential to maintaining an organised structure” by allowing all team members to work on the same source code from different locations. However, it is important to emphasise that this is an advantage of having an online shared repository more than of using version control itself. During my time at the company I was able to see how people on different continents successfully used an online shared repository combined with version control to work on the same project. Moreover, it was a very good way to share ideas and documentation. Although version control allowed them to keep track of changes and updates, this would not have been possible without a shared copy of the project.

The author states that “These advantages do not apply for an individual developer”. It is not clear whether, in this context, it is version control or the online repository that the individual developer cannot take advantage of. I believe that having an online repository is useful even for individual projects, since it gives you a distributed backup. Some people prefer to use the company’s computer while they are at work and their own when they are at home; having an online repository allows you to work on your code from different systems.

I agree with the author that reverting to older working versions is very useful for any type of project, individual or team-based. Throughout my project I was often updating code, improving algorithms and adding new features. Having older versions backed up assured me that even if my changes broke the code I could always return to the original state of the project. I personally comment out code that I don’t want to use until I am sure it is no longer needed, but I have seen people who simply delete their old code before writing the new code so it doesn’t clutter their screens. Source code control allows them to have a clean screen and simply revert to an old commit rather than trying to remember what the working code was.

“Keeping track of who did what” is indeed a very useful feature of version control, especially if the team is very large. I saw in the company how easy it was for a developer to track down a colleague who had made a change he was interested in. They were able to discuss the problem, modify it together or even revert to the old code without inconveniencing the entire team. It is true that a lone programmer would not have this problem, but there may be cases where you want to share your code with someone else. For example, I once shared my code with my boss so he could help me fix a bug. Later on I broke his code and was able to check the project history, see where the bugs were introduced, and revert to his working version.

I agree with the fact that version control allows you to “expand your project in the future”. In my case version control enabled my boss and his team to continue working on my code after my departure. They can now experiment with new features without interfering with my working code. They can even maintain multiple versions of the developed software product and easily check the differences between them.

It is true that merging multiple changes to the same piece of code using version control “can be a pain and cause aggravation”, but it is better than one person doing all this by hand. I disagree that this is not the case in an individual project, since merge conflicts between multiple branches can still happen even if you are the only developer. The best part of being a lone programmer is that you don’t have to overthink the merge, and it is less likely that you will have to resolve the conflict manually.

Yes, familiarising yourself with the version control system is a “point of stress”. Even though the company I worked for had source code control set up, it still took me some time to get used to it. If they hadn’t had it set up, I would have taken the time to do it on my own, since version control does indeed make your life easier.

Conclusion

I think that version control should be used in any software development project irrespective of the team size. For the student working on his thesis project, using version control with an online repository will give him one less thing to worry about: losing or irreversibly breaking code. Moreover, you never know who you will need to share the code with or if it will be used in the future.

References

[1] Saikat Basu, “Not Just For Coders: Top Version Control Systems For Writers”, May 2013, http://www.makeuseof.com/tag/not-just-for-coders-top-version-control-systems-for-writers/

Response to “Why testing is not search of bugs?”

Introduction

This post is a response to s1368322’s post “Why Testing Is Not Search Of Bugs”. The post in question discusses testers seemingly as a separate entity from developers, and this response is written on that basis.

In their post, the author makes the case that the main purpose of testing is to miss as few bugs as possible. Whilst I can agree with this general point, I would adjust it slightly to say that the main purpose of testing is to miss no bugs, and expand it to say that testing also exists to ensure that developed software meets its requirements and to provide a level of assurance to the client or user.

However, I believe that one of the premises that the post is built on, that there is a difference in testing approach depending on the size of the project, is wrong.

Small-scale vs. Large-scale – is there really a difference?

The author makes the case that the testing procedures and objectives are different between ‘large-scale’ and ‘small-scale’ projects.  This is an argument that I have been unable to agree with.  For any development with a formal testing phase, the same procedures should take place with the same ultimate objectives.  In both cases, the testing phase and subsequent test cases should be designed to cover all available functionality within the system.  The ultimate aim of any testing phase is to provide a development that is as defect-free and reliable as is possible.

Small-scale

The author makes the assertion, for small-scale projects, that the “vast majority of testers are convinced that testing” is a “search of bugs”. Again, I would argue that this is not necessarily the case. It may indeed be true of developers carrying out unit testing. However, I believe that if dedicated testers are used at the system/acceptance testing phase, then most of these testers (or prospective users) are aware that their main role is to ensure that the software released meets its requirements and works as expected.

The author goes on to argue that testers like to find defects as it is a “visual representation of work done by them”. I would agree that there is a sense of satisfaction, on the part of the tester, by finding a defect, particularly an important one. However, if a development was to pass through testing without a single bug being found, most testers would not see this as a failure on their part or an indicator that they had not been working effectively. As long as no defects passed through the testing phase unidentified, I believe that the majority of testers would see this as a successful test phase. It is possible for test phases to be considered outstanding successes with very few defects identified. I would argue that the success of a testing phase is more accurately considered in terms of code and functionality coverage and the proportion of the test cases completed within the testing phase.

The author then goes on to suggest that the ‘least stable’ parts of a development would be tested first. Again, unfortunately, I disagree. When a development is passed to the testing phase, there should be no parts of the system considered less stable than others, particularly on smaller projects. It may be true that certain test cases are prioritised, but this would tend to be based on importance of functionality or complexity of code (i.e., risk-based testing) rather than on which areas are less stable.

Large-scale

When the author begins to talk about testing in large-scale projects, I find more to agree with. In fact, some of the assertions echo the arguments made above for small-scale developments, reinforcing the belief that ‘testing is testing’, irrespective of how large or small a development is.

However, I disagree with the statement “Anyway, there can be a problem with existing functionality, and in most cases, this functionality is not tested properly”. Whilst it would be foolish to suggest that the existing software being integrated into will always be tested fully, in most projects that I have worked in or been aware of, when the need arose, there was a dedicated period of regression testing to ensure that no defects had been introduced to the pre-existing functionality.

The author also states that most of the testing will be completed on parameters that can be expected in normal use. This is a reasonable statement. However, the claim is made that testing non-standard scenarios before all basic features are completed can be inefficient and waste time. This can be true but only within the testing of components. As an example, if a financial software package was being tested, whilst non-standard inputs for a tax calculation may not be tested first, it would make sense to test them whilst testing that tax calculation rather than moving onto another component and coming back to it when everything else had been completed.

Changing the approach?

In the final section, on how to change the approach, there is much to agree with.  An effective test phase depends on well-specified and well-defined test cases which cover all possible aspects of the functionality and reflect how the software will be used after release.  Documentation of test cases is also important for communication within the team and with developers and management, and as reassurance for the client.  It is also important to reflect on the test phase and learn from any mistakes or omissions.

Conclusion

As discussed, my greatest disagreement with this post is the assertion that there is a fundamental difference between testing large-scale and small-scale projects, and with the claims made about the testing objectives and procedures for small-scale projects.  The author is correct in proposing that the testing approach needs to change, but I would argue that the proposed practices are already in use in most competent test teams.

Developers are just as Responsible for Feature Creep as Clients

“Feature Creep” is the tendency for extra, unnecessary features to gradually “creep in” to the basic requirements of a product. It is frequently cited as one of the top reasons why software projects become bloated, run late, and fail.

Who is to blame for feature creep? As developers, it is tempting to blame indecisive clients, or project managers eager to please their bosses by promising new functionality. I would argue, however, that many common developer behaviours are equally to blame, and that it is our responsibility to alter our bad habits, either through self-discipline or by advocating for systematic changes in the workplace.

The three key patterns in developer behaviour which I believe contribute to feature creep are:

  1. Boredom, or the tendency to want to work on interesting problems rather than necessary work
  2. Perfectionism and needless attention to detail
  3. Fear of asserting their own knowledge and expertise

Developer Boredom

Most developers I know, including myself, would rather work on interesting technical challenges than mundane tasks. A passion for the difficult can sometimes be a benefit, but often this desire to work on complex tasks or new technologies can bias a developer’s judgement. Sometimes the tasks which are most important to the success of a project are the boring ones such as implementing business logic or adding widgets to the GUI.

The obvious way to combat this behaviour is to practice disciplined self-reflection. When considering working on some complex algorithm, ask yourself: “Is this really necessary to the success of the project? Or do I just think this is cool?”

This solution may be unsatisfactory for some people. Perhaps they believe that it is impossible to squash a programmer’s natural intellectual curiosity, or that it would be bad for employee morale to always prioritize the boring-but-necessary tasks. An alternative solution for these people might be to see if your workplace is open to the idea of compartmentalizing this curiosity.

Many companies (most notably Google) are known for adopting a practice they call “20% Time”. This is where employees are given a small percentage of their work week to work on whatever tasks related to the company they feel like. The benefits of this policy are twofold: firstly, employees are more amenable to doing some of the boring-but-necessary bits of software development, as they now have breathing space to explore more interesting technical challenges in their own time. Secondly, if something an employee works on in their 20% time does turn out to be valuable, the company can reap the benefits.

Perfectionism and Procrastination

The flip side of my previous point is the tendency of developers to get bogged down implementing frivolous or needless features. The most notorious example of this is a developer’s tendency to add “Performance” as a requirement in projects which have little to no performance constraints.

Steve McConnell tells an entertaining anecdote regarding this behaviour in his book Code Complete. He recalls a time when he was sent in to fix a software project which was long over time and budget, but still not even providing basic functionality. Upon demonstrating his completed version of the project to the old engineering team, a senior engineer criticised his implementation for taking over twice as long to run as their original implementation.

At this point, McConnell quipped that the difference was that his system actually worked. If he was to relax that constraint, he could deliver a product that ran in zero seconds!

The point here is that developers will often subconsciously add their own set of requirements without considering their benefit to the main aims of the project.

A systematic way to combat this behaviour is to introduce a technique from the Scrum methodology known as Sprint Iterations. This is where software iterations are broken down into small – usually weekly – focussed tasks. In contrast to gradually implementing features over a long period of time, this technique makes it more difficult for developers to drift onto unnecessary features, as they have a clear short-term goal to deliver by the end of the sprint.

Having the backbone to say what you know

Even with the most competent, focussed developers in the world, people still often work in an environment with clients who are constantly suggesting “one more feature” and project managers who are eager to please by over-selling what can be done in the time available. Many developers I have spoken to believe these to be “the facts of life” and that there is nothing that can really be done about this. I don’t think this is entirely true.

In my time working as an intern at Amazon, I was exposed to a lot of their philosophy. One term which came up time and again was the concept of “Learned Helplessness”. The premise of “Learned Helplessness” is that even though developers are frequently in the strongest position to give advice on how difficult a new feature will be to implement, or how likely it is that the current workload will be finished on time, they will often not speak up because of a few bad experiences dealing with management in the past.

The developers at Amazon were determined to not let this happen to them, and made a point of actively “Pushing back”. “Pushing back” was their term for joining together as a group and actively resisting management requests for additional features when they thought that they were unnecessary.

This was perhaps one of the most successful working environments I have experienced because developers had just as much decision making power as the management. This allowed both groups to feel comfortable sharing their relevant knowledge and expertise.

Developers in all workplaces need to have the courage to speak up when they think that the project scope is running away from them, and shed the notion that “The customer is always right”.

Conclusion

Feature creep is a force that has been responsible for a multitude of project failures, and is often attributed to indecisive clients and failures on the part of management. However, I have hopefully convinced you that we developers often behave in ways that subtly contribute to feature creep. Developers need to be disciplined, take personal responsibility for these behaviours, and campaign to make changes in their workplace which discourage others from contributing to the creep.