Pair Programming, Best Development Practice?

This discussion is a response to s1369981’s post, “Why Pair Programming is the Best Development Practice?”

In their post, the author tries to convince the reader of the benefits of pair programming, highlighting the positives that this agile development style brings, such as increased knowledge sharing and project ownership, stating that:

“Pairing is about collaboration and teamwork, thus it will shine… [in] knowledge sharing and project ownership…”

Despite the author’s arguments, I’m not completely convinced of their stance, mostly due to the exaggerated claim that pair programming is the best development practice in every scenario of software development.

I think that while there are certain situations in which it can be beneficial (for example, between two novices), extending this to every programming situation is not sufficiently justified.

The author doesn’t see this because they neglect to address:

  1. The economic drawbacks of the practice.
  2. Developers’ natural opposition to this working style.
  3. The fact that programming might not be so mundane if you enjoy where you work.

The counter-arguments I present will show that the author has failed to consider these important points in their article and has been too general in arguing that pair programming is the best development practice for all situations.

But Work is Boring?

In the post, s1369981 makes certain claims that I don’t particularly agree with, such as:

“… most programming tasks in a software company will focus on boring and repeatable tasks with very few of them being creative and challenging.”

This pessimistic view of the programming world after university suggests that a programmer’s only hope of an enjoyable time is to pair up, thereby being distracted from their “boring and repeatable tasks”.

This solution to improving your enjoyment of your job would only ever be temporary, as the novelty of pair work wears off.

Finding a more exciting company, suited to your personal tastes in programming, would help you to enjoy your work more without needing the distraction of a partner to make it bearable. Also, simply increasing communication amongst team members working on different projects would improve team spirit and cooperation, and make it feel much less like you’re working on your own.

I’m stuck!

Speaking from personal experience on my internship, I found that rather than pairing up with more experienced senior developers while programming, the newcomers (or contractors) to the team simply sought out their help when stuck.

This practice produced benefits similar to those of a senior developer working with a novice, in that the more experienced developer could pass on valuable knowledge and use their expertise without feeling restricted by having to effectively babysit the new employee.

This also left the senior developer with time to apply their invaluable knowledge elsewhere by programming solo, where they would be able to maintain their high productivity. [1]

As mentioned before, pair programming between two novices, or between a novice and a competent developer, would be helpful: on their own they’d undoubtedly have low productivity, but together they can boost their learning, which allows new recruits to get up to speed quickly. [1]

Economics

Something not mentioned in the author’s article is the economic viability of mass pair programming, as the team would need more employees to manage the same number of projects.

Controlled studies found that it wasn’t economically viable: a significant decrease in development time was found only for simple systems, and there was no significant difference in the correctness of solutions. [2]

In fact, in this large empirical study, Arisholm et al. found that the results did not support the general consensus that:

“… pair programming reduces the time required to solve tasks correctly or increases the proportion of correct solutions.”

Instead, they discovered that, in general, there is an 84% increase in the effort required from the programmers to perform the prescribed tasks correctly, where effort (or cost) is the total programmer-hours spent on the task. In other words, a task that costs a solo developer 10 programmer-hours would cost a pair roughly 18.4 programmer-hours, or about 9.2 hours of elapsed time.

These empirical results give us a more concrete measure of the benefits of pair programming across a variety of programmer skill levels, and I believe this evidence to be more reliable than anecdotal remarks from people who’ve tried out pair programming, which are open to bias.

The findings back up the reasoning that for a team to keep operating at its current level, managing as many projects as it does now, it would have to hire more employees to maintain that output, even though the benefits of pair programming aren’t so great.

It ain’t all fun

The author’s conclusion takes a simplified view of the situation by suggesting pair programming should be adopted because:

“Pair Programming is the best development practice, because it is fun!”

But as the author acknowledges earlier in the article, there is strong opposition to this, with people arguing adamantly against it. [3]

So certain people will not work well in pairs, no matter how many statistics or studies you throw at them, and I believe that if pair programming is going to be used in a team, it should be trialled for a set period during which productivity can be monitored.

As mentioned and described by s1369981, people should also be educated in how to properly undertake the pair programming process if they’re going to be working with it; this can help to eliminate common mistakes and incorrect assumptions about the practice.

Once the practice has been carried out correctly, management can gather feedback both empirically and from the developers who tried it, so that they can make a reasoned decision on whether it is a viable option for the team.

Here, developer input should be considered closely because regardless of whether it makes your programmers more productive, making them uncomfortable in their work environment will cause some people to quit.

Conclusion

There are some points in s1369981’s article that I agree with, such as the fact that pair programming can increase knowledge sharing and project ownership in a team.

However, applying pair programming to all forms of development is an overreach, given its economic drawbacks, the fact that some developers are opposed to paired work, and the weakness of the argument that only pair programming can make your job enjoyable.

I do believe that it still has its place, e.g. between two novices in a company or for complex tasks, as it can help to improve the correctness of code, but bear in mind that this comes at a price: increased overall effort. [1] [3]

Therefore, any adoption of pair programming should be evaluated on a case-by-case basis to see if it really is the “best development practice”.

References

[1] – Derek Neighbors, “Should Senior Developers Pair Program?”, November 2012, http://derekneighbors.com/2012/11/should-senior-developers-pair-program/ [Accessed on: 26th February 2014]

[2] – Erik Arisholm et al, “Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise”, 2007, http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4052584 [Accessed on: 26th February 2014]

[3] – Matt Ervin, “Pair Programming (give it a rest)”, November 2013, http://peniwize.wordpress.com/2013/11/17/pair-programming-give-it-a-rest/ [Accessed on: 28th February 2014]

Continuous Integration: Software Quality vs Resources

This is a response article to “Why don’t we use Continuous Integration?” [1] by user s1367762.

Introduction

In his post [1], the author describes his own working experience in a small start-up company where mostly one developer at a time worked on different small projects. As not all of these projects used Continuous Integration, the author gives a variety of reasons, based on his own experience, why not every software engineering company and project has adopted it so far.

This post discusses the various arguments that may prevent a company from implementing Continuous Integration for a software project. Ultimately, it is always a trade-off between software quality and resources.

Commit Frequency

The first point is that it may be difficult to make small incremental commits when a change takes a long time before the whole software works again. Intermediate commits would therefore break the build and the whole Continuous Integration pipeline.

I absolutely agree with that argument. When, for example, a new module is developed, the software as a whole may be broken at first. Yet not committing to the mainline at least daily contradicts the basic rules of Continuous Integration; see for example [2] by Martin Fowler.

However, is that really a problem? From my personal business experience, it is very common and easy to implement a new feature in a separate version control branch, often referred to as a feature branch. This branch may have its own Continuous Integration processes to ensure quality, but above all it does not break the main build. When the feature is in a stable condition, the branch can be merged into the main product and become part of the general CI process. Martin Fowler’s blog post [3] describes this process in more detail.

Mainline Only

The second point mentioned by s1367762 is that there may be code that is not really part of the software project, for example code used only by a few customers for very special use cases. Therefore, it does not make sense to commit this code to the mainline as suggested by Continuous Integration.

I absolutely understand this point. However, if some code is not really part of the product, there is no need for Continuous Integration for these few special modules. From my point of view, CI can still be implemented while ignoring such modules.

Automated Tests

I absolutely agree with this point: automating tests, especially for GUI components, is time-consuming and difficult. Furthermore, without good code coverage, Continuous Integration is less effective. However, it is still better than no Continuous Integration at all. This is clearly a trade-off between the time saved by not automating tests and the final software quality.
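To make the discussion concrete, here is a minimal, illustrative sketch of the kind of automated unit test a CI server would run on every commit (the PriceCalculator class is hypothetical and defined inline only to keep the example self-contained; JUnit 4 is assumed to be on the classpath):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical class under test, defined inline to keep the sketch self-contained.
    static class PriceCalculator {
        double totalWithDiscount(double total, double discount) {
            return total * (1.0 - discount);
        }
    }

    // A plain unit test like this is cheap to automate and runs unattended on every commit.
    @Test
    public void discountIsAppliedToTotal() {
        PriceCalculator calc = new PriceCalculator();
        // 10% off 100.0 should give 90.0 (within a small floating-point tolerance).
        assertEquals(90.0, calc.totalWithDiscount(100.0, 0.10), 0.001);
    }
}
```

Tests at this level are exactly the cheap wins worth automating first; the expensive GUI tests can follow later without blocking the adoption of CI.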

Appropriate Timing, Direct Costs and Project ROI

In these three points the author states that it is more difficult to introduce CI into an existing project that started without it. He furthermore describes learning to implement CI and operating build and test machines as expensive. Finally, he contends that implementing Continuous Integration is not worth the effort for short-term projects without long-term maintenance.

All these points are completely understandable. To my mind, they all lead to one question for the project manager: How important is the quality of my project? If it is not a major requirement, for example if the software is being used only for a short period of time, Continuous Integration is not worth implementing.

Summary

In summary, s1367762 demonstrates well why Continuous Integration is not always a good idea in software projects. However, especially for the first point regarding commit frequency, it is easy to work around the problem by using feature branches without completely losing the idea of Continuous Integration. Furthermore, if there are modules that do not really belong to the project, they can easily be ignored by the CI approach. From my point of view, a partly implemented CI is much better than no CI at all.

Finally, everything depends on the management decision of whether maintaining quality by investing time and money is desired for a project. The company I worked for never questioned the importance of quality, so Continuous Integration was implemented in sophisticated detail. However, if quality is not a major concern in some projects, as s1367762 describes from his business experience, it is absolutely reasonable not to implement Continuous Integration for them.

References

[1] s1367762, “Why don’t we use Continuous Integration? | SAPM: Course Blog,” [Online]. Available: http://blog.inf.ed.ac.uk/sapm/2014/02/14/why-dont-we-use-continuous-integration/ . [Accessed 27 2 2014].
[2] M. Fowler, “Continuous Integration,” 2006. [Online]. Available: http://www.martinfowler.com/articles/continuousIntegration.html . [Accessed 27 2 2014].
[3] M. Fowler, “FeatureBranch,” 2009. [Online]. Available: http://martinfowler.com/bliki/FeatureBranch.html . [Accessed 27 2 2014].


“Conservatism” in project management is more valuable than you might think

The following article is a response to the blog post “Conservatism has no place in project management” by s0952140.

In this post, the author laments that project managers frequently ignore the value of upgrading to a new language, framework or tool. Their central assertion is that it is almost always preferable to throw out old code in favour of a new pattern, framework or language:

“…You have to make a choice: be conservative, keep what you have, or try the new things. At many times in all projects these moments will arise. Should I keep to what I am doing or should I try something new. Always try something new”

Their reasoning stems from the belief that once a project reaches a certain age, it will reach an unwieldy complexity, at which point you have only two options: work fruitlessly on improving old code, or throw it all out and start fresh. The author believes this is an easy decision:

“Chuck your old code. If you had wrote some ground breaking algorithms and whatnot, you can still take that and insert in your new project. You lose nothing, other than the cancer.”

Dealing with technical debt is important, but the author puts forward a radically simplified view of the situation in most large-scale, long-term software projects. I can think of three reasons why “Conservatism” in project management is valuable.

  1. Working with new languages and frameworks comes with considerable risk
  2. The old software actually works and has been strengthened by years of development
  3. Developers are often biased towards using new technologies even when that technology may offer little benefit to the project

Just to clarify, the author’s definition of “Conservatism” is wide-ranging: not only a bias against throwing out old code in favour of using new languages, patterns and frameworks, but also a bias against using new tools. I actually agree with the author on the point about tools. Setting up a bug tracking tool or version control for a project takes hours at most, and the benefits of doing so are so enormous that any manager arguing against them is a fool.

For the other aspects however, I believe a small amount of “Conservatism” may have its place.

Dangers of working on the bleeding edge

Using a brand new framework or language for your project undoubtedly comes with advantages: it lets you take advantage of exotic paradigms or useful features not present in older frameworks. However, adopting a new technology also comes with significant risks.

Firstly, new technologies often have fewer libraries and tools. Here’s an experiment: go to Google and type “J” followed by any verb. Chances are that one of the top results is a Java library. If you decide to scrap your Java system in order to leverage the power of the Julia language, you may find that the benefits of the language’s power are offset by the lack of libraries and developers.

Secondly, newer technology is more likely to have bugs and to be subject to change. Developers often forget that many of the new technologies they are excited about are still in their experimental stages, and the framework implementation may be rapidly morphing during the life cycle of your main product. If one of those changes introduces a bug, you may find yourself spending more time fixing the framework than working on your own product. Remember, even a popular and stable new framework like NodeJS is still technically in beta (version 0.10.26).

Old software *works* and has a wealth of knowledge

Throwing out messy code is not as costless an option as the author suggests. At every point in the lifetime of a product, the developers will have learned more about the best way to implement such a product. Surely, the author reasons, if we were to throw out all the current complex old code and start again, we would build a far cleaner, more maintainable and robust system.

This line of reasoning is flawed because it assumes that the developers can remember all the lessons learned during the entire life of the product. For any sufficiently large system, it is definitely not the case that you can keep all of the implementation details in your head at once. However, one thing that can remember all these lessons is the code base itself. Much of the code that looks like a “mess” in a long lasting piece of software is actually code that acts as a workaround for browser compatibility issues, guards against a particularly subversive bug, or deals with a specific corner case for some users.

Often, the more likely scenario when starting a code base from scratch is that you will waste time making all the same mistakes again. Sometimes throwing out old code is the correct course of action, but the author’s assertion that it is always the correct action is misleading.

Inappropriate lust for new technology

Programmers are often so excited by the prospect of using a new and interesting technology that they fail to consider whether adopting it would have any tangible benefit for their business’ product. This was exemplified for me by a light-hearted conversation I overheard during my time interning at Amazon:

Programmer A: “<Complaining about an old Perl web system> Look at this code – it’s a complete mess! We should switch to NodeJS. It’s way more maintainable AND it scales better.”

Programmer B: “We’ve been adding features to this code base no problem, and this system has already scaled to millions of users. What are you talking about?”

Programmer A: “Look – I just really want to use Node <laughs>”

The point here is that a programmer’s motivations for adopting a new technology may not line up with the needs of a project. While increasing programmer morale definitely has some tangible benefits, sometimes having a “Conservative” project manager to consider the business impact of such a decision (e.g. will changing to this new technology save us money? Will the effort of making the change take time away from other tasks?) is of value.

Conclusion

Managing technical debt and adopting new tools and technologies undoubtedly has value, and the author of the original article is right to criticise project managers who may undervalue it. However, the first line of their article is telling: They admit that there may be some cases in which “Conservatism” has a place, but only “Very Rarely”.

Hopefully this article has convinced you that there are several reasons to apply a little “Conservatism” in project management, and that these reasons may be more prevalent in long term, large scale software than the author believes.

Response to article “Design Patterns: are they so difficult to understand?”

This is a response to the article “Design Patterns: are they so difficult to understand” posted by s0943644 [1].

In the article, the author talks about the importance of using design patterns in software development and notes that people are still arguing about the best way to study these patterns. The author then covers the single responsibility principle and the factory pattern, which he used in real project development, and explains the steps he took to understand them.

Yes, the essential part of understanding design patterns is to code with them.

I totally agree with the author on this point. In my own case, I took the course Software Engineering with Objects and Components (SEOC). In the course, we covered design principles as well as several design patterns such as the factory pattern, singleton pattern and composite pattern. I have to admit that, as a student with little experience in software development, it was very difficult for me to understand these abstract concepts.

At that time, I was also developing a game in Java. In the game, users collected gems in a house with several rooms. Players could move between rooms and turn left or right, and were shown photos of the room from different perspectives as if they were inside it.

At first, I wrote all the functions in one class and it worked well. However, when I tried to add a new “room” to the “house”, I had to change almost all the methods in the class. It was a disaster: the project had low extensibility and was also hard for others to read. I then went back over the design principles I had learnt in the SEOC course and reviewed the code. Unsurprisingly, I found that no design pattern had been used and no design principle obeyed, and I realised that I could apply these principles to make the project better. For example, to achieve single responsibility I separated the class into several classes, each responsible for only one purpose. For the open/closed principle, I used interfaces and let classes with similar functions implement them, so that they were open for extension and closed for modification. To be specific, one function of the project was to retrieve photos from different sources such as Flickr [4] and deviantART [5] and use these photos to build the “house”. After modifying the code, I implemented this function with a PhotoFinder interface and several implementing classes, such as FlickrPhotoFinder and deviantARTPhotoFinder, that enable the project to retrieve photos, letting the class MyPhotoController decide which implementation to instantiate. I then realised that this satisfies the factory method pattern; a simplified sketch is shown below.
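The following is a minimal, illustrative Java sketch of the structure described above; the method names and the exact selection logic are my own assumptions rather than the original project’s code:

```java
import java.util.Arrays;
import java.util.List;

// Product interface: anything that can retrieve photos for building the "house".
interface PhotoFinder {
    List<String> findPhotos(String query);
}

// Concrete products, one per photo source.
class FlickrPhotoFinder implements PhotoFinder {
    public List<String> findPhotos(String query) {
        // The real project would call the Flickr API here.
        return Arrays.asList("flickr:" + query);
    }
}

class DeviantArtPhotoFinder implements PhotoFinder {
    public List<String> findPhotos(String query) {
        // The real project would call the deviantART API here.
        return Arrays.asList("deviantart:" + query);
    }
}

// The controller plays the creator role: it decides which PhotoFinder to instantiate,
// so adding a new photo source does not force changes in the code that uses the photos.
class MyPhotoController {
    PhotoFinder createFinder(String source) {
        switch (source) {
            case "flickr":     return new FlickrPhotoFinder();
            case "deviantart": return new DeviantArtPhotoFinder();
            default: throw new IllegalArgumentException("Unknown photo source: " + source);
        }
    }
}
```

Calling code asks MyPhotoController for a PhotoFinder and works only against the interface, which is what keeps the design open for extension but closed for modification.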


Figure 1: UML diagram

After re-developing the project, the quality of the code was greatly improved, and I had a much better understanding of how the SOLID principles work in software development and how design patterns are used.

More to discuss about learning design patterns through coding.

Sometimes people say they don’t have the opportunity or experience to develop large-scale software, and that this is a barrier to practising and understanding design patterns. However, I think even small projects with few functions can be developed using different patterns. For example, when we write a UI using Java Swing, the observer pattern is used whenever we add a button and register it with an action listener, as in the small example below.
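A minimal, self-contained illustration of that point (the button label and printed message are arbitrary):

```java
import javax.swing.JButton;
import javax.swing.JFrame;

public class ObserverExample {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Observer pattern in Swing");
        JButton button = new JButton("Collect gem");

        // The action listener is an observer: the button (the subject) notifies it on every click.
        button.addActionListener(event ->
                System.out.println("Button clicked: " + event.getActionCommand()));

        frame.add(button);
        frame.setSize(240, 120);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```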

Another thing I want to mention is that we should not change our design to fit a certain pattern just because we want to use that pattern. Instead, we use patterns because, after taking the SOLID principles into consideration, we find that our design satisfies certain patterns. In fact, the principles are easy to remember, and for beginners with little experience it is helpful to think through all of them at the design stage of a project.

Back to the discussion of effective ways of learning design patterns.

Besides the coding practice the author mentioned in the article, I would like to talk about another effective way for beginners to learn design patterns, or at least one that helped me: using real-life examples.

Examples are always useful for understanding concepts, and I think real-life examples are more helpful than programming examples. For instance, when we studied the singleton pattern in the lecture, the example given was that sometimes data is required to be held consistently in a single version [2]. This is easy to understand, but not intuitive enough. I then referred to the Wikipedia page [3], which mentions several ways to implement the singleton pattern and an example based on the Java Abstract Window Toolkit (AWT). For learners who are not familiar with AWT, this is not easy to understand either.

Don’t we have other good examples? I thought of one while learning this pattern: the president of the USA. There is only one person in that position (at least at any one time). In this example, the president is the single instance, and a getPresident() method ensures that only one president can be instantiated. This example is vivid, easy to understand and helpful for grasping the singleton pattern. In fact, I tried to learn all the patterns in this way and found it really interesting and helpful.
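A minimal Java sketch of this example; everything beyond the getPresident() method mentioned above is my own illustrative choice:

```java
public final class President {

    // The single, lazily created instance.
    private static President instance;

    private final String name;

    // A private constructor prevents anyone else from creating a President.
    private President(String name) {
        this.name = name;
    }

    // Global access point: always returns the same instance.
    public static synchronized President getPresident() {
        if (instance == null) {
            instance = new President("Current office holder");
        }
        return instance;
    }

    public String getName() {
        return name;
    }
}
```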


Figure 2: Singleton pattern

Conclusion:

Based on a similar experience of learning design patterns by coding, I definitely agree with the author that coding is an essential part of learning these patterns. Besides this, real-life examples of the patterns are also helpful for understanding them.

[1] http://blog.inf.ed.ac.uk/sapm/2014/02/14/design-patterns-are-they-so-difficult-to-understand/

[2] http://www.inf.ed.ac.uk/teaching/courses/seoc/2013_2014/schedule/patterns-ho.pdf

[3] http://en.wikipedia.org/wiki/Singleton_pattern

[4] https://www.flickr.com/

[5] http://www.deviantart.com/

Response article: “How to make Software succeed”

Introduction

This is a response to “How to make Software succeed” post by s1263235 [1].

The author initially presents a few cases of software failure, ranging from complete failure, with the entire project scrapped and the service discontinued (Google Buzz), to a project going over budget with delayed delivery (Universal Credit). The author then describes some issues that can lead to the failure of a software project and expresses their view of possible ways to resolve these issues to prevent or reduce the chance of software failure.

The three main criteria on which a software project is often judged in terms of its degree of success are: whether the project was delivered on schedule; whether development costs stayed within budget; and whether the project meets the user’s needs in terms of scope and quality. My opinion is that when any one of these criteria is not met, the software should be considered to have failed.

In this response post, I will offer my opinions on some of the main reasons for software failure listed by the author and discuss other potential problems that lead to the failure of a software project.

Project delivered late

The author stated that late delivery is one reason for the failure of a software project, and that possible ways to prevent this include careful design of the project, giving up on ideas which are infeasible to implement, and following a reasonable schedule. I agree; as I stated above, late delivery does contribute to the failure of a project. However, I would like to elaborate on the point of “following a reasonable schedule”. In large-scale software development, the schedule is often set by the project management team or even by the stakeholders themselves. They usually do not know the details of the software project and may set an unreasonable deadline which makes delivering the project on time difficult. A way to solve this problem is to provide adequate training to the project management team so that they understand the software project and are able to plan carefully at each stage to draw up a more realistic schedule.

 Project ran over budget

I agree with the author’s point that a project running over budget is another case leading to software failure. The author states that estimating the budget that should be allocated to a project is a difficult task. To expand on this: estimating the size and effort of a project is very difficult. Estimates made at the early stages (when they are actually useful) are inaccurate, whereas estimates made near the end of the project are fairly useless. This makes it hard to set an accurate budget in relation to the project’s size and effort, and the overall result of this inaccurate estimation is that the project runs over budget.

There are some additional points that help reduce the chance of a project running over budget [2]:

  • Limit or reduce the scope of the project: we should concentrate on the main requirements of the project and limit the focus on adding extra features or functionality. However, this requires careful planning, as reducing the scope of the project may not satisfy the user’s requirements.

  • Regular budget forecasting: having a budget plan before the start of the project and reviewing it regularly during the project reduces the chance of running over budget. If these budget reviews show that funds are constrained, we may have to drop some functionality and reduce the scope of the project.

  • Resource allocation: when the budget is constrained, we could consider allocating resources to a part of the project that is less resource intensive.

Poor communication

The author also points out that poor communication between the customers and the development team, as well as among the developers within the development team [3], leads to project failure. I agree with this point, and in my opinion it is crucial that the developers themselves are clear about their role in the project; they have to work in unity towards the goal of meeting the project’s initial requirements. Good communication allows them to resolve misunderstandings and ambiguities in technical requirements, as well as to identify other problems along the way.

An example of a practice that aids communication between developers is pair programming in the Extreme Programming (XP) methodology. This practice allows developers to work together and understand problems and specifications better. It is also helpful when a new member joins the development team, as they get to know the project details when these are explained by a more experienced developer.

Frequent communication between customers and developers is also important in order to capture requirements early in the project lifecycle. In the XP methodology, the on-site customer practice means a customer is always with the development team. This means the development team can be notified of any changes to the requirements immediately and plan accordingly. The customer can write user stories to give developers a better understanding of the functional requirements.

 

Other possible reasons

I will now provide a few causes of project failure in addition to those given by the author [4].

Inability to cope with a project’s complexity

Development teams that target new technologies or a specific industry are often troubled by a project’s complexity, mainly due to inadequate knowledge of the field. The developers may not have any experience developing software for that particular area, and the customers may not know exactly what they want from the software, which results in incomplete requirements and specifications. This can lead to the failure of the project.

A possible way to mitigate this issue is to draw up a contingency plan that allows for possible delays in the project and increases the budget allocated to it should it require additional resources. However, this only helps in the short term to compensate for additional costs and delays; it will not prevent catastrophic failures.

Use of third-party software components

This problem occurs when a development team uses third-party software components as part of the project’s architecture. Such components may be untrusted, not fully tested, or of poor code quality. Using them may introduce security vulnerabilities and bugs into our system, and poor code quality often means developers spend a large amount of time understanding the components rather than writing their own code. Maintenance is also a problem: when a third-party component is updated, all the components in our system that depend on it may have to be updated as well.

Conclusion

In this response article, I have highlighted some of the reasons for software failure discussed by the author, provided my opinions on those issues and added a few other causes of failure. Overall, I agree with the author’s main point that software failure is very common in large-scale projects. However, careful planning and following the points described in this article can greatly reduce the chance of software failure.

 

References:

[1] http://blog.inf.ed.ac.uk/sapm/2014/02/14/how-to-make-software-succeed/

[2] http://project-management.com/what-to-do-if-your-project-runs-over-budget-and-how-to-prevent-it/

[3] http://blog.azoft.com/communication-within-software-development-company/

[4] http://blog.azoft.com/preventing-software-development-project-failure/


Response to Article: “Architectural patterns for Mobile Application Development” by s1014475

This discussion is a response to the article “Architectural patterns for Mobile Application Development” posted by s1014475.

1. Background

In the article, the author mainly talks about two software design patterns that can be used for mobile application development: MVC (model-view-controller) and layered abstraction. The author also uses some mobile applications he developed as examples, explaining how these patterns can be applied in a real project. This post will discuss some of the points the author raised in his article and give some further thoughts on them; at the end, some additional patterns which may also be suitable are listed.

2. Discussion
At the beginning of his article, the author mentions that mobile applications often require more user interaction than their desktop counterparts. The author thinks the reason for this is that a mobile application is “usually waiting for an action from the user (such as a button click)”, which seems insufficient. Actually, this point is quite important to the main topic of his article: the comparison between desktop and mobile applications directly determines whether a pattern that has been used for desktop applications can also be used for mobile applications. It would be a good idea to have another section or paragraph discussing the similarities and differences between these two kinds of application. Besides, the reason the author gives is somewhat unconvincing, since desktop applications also require a lot of responses from users and often have a more complicated UI structure. So this point should be a similarity rather than a difference, and seeing it that way makes it more natural to bring MVC, which has been widely used for desktop application development, to mobile application development.

The author then introduces the MVC pattern, using his own experience of Android application development as an example, which makes it really easy for readers to understand how the MVC pattern maps onto a project. The author also points out that “the suitability of MVC pattern depends on the context of the application in question”, which is a good point: just as there is no silver bullet for software development in general, whether a pattern is suitable really depends on the situation.

The last point about MVC the author makes is that the pattern fits his project well at a high-level overview, but not on closer inspection. He thinks that defining a widget in the layout file and its corresponding action in the Java file violates the MVC concept, and he gives a possible solution for this. However, the author does not make it clear whether it is impossible to fully apply the MVC pattern in his project or whether it is simply better to use the combined approach, and why; a stripped-down sketch of the separation MVC aims for is given below.
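For readers less familiar with the pattern, here is a minimal, framework-free Java sketch of the MVC separation being discussed; it is purely illustrative and does not correspond to the author’s actual application:

```java
// Model: holds application state and knows nothing about the UI.
class CounterModel {
    private int clicks = 0;
    void increment() { clicks++; }
    int getClicks() { return clicks; }
}

// View: renders the model; here it simply prints to the console.
class CounterView {
    void render(CounterModel model) {
        System.out.println("Button clicked " + model.getClicks() + " time(s)");
    }
}

// Controller: translates user input into model updates and refreshes the view.
class CounterController {
    private final CounterModel model;
    private final CounterView view;

    CounterController(CounterModel model, CounterView view) {
        this.model = model;
        this.view = view;
    }

    void onButtonClicked() {
        model.increment();
        view.render(model);
    }
}

public class MvcSketch {
    public static void main(String[] args) {
        CounterController controller = new CounterController(new CounterModel(), new CounterView());
        controller.onButtonClicked(); // simulated user interaction
    }
}
```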

The author then talks about layered abstraction, giving a good explanation of the Android system architecture, again using his own project as an example. To make the comparison and the discussion complete, it would be better to give some sample applications developed on other systems, such as iOS and WM6. For example, Figure 1 shows the iOS architecture, which is similar to the Android architecture (Figure 2). There are two spaces, user space and kernel space. iOS also has an application layer and a framework layer, exactly as Android does. The Media and Core Services layers provide basic services such as graphics, audio and video to the user, and these have much the same functionality as the library layer in Android. Finally, the Core OS, in kernel space, includes the device drivers; this is the same in both systems.

From the comparison of these two systems, we can see that most mobile systems have a clear architecture with layered abstraction, and it is possible for a developer to apply this pattern in their own applications. The author does not mention it in his article, but the main advantage of this pattern is that it keeps each component of the system independent, which supports large-scale, long-term development and maintenance, and each of the layers can be reused in other projects. However, the main disadvantage is that when the behaviour of one layer changes, it may cause cascading changes in behaviour elsewhere. A small sketch of how layers can be kept independent is given after the figures below.


Figure 1: IOS Architecture[1]


Figure 2: Android architecture[2]
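As a purely illustrative sketch (not tied to the Android or iOS stacks above), the idea of layered abstraction can be expressed in Java by having each layer depend only on an interface to the layer below it:

```java
// Lowest layer: data access, hidden behind an interface.
interface StorageLayer {
    String load(String key);
}

class FileStorage implements StorageLayer {
    public String load(String key) {
        // A real implementation would read from disk or a database.
        return "value-for-" + key;
    }
}

// Middle layer: application logic, depending only on the StorageLayer interface.
class ServiceLayer {
    private final StorageLayer storage;

    ServiceLayer(StorageLayer storage) { this.storage = storage; }

    String describe(String key) {
        return "Item " + key + " = " + storage.load(key);
    }
}

// Top layer: presentation, depending only on the ServiceLayer.
public class UiLayer {
    public static void main(String[] args) {
        ServiceLayer service = new ServiceLayer(new FileStorage());
        System.out.println(service.describe("settings"));
    }
}
```

Because the upper layers see only interfaces, a layer’s implementation can be swapped or reused without touching the layers above, although, as noted above, a change in a layer’s observable behaviour can still ripple upwards.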

Finally, beyond what the author has introduced, many other patterns could also be used for mobile application development, such as Command, Flyweight, Abstract Factory, Chain of Responsibility and Adapter. Mobile applications have become a more and more important area of software engineering, and applying appropriate patterns while developing them makes the work more efficient and effective.
References
1. http://developer.bada.com/article/The-Basic-Architecture-and-UI-comparisons-between-bada-and-iOS
2. http://elinux.org/Android_Architecture

“Leave security to the professionals”? We are the professionals.

This discussion is a response to Euan Maciver’s post, “Are We Secure In Our Software Life Cycle?”.

In his article, Maciver deals with the (un)suitability of approaches to ensuring software security. Gilliam, Wolfe, Sherif and Bishop (2003) present the “Software Security Assessment Instrument”: a method of incorporating security checklists and assessment tools into the development lifecycle [1]. Maciver disagrees with Gilliam et al., examining why their methodology is unsuitable, incompatible and inefficient given contemporary software development practices. He suggests that developers should focus not on security but on implementing functionality, and that external security experts should instead be brought in to carry out this work. A counter-argument will be made in this post, exploring why security shouldn’t be left to external specialists and why Gilliam et al.’s proposal is, in actuality, a sensible one.

Security is the responsibility of the developer
Maciver’s argument centres on the idea that, as software developers, we shouldn’t be the ones responsible for ensuring security is built into the code we work on. His solution is to give developers the freedom to write code without the worry of security vulnerabilities, and instead to pass this on to subcontracted professionals who will perform ‘audits’ of the code. Their role, he explains, is to ‘probe’ the software and to relay feedback and results to the development team.

The problem here, however, is that it will lead to developers developing without security in mind. Indeed, Maciver speaks from experience, writing,

“I would be fearful that every line I wrote was incorrect as I hadn’t dealt with secure programming before, whereas, in reality, I was much more relaxed and able to program to the best of my ability”

This illustrates the idea that, as programmers distance themselves from the very concept of programming securely, their code will invariably become less and less secure. Ultimately, it’ll become a massive task to perform a security audit – even for a specialist firm – and the feedback received from these specialists may necessitate such an overwhelming rewrite as to write off the project.

Programming securely is a necessity, and yes, third-party auditing firms are a great resource for aiding this, but it’s not something which should be left to someone else. By doing so, you immediately make yourself vulnerable. Maciver cites the example of working for a large financial institution; given that the competition between such firms is notoriously rife, one would need to be absolutely certain of their auditor’s intentions. Can we absolutely rule out the idea of industrial espionage? Backdoors might be left open, and their existence made known to the wrong people at the wrong time.

This, in itself, raises an interesting concern around the difficulties of responsibility, blame, and economics of a dependence on external security specialists: if a vulnerability is discovered whilst the system is live, who is to blame? The developer who wrote the bad code in the first place? Or the security ‘expert’ who missed it? (Of course, blame shouldn’t matter – fixing it should – but the argument (eg. here, here) of publishing “root cause” within a team is an interesting one.)

So the role of these security professionals needs to be well-defined: are they responsible for just identifying vulnerabilities, or are they the ones to patch as well? If they simply identify issues, as Maciver proposes, then I agree to some extent: this would be a potentially viable model for development, allowing programmers to learn from their mistakes through participation in code reviews. But these code reviews should be with the security professionals themselves, which is a two-fold contradiction of Maciver’s model: it places the responsibility of caring about security on the developer, and it means that the security professionals are going beyond their role of simply identifying issues.

Leaving the security audit to be performed after implementation is somewhat reminiscent of the waterfall model itself; almost as if it’s an additional stage after implementation. This puts an immense pressure on the security specialists: what if there isn’t enough time for a full-and-thorough audit to be performed before software is due to be deployed? Or is security prioritised over the likes of acceptance testing? It seems like the best solution is to consider security throughout development, ergo minimising this risk. But this would necessitate that security considerations are made by the developers themselves…

Security can be built into a developer’s workflow
And it should be the developers who are taking security into consideration. Gilliam’s SSAI encourages the use of checklists (which do, of course, bring their own set of problems) – but even the writing of a checklist gets teams thinking about how they can organise the matter of security.

Given the recent trend towards Agile methodologies, Maciver remarks that he “…can’t see how such a security checklist … could fit into agile development style…”. Just as tools such as Jenkins or Apache Continuum have become popular in continuous integration (CI), various security-based CI tools exist; for example, the OWASP Zed Attack Proxy (ZAP) project. These programs typically provide automated scanners for finding vulnerabilities in applications developed with CI in mind. Thus there are, in fact, ways of incorporating the security instruments that Gilliam et al. propose into a team following Agile principles, as well as into teams that prefer more traditional software processes.

Indeed, by incorporating security into the everyday workflow of a team, this approach will get developers keeping a mental checklist as they develop; ‘will this line of code flag up as a vulnerability on our ZAP system’? For the inexperienced developer, this may take some time to learn – but it’s a necessary skill, and one which can’t be ignored for the sake of implementing functionality. Arguably, security should be considered right from the beginning of a project, during its initial design. This, unfortunately, could not be facilitated through Maciver’s model, with the audit being performed only on an implemented product. By this point, it might be too late to correct fundamental security flaws, leading to hodgepodge patching right from the first deployment.

Ultimately, Maciver’s idea, as presented, is not fundamentally different from the penetrate-and-patch method he criticises. If audits are left until the software receives “major changes” (and, incidentally, how do we define ‘major’? – even the most minor of changes could create huge vulnerabilities), then this is, in itself, a penetrate-and-patch approach. Rather than the users being the ones to find the issues, they’re simply caught by a different group of individuals (with, perhaps, unknown interests..).

Yes, security audits are beneficial – nay, necessary – in large-scale software development, but they should not be used as an excuse for programmers, designers and testers not to be thinking about security at every moment. Otherwise they’ll become complacent. And complacency breeds mediocrity.

And who wants to be mediocre?

Response Article: In response to “Startups and Development Methodologies” by s0969755

# Introduction

This article is a response to “Startups and Development Methodologies” by s0969755 [1], which explores aspects of software development in early-stage technology start-up companies. The article discusses the challenges, arising from the limited resources available to the company, that small start-up organisations face when attempting to implement best-practice methodologies or software engineering techniques, drawing on personal experience and anecdotal data gathered from examining the large, successful start-ups Google and Facebook.

The author presents the idea that start-ups which eschew a development methodology (or adopt an unstructured, laissez-faire approach to development) are likely to suffer from a lack of discipline or rigour in software engineering. The author suggests that such organisations may be prone to making ad-hoc additions or modifications to their software without proper regard to the sustainability of the development, and that this may ultimately undermine the goals of the organisation. The author concludes that the adoption of Agile processes (with some pragmatic exceptions, such as working-week restrictions) may cause the organisation’s software development to incur some short-term costs, but that in the long term it will lead to more sustainable engineering practices.

# I Agree With Most of the Article

I broadly agree with most of the ideas presented in the article, and think that some of the concepts explored would benefit from additional context about the role of software engineering in a technology-based start-up environment. Drawing on the ideas presented in the Lean Startup [2] methodology, I will consider what the ultimate purpose of a start-up is and how, by working backwards from this, best-practice solutions can be derived. In this response I will expand on the ideas presented by the original author, augmenting them by examining software development in the specific context of what separates a start-up from a regular software project, and justifying why Agile and Lean practices are appropriate given these contextual constraints.

Specifically, I intend to examine the increased emphasis on gathering metrics and implementing analytics support in software, the role these metrics play in validated learning (which may lead to pivots), and consequently how to develop sustainably when aiming at a moving target. Additionally, I will briefly argue that a start-up need not diverge from established Agile practices such as the 40-hour week merely as a reaction to its constrained resources; it should instead ruthlessly focus on how to meet its current needs in the most effective (i.e., globally efficient) manner possible. Like the author of the original article, I will conclude that automated testing, continuous deployment and a disciplined approach to development are crucial to give a technology-based start-up the best chance of success.

# What Is A Start-Up?

A start-up, as defined by Steve Blank [3], is a “temporary organisation designed to search for a repeatable and scalable business model”. This is a useful definition as it illustrates that a start-up is not simply a small version of a large, established company; it is instead a transient state an organisation goes through when attempting to determine what it is going to do (i.e., what value or service it will provide and monetize). The Lean Startup is a methodology advocated by Eric Ries for helping a start-up organisation to reduce risk by increasing the rate at which the search for a business model can progress, emphasising the principles of rapid iteration cycles, minimum viable products and “validated learning”.

For technology companies, this will typically result in engagement with potential customers in the marketplace, which will lead to the formation of hypotheses about the problems these customers face and what viable solutions may be. The organisation will then engage in rapid development of prototypes designed to prove or disprove each hypothesis. Depending on the result, the hypothesis may be reformulated or refined (or discarded entirely) to optimise the product for different categories of customer engagement. These categories may include activation (did the customer use the product long enough to sign up for an account, or some similar definition?), retention (did the customer come back to the product the next hour/day/week?), virality (did the customer directly or indirectly refer another customer?) or monetization (did the customer pay us?). Depending on the performance of the product on these key metrics, the organisation may choose to persist (further develop the original hypothesis and product) or pivot (change the initial hypothesis in response to lessons learned). The decision to pivot may necessarily result in substantial changes to the implementation of the software.

# Special Requirements For Software Development In A Start-Up

The author of the original article describes the requirement for short release cycles and the frequent phenomenon of changing requirements, but does not explain in detail what the purpose of short release cycles is or why rapid requirement changes occur (or the technical implications of both). This section considers in more detail the special requirements placed on the software engineering team in a technology start-up.

It follows from the previous section that the software in a start-up must be developed with a focus on implementing mechanisms for collecting metrics on customer behaviour such that conclusions can be drawn on the hypothesis. The software should ideally be delivered to customers in a format such that it can be rapidly modified and redeployed in response to lessons learned from the metrics gathered. The software should be engineered to be flexible and malleable, such that in the highly likely event of software changes necessitated by either a decision to pivot or persist, the software architecture does not present an unreasonable burden to change.

The implication of these constraints is that the robustness of both the deployment infrastructure and the metrics-gathering software is critical. The longer the cycle of forming a hypothesis, implementing the software, gathering metrics from customers, and analysing the metrics to draw conclusions on the hypothesis, the riskier it is for the start-up. It is then paramount that the deployment infrastructure should not present any unnecessary delay in delivering new iterations of the product to customers. The ideal platform to satisfy this requirement is the web, as new versions of the application can be deployed instantaneously. Rapid deployment of mobile applications is also possible, although with some delays potentially imposed by the vendor (such as the iOS App Store acceptance process). A worst-case scenario for applying the Lean methodology would be where there is a natural barrier to rapid development cycles, such as the software being delivered on physical media or an embedded device, due to the inherently longer iteration cycles.

Irrespective of the target platform or environment, the method of delivery must be robust and reliable due to the emphasis on short release cycles; this implies a hard requirement for solid engineering in the deployment process, and resources expended here (in terms of automated testing, pair programming, etc.) to ensure a solid foundation will likely be repaid with interest over the lifetime of the start-up.

In addition to a solid deployment infrastructure, it is critical that the metrics-gathering mechanisms be reliable. The gathering of metrics from the usage of the product forms a crucial component of the hypothesise-build-learn iteration cycle, which is the backbone of the Lean methodology. If the mechanism for collecting metrics is not reliable, the team cannot have faith in the lessons they have learned from an implementation of a product iteration. As with the deployment infrastructure, this suggests placing a high priority on expending the resources required to apply software engineering best practices to the metrics systems. As well as gathering metrics reliably, the organisation must be able to analyse and process them in order to come to the correct conclusion to inform the hypothesis for the next product iteration. This calls for investing resources in the development of robust tooling for metrics analysis.

This section has focused on constraints that face start-up organisations and that do not necessarily affect established organisations as critically. However, start-up organisations are typically resource-constrained due to a lack of human resources and funding. It is therefore important to balance these additional requirements, needed to deploy and learn effectively from rapid iteration cycles, against the start-up’s resource scarcity; the next section considers how a start-up can optimise its development given these constraints.

# Organisational And Methodological Considerations For Resource Management

A point in the original article with which I do not necessarily agree is the contention that, as a natural consequence of limited resources, a start-up should discard established Agile practices such as pair programming and working-week hour limitations. These principles exist because they are considered beneficial to producing software in a sustainable manner, and should not be hastily surrendered if possible. This section considers some methodological measures a start-up can take to make the best use of its resources.

As highlighted in the previous section, a start-up depends on its ability to iterate and learn rapidly; these are achieved by a robust deployment infrastructure and by metrics-gathering and analysis tools. These requirements are potentially expensive to implement and maintain, and typically do not form part of the value proposition of the start-up (assuming the start-up does not, upon recognition of the sophistication of its internal tooling, pivot into becoming a deployment or metrics solutions provider!). In recognition of this, various third-party tooling providers exist to fill this niche. Platform as a Service (PaaS) companies such as Heroku can simplify deployment for web-based start-ups, freeing them of the requirement to manage deployment nuances or maintain infrastructure, and metrics services such as Google Analytics, Kontagent, Flurry, etc., can remove the burden of metrics gathering from the start-up, freeing its resources to focus on the problem at hand.

When testing a hypothesis, some system implementation may need to be written. It may seem obvious that only the minimum solution which allows the hypothesis to be tested should be written, but in the heat of the moment it can be tempting to build a fully-featured solution which elegantly and generally solves the problem but may ultimately be wasteful if the hypothesis is proved incorrect. By ruthlessly focusing on implementing the minimal solution required to prove or disprove some hypothesis about the product, resources can be channelled most effectively. For example, in a web or mobile start-up, someone may hypothesise that the addition of a feature would increase user engagement with the product. To test this, the feature could be implemented and the resulting user engagement measured. However, a more resource-efficient way to test the hypothesis may be to determine some proxy for user engagement and test for that. A link to the new feature could be added which does not lead to the feature itself but instead allows the user to sign up to be informed when the feature is added. By measuring the number of clicks and sign-ups, the potential engagement of the feature can be estimated and the hypothesis tested at low cost. If the feature appears to be engaging, it may then be implemented; if not, the time saved may be significant. A small sketch of the instrumentation such a test needs is shown below.
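As a purely illustrative sketch (the event names and the in-memory counter are my own; a real product would send these events to an analytics service rather than keep them in memory), the instrumentation for such a test can be very small:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Minimal in-memory event counter standing in for a real analytics backend.
public class FakeDoorMetrics {

    private final Map<String, AtomicLong> counts = new ConcurrentHashMap<>();

    // Record one occurrence of a named event, e.g. a click on the teaser link.
    public void record(String event) {
        counts.computeIfAbsent(event, k -> new AtomicLong()).incrementAndGet();
    }

    public long count(String event) {
        AtomicLong c = counts.get(event);
        return c == null ? 0 : c.get();
    }

    public static void main(String[] args) {
        FakeDoorMetrics metrics = new FakeDoorMetrics();

        // The UI would call these when the user clicks the teaser link or signs up for updates.
        metrics.record("new_feature_link_clicked");
        metrics.record("new_feature_link_clicked");
        metrics.record("new_feature_signup");

        // The ratio of sign-ups to clicks is a cheap proxy for how engaging the feature would be.
        System.out.println("Clicks: " + metrics.count("new_feature_link_clicked")
                + ", sign-ups: " + metrics.count("new_feature_signup"));
    }
}
```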

In addition to writing as few features as possible to validate a hypothesis, other constraints such as targeting a single platform may be appropriate. For example, a mobile application may elect to target only iOS rather than both iOS and Android; this is a valid strategy, as the purpose of a start-up is to search for a scalable business model. Scaling the implementation to multiple platforms should not be attempted until the business case for the product is validated.

When developing features, it may be beneficial to use as high-level a language as possible (within any established performance constraints). This will likely allow the most rapid iteration, which is crucial for the start-up. Even considering performance concerns such as scalability, it may still be better to develop in a high-level language; again, this is because the start-up is a mechanism for searching for a scalable business model, not a scalable technical solution. When technical scaling is required, that is a very strong indication that the start-up has validated its business model! As the author of the original article observes, this may have maintenance implications for the software team once the company is established, such as Facebook's problems scaling PHP and Twitter's well-publicised switch to the JVM after proving its business model on Ruby [5].

By taking advantage of existing tooling where available to make the best use of limited development resources, the start-up software engineering team will iteratively build towards a complete implementation of a solution. As the author of the original article suggests, this may result in an accumulation of partial implementations or hacked-in features. It is important to keep on top of this technical debt as it accumulates. This can be achieved with standard refactoring techniques [4], relying on automated tests to ensure that the implementation remains functional as it is refactored; this is especially important if the implementation is written in a high-level dynamic language amenable to rapid prototyping.
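The sketch below illustrates the idea with a hypothetical pricing function: a small characterisation test pins down the current behaviour so that a hacked-in implementation can be restructured with confidence. The function, values and test names are assumptions for illustration only, not from the original article.

```python
# Minimal sketch: characterisation tests guarding a refactor (pytest style).
# price_with_discount is a hypothetical hacked-in feature being cleaned up.

def price_with_discount(amount, is_trial_user):
    # Original quick-and-dirty implementation accumulated during iteration.
    if is_trial_user:
        return round(amount - amount * 0.2, 2)
    return round(amount, 2)

def test_trial_users_get_twenty_percent_off():
    assert price_with_discount(100.0, is_trial_user=True) == 80.0

def test_regular_users_pay_full_price():
    assert price_with_discount(59.99, is_trial_user=False) == 59.99

# With these tests in place, the body of price_with_discount can be
# restructured (e.g. extracted into a pricing module) and the tests rerun
# to confirm the behaviour is unchanged.
```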

By leveraging existing tools and solutions where possible, implementing only the minimum set of features necessary to prove or disprove a hypothesis, and selecting a language and target environment amenable to rapid iteration, a start-up team ensures that the cost of development is minimised. It can then invest the resources it has in established Agile best practices such as pair programming and the 40-hour week; this has further benefits, such as building a team of generalists with shared ownership of the software and promoting sustainable development.

# Conclusion

In this article I have presented various ideas and concepts which I feel augment the discussion in “Startups and Development Methodologies” by s0969755 [1]. By considering the purpose of a start-up organisation, I have examined the specific requirements that differentiate a start-up from an ordinary organisation; a technical start-up must equip itself with the tools to experiment, iterate and learn rapidly. I have explored the strategies by which a start-up can make the most effective use of its limited resources, and in doing so disagreed with the author of the original article on the specific contention that it is necessary to discard Agile best practices such as the 40-hour working week. Instead, by adopting a methodology such as Lean [2], which advocates a ruthless and disciplined focus on reducing risk through rapid iteration, and by exploiting available tools and existing solutions, a start-up can optimise the use of its constrained resources.

# References

[1] s0969755. “Startups and Development Methodologies”, SAPM Blog. URL: http://blog.inf.ed.ac.uk/sapm/2014/02/14/startups-and-development-methodologies/ (Last checked 22/2/2014)

[2] Ries, Eric. The lean startup: How today’s entrepreneurs use continuous innovation to create radically successful businesses. Random House LLC, 2011.

[3] Blank, Steve. “Search versus execute.” URL: http://steveblank.com/2012/03/05/search-versus-execute/ (Last checked 22/2/2014)

[4] Fowler, Martin. “Refactoring: improving the design of existing code.” Addison-Wesley Professional, 1999.

[5] Venners, Bill. “Twitter on Scala.” URL: https://www.artima.com/scalazine/articles/twitter_on_scala.html (Last checked 22/2/2014)

Agile is Better for Knowledge Management in Large Software Development

Software engineering is a knowledge-intensive activity. Software companies are more likely to beat their competitors by creating novel ideas than by simply using large amounts of capital to achieve economies of scale. Because of this, these firms depend proportionally more on intangible assets, such as the skills of their workers, than on tangible assets like offices or machinery. Software companies achieve competitive advantage by leveraging their unique resources, one of the most important being their intellectual capital. Those that know more, for instance those with more experience in using software design patterns to solve performance issues, are able to create products of better quality and with a better return on investment. As a result, knowledge management has become an important consideration for large businesses, and 80% of large corporations now have knowledge management initiatives [4]. In principle it may seem obvious that the more experience a company has, the lower the likelihood of developers getting stuck and repeating past errors, but in reality it is not that simple for a large software firm to accumulate its experience and knowledge and achieve sustainable organisational learning.

Why do we need knowledge management?

Advancements in technology and the demand for software require software companies to improve their productivity faster than their resources grow. Companies would ideally want their performance to keep improving from project to project. They would want their workers to add to their own bank of experience the new information gathered during their last development and use it to tackle new tasks more easily. In the current volatile environment, developers come and go, and their knowledge comes and goes with them. When new employees are hired, they are not able to use the experience of the people they are replacing. The knowledge previous workers used to solve bugs or make design decisions is not accessible. If faced with problems similar to ones the company has solved in the past, new developers may not know how to approach them. With every staff change there is therefore a knowledge drain and a loss of organisational memory, because knowledge is never transferred perfectly from one employee to another.

Knowledge management is a currently topical discipline which aims to diminish this issue. It encourages mechanisms for developers to utilise previous engineers' work, whether that is code or information stored in a knowledge repository on how to approach certain problems, leading to less rework and faster development progress. Implementing a knowledge management system involves introducing not only new technology but also organisational change, such as adjustments to culture or human resources. For instance, because the knowledge repository may not benefit workers now but only a few years from now, they may see contributing to the knowledge pool as a burden that does not directly improve their current situation or performance. They may also feel that the technology used today will be obsolete in the future and see little sense in describing what they learned from it, as future workers may not need that information. This calls for, among other things, cultural change inside the business whereby the leadership strongly supports and encourages workers to contribute to knowledge management initiatives. As much as 50-60% of new knowledge management initiatives still fail because the technological change is not introduced in parallel with the process change [4].

How can knowledge be shared?

Knowledge carried by workers can be divided into two types: explicit and tacit. The former refers to easy-to-articulate information that can be represented as documents, graphs or tables; this could, for instance, be a document presenting a software duration estimation model developed by previous employees and explaining the reasoning behind it. The latter covers the intangible, difficult-to-pin-down know-how of a person – their gut feeling, experiences, perceptions and attitudes; examples include how to deal with a specific customer or a specific design issue. To make the best use of the knowledge being created in the business, it is in the company's interest to establish mechanisms that allow knowledge to flow between team members. For instance, a company could set up an information system used as a repository for explicit knowledge – an organised catalogue of all the evaluations of previous projects, stored in files accessible to workers at any point. Through these repositories developers could find out how to approach a new project based on what did or did not work in previous ones. They would internalise this explicit knowledge and turn it into tacit knowledge by increasing their know-how. At the end of their project, they could evaluate their work, for instance through a post-mortem analysis, and sustain the acquired knowledge inside the business by publishing the evaluation document in the repository for future colleagues to use and learn from. Knowledge management systems could also grow larger and span not only project evaluations but all the files used in a project, which could make them either potential knowledge gold mines filled with interesting past learning, or collections of unusable, obsolete files that work against the goal of achieving competitive advantage through improved decision making.
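Purely as an illustrative sketch (the schema and search mechanism are assumptions, not drawn from any of the systems cited later), such a repository of explicit knowledge could be as simple as structured post-mortem records with keyword search:

```python
# Illustrative sketch of an explicit-knowledge repository: structured
# post-mortem entries plus a naive keyword search. Fields are assumptions.
from dataclasses import dataclass, field

@dataclass
class PostMortem:
    project: str
    year: int
    what_worked: str
    what_failed: str
    tags: list = field(default_factory=list)

class KnowledgeRepository:
    def __init__(self):
        self._entries = []

    def publish(self, entry: PostMortem):
        self._entries.append(entry)

    def search(self, keyword: str):
        keyword = keyword.lower()
        return [e for e in self._entries
                if keyword in (e.what_worked + " " + e.what_failed).lower()
                or keyword in [t.lower() for t in e.tags]]

# Example usage:
repo = KnowledgeRepository()
repo.publish(PostMortem("billing-rewrite", 2013,
                        what_worked="pair programming on the core ledger",
                        what_failed="estimation of data migration effort",
                        tags=["estimation", "migration"]))
print([e.project for e in repo.search("estimation")])
```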

Would this experience-database scenario be viable in real life? In many cases it is not.

Knowledge management in traditional software development methodologies

Software methodologies can be divided into traditional approaches, which follow a waterfall-like model, and agile approaches. Traditional approaches divide development into a series of independent steps: they start by eliciting requirements for the product, then analyse them, negotiate them with the stakeholders, clarify them, implement them, test them and deliver the finished product to the customer. The development team in a traditional approach is divided into sections, each with specialised capabilities, which work on their own responsibilities and do not participate directly in the work of the other groups. For instance, when doing requirements engineering at the beginning of the process, the team allocated to this step constructs documents listing all the use cases the system must support and then passes them to the development team, which translates them into code. The requirements engineers do not step into coding and, vice versa, the programmers do not interfere with eliciting the product features. Development that uses traditional methodologies is lengthy, as it allocates a lot of effort to pre-production before any coding starts. Structured approaches allow more space to analyse each step and make educated decisions about how best to approach it, since these decisions are difficult to amend later: the process cannot easily iterate back to previous steps. They allow time to retrieve previously generated knowledge from the repositories and use it to improve the current project's decision making, and therefore its performance. They are also able to introduce structured and systematic methods of creating knowledge artefacts and sharing them with the organisation.

Traditional methodologies may be more suitable for large software development. Large products are complex, and one design decision may have proportionally more impact on the future progress of the application than if the system were smaller in scale. The time workers spend analysing previous knowledge, and the firm's expenditure on developing and maintaining knowledge repositories, may be more justifiable, as it may result in better decision making and fewer errors that would harm the return on investment in the product. If not managed well, however, the process can become inefficient. If all the employees – say two hundred – are required to explicitly state what they have learned and contribute that knowledge to the repository, the result may be a database filled with irrelevant information which is difficult to browse and of little help to future decision making. A low-quality repository stops knowledge from being retrieved, and therefore stops workers from contributing to it, as they feel it simply does not make sense to spend time on an unusable information system.

Examples of knowledge management in the industry

The large software development company Infosys has for many years pushed an approach of storing all the knowledge gained by employees in an organised repository monitored for quality. Initially, after the introduction of the knowledge management information system, the company did not see much effort from employees to share their experience and evaluate their work. In its first year the system saw only 5% of developers contributing to the knowledge pool, and even fewer using it to retrieve information [2]. Infosys realised that voluntary contributions to the knowledge management system would not work and that cultural change and an incentive mechanism were necessary for the knowledge pool to grow. It introduced a gamified system whereby developers could gain virtual currency for sharing their experiences, which could then be exchanged for real products or money. This attracted workers to contribute, with 20% of them sharing information [2]. Unsurprisingly, however, after several months the company realised that the knowledge inside its database was of low quality and revised the incentive scheme, which had a negative impact on the amount of knowledge shared. Infosys still runs a common knowledge repository today and states that its biggest challenges lie in extracting knowledge from software teams working together across many locations.

DaimlerChrysler, which develops software for its car electronics, established a unit inside the organisation called the Software Experience Center (SEC). The SEC was aimed at investigating how the software development process can best capture the knowledge it creates and codify it for future use and learning. It used the notion of an “experience factory”, whereby a group of people inside the organisation is specifically assigned to project analysis and evaluation, identifying aspects that may improve future projects. These findings are then shared in a common knowledge repository, which ensures the quality of its content. DaimlerChrysler concluded that this approach is viable, but only when managed properly, and cited organisational and human resource challenges as the biggest obstacles to successful knowledge management; for instance, knowledge leaders tend to overestimate workers' motivation to contribute to knowledge initiatives [1].

Both of these companies are large and have established methods that turn workers' experience into explicit knowledge available to the rest of the organisation. Their strict processes allow them to standardise what knowledge, and of what quality, is put into the repositories, which supports better knowledge retrieval in the future. Today's business constraints, however, require firms to develop products rapidly and efficiently, as time and budget are among customers' most prominent expectations. They also put pressure on development firms to accommodate changes throughout a software project, as product requirements may have shifted since the development began. Instead of traditional methodologies, companies therefore often use agile methodologies such as extreme programming to accommodate this dynamic software environment and reduce the risk of developing a product that does not satisfy its stakeholders.

Is agile the answer?

Agile methodologies create organisational cultures that encourage cooperation and learning from each other. They have interpersonal communication at their core, reflected in “Individuals and interactions over processes and tools” in the Agile Manifesto. Companies using agile design their work practices to include group meetings and constant information sharing between developers. These practices help with transferring tacit knowledge – the difficult-to-articulate attitudes and perceptions of workers. Such knowledge can be shared through methods such as pair programming: a worker who has faced a particular design issue in the past can show their partner how best to approach and solve it in the current context. Through pair rotation the entire team comes to share similar knowledge, as partners transfer experience to each other every time they rotate. This increases organisational learning and sustains knowledge in the business: if one worker leaves the company, the information remains with the others. Agile methodologies also diminish the barrier between work processes and knowledge management processes, since in agile, knowledge management happens simultaneously with development.

Transferring tacit knowledge between team members may prove beneficial, but to truly sustain all the knowledge in the business, the company would want to transform workers' tacit knowledge into explicit knowledge available to all employees in the organisation (for example by publishing documents in the previously mentioned repositories). Because of their intensity, however, agile methodologies do not leave much space for sustainable knowledge creation and retrieval. Coding starts from day one, as soon as some initial meetings have taken place, with little room for analysing previous work to identify patterns or approaches that could be reused in the current project. Agile methods are aimed at lowering the product's time-to-market and do not leave much space for workers to articulate what they learn as they work. Also, after the project is finished and developers have time for evaluation, the context-specific nature of the software they developed makes it hard to share ideas and information for future use: they would either have to convert them into an abstract form applicable in many contexts, which takes a lot of time, or share them as they are, which may not be applicable to future projects.

Agile methodologies share knowledge well inside small teams located in a common workspace. They allow only weaker knowledge transfer in distributed software development, for instance when one company has a few agile teams working in separate offices distributed globally. Without a common knowledge repository, workers can transfer both explicit and tacit knowledge only within their own workspace; with colleagues at distributed sites they can share only explicit knowledge, for instance in the form of version control.

Conclusion

In my opinion, knowledge management appears promising only on the surface. In theory it can help a company achieve a unique competitive advantage over its rivals and utilise the intellectual capital of the business. However, as shown above, no approach is ideal for facilitating knowledge sharing in the organisation. A traditional approach to software development may seem better for large-scale projects, as it incorporates space for evaluation, formal decision making and sharing explicit knowledge, which sustainably improves the organisational knowledge pool. The bureaucratic fashion of knowledge sharing in these methodologies may capture all the knowledge created by workers, but that will not necessarily translate into improved performance if the knowledge repositories are badly managed, never used or obsolete. Because of the many challenges to optimal knowledge sharing, company leaders often cannot optimise knowledge sharing but must “satisfice” – satisfy one part of it by sacrificing another. Agile methodologies work better at transferring tacit knowledge between workers, which arguably contributes more to a firm's competitive advantage than explicit knowledge, as tacit knowledge directly improves developers' skills and know-how. Agile may therefore be an answer to knowledge management for large projects: it satisfies tacit knowledge transfer reasonably well while sacrificing risky bureaucratic practices whose output may not yield competitive advantage in the future.

References

[1] Schneider, K., von Hunnius, J.-P., Basili, V.R. 2002. “Experience in Implementing a Learning Software Organization”, IEEE Software, 19(3), pp. 46-49.

[2] Kimble, C. 2013. “What Cost Knowledge Management? The Example of Infosys”, Global Business and Organizational Excellence, 32(3), pp. 6-14.

[3] Schneider, K. 2002. “What to Expect From Software Experience Exploitation”, Journal of Universal Computer Science, 8(6), pp. 570-580.

[4] Rus, I., Lindvall, M. 2002. “Knowledge Management in Software Engineering”, IEEE Software, 19(3), pp. 26-38.

[5] Levy, M., Hazzan, O. 2009. “Knowledge management in practice: The case of agile software development”, ICSE Workshop on Cooperative and Human Aspects on Software Engineering, 2009. CHASE ’09. pp. 60-65.

[6] Dorairaj, S., Noble, J., Malik, P. 2012. “Knowledge Management in Distributed Agile Software Development”, Agile Conference (AGILE) 13-17 Aug. 2012, pp. 64 – 73.

[7] Desouza, K.C. 2003. “Barriers to Effective Use of Knowledge Management Systems in Software Engineering”, Communications of the ACM, 46(1), pp. 99-101.

[8] Bjørnson, F.B., Dingsøyr, T. “A Survey of Perceptions on Knowledge Management Schools in Agile and Traditional Software Development Environments”, Agile Processes in Software Engineering and Extreme Programming, Lecture Notes in Business Information Processing, Vol. 31, pp. 94-103.

Response to article: ‘Developers and testers’ by s1355673

Introduction

This article is a response to ‘Developers and testers: friends or foes? Using fuzzy set theory to decide on the correct ratio of developers to testers in agile teams’ [1] by GAN. Firstly, this article will briefly outline the strengths and limitations of the original article, based on personal opinion. Secondly, it will compare the importance of programmers and test engineers. Thirdly, the critical skills required of test engineers will be explained. Finally, it will set out some basic rules for the software testing process.

 

Strong points and limitations of the original article

Both developers and testers play an indispensable role in large-scale, long-term software development. There is no doubt that, in the real world, conflict and cooperation between developers and testers never stop. In particular, test engineers can help programmers find the risks and bugs hiding behind seemingly perfect code. They can also verify whether the project is heading in the right direction.

However, when serious issues appear, both developers and testers may shirk responsibility. For example, if an error is found during the testing process, the programmers might argue that the test is incorrect, saying: ‘The program works fine on my computer’. The test engineers, on the other hand, will believe the problem lies in the code rather than in their testing process. Therefore, as GAN said, good management of developers and testers can turn these weaknesses into strengths.

In addition, one focus of the original article is ‘how to decide on the correct ratio of developers to testers on an agile team’. Unfortunately, calculating this ratio is extremely difficult, because the size of the company, the complexity of the software being developed and the length of the project all affect the testing process. In fact, the professional capability of the testing group is much more important than the number of testers [2]. The following sections describe the main testing skills.

Finally, GAN said that testers and developers will become one role in the future. This opinion is unusual, but I believe it is correct: developers can pick up testing skills relatively easily owing to their familiarity with the code, and hiring more dedicated testers would significantly increase the project budget.

 

Programmers vs. Test Engineers

In the real world, testing is often treated as low-level work in software development, and most developers have done testing work at some point. As a matter of fact, if the salary stayed the same, hardly any programmer would want to switch to a testing job. Among testers, some would agree to join a coding group, but others would not because of their weak programming skills. Accordingly, many developers look down on testers.

In the ideal situation, test engineers also have excellent programming skills and their work is comfortable and effortless. In reality, however, many of them cannot code, and the testing process is boring and vaguely defined. In particular, testing often lacks clear written requirements. If the testers are confused about something, they have to ask the project manager or a related programmer. Nevertheless, the answers they get are often: ‘You do not need to discern this!’, ‘You do not need to understand that!’, ‘You only need to test it this way!’.

Undoubtedly, this situation can make the testing job useless. Moreover, the project manager and the programmers then have to take on a lot of the testing work themselves: the managers have to check whether the project is heading in the right direction, and they need to teach the testers how to plan useful test cases that meet the requirements and the technical characteristics. Hence, in order to reduce the testing cost and increase testing efficiency, some developers end up being involved in the testing process. In fact, testing plays a driving role in software development: it can correct the direction of the project, expose defects and reduce the risk of rework. That is why testers should have the required professional skills described below.

 

The required skills of test engineers

[Figure 1: Testers’ skills]

Figure 1 [3] indicates the skills that testers should have. In practice, most testers only understand ‘test theory and methods’ and some of the ‘business knowledge’; their grasp of the other skills may be close to zero. That is partly why testers face disdain from programmers and disregard from the company. However, the set of skills listed in the figure may be too demanding: a test engineer who had all of them would be likely to leave the testing job. This leads to the following rules.

 

The basic rules of the software testing process

There are seven main rules that can improve the efficiency and effectiveness of testing.

  • Rule 1: The real difficulty of testing.

Software testing is no less complicated than development. Some testers might insist that testing just means clicking through the completed user interface, and that at most they need to understand a few performance testing tools, such as LoadRunner. In fact, according to the figure above, testers should at least have the ‘essential skills’ and ‘advanced skills’. Without them, testing cannot be performed smoothly.

  • Rule 2: Reject second-hand requirements.

‘Second-hand requirements’ are requirements that testers receive from their manager rather than from the customers. They reduce the accuracy of testing and make it trickier. The way to obtain first-hand requirements directly is to involve all testers in the requirements analysis.

  • Rule 3: Reject joining the project midway.

Many projects only bring in the testing members at a late stage, and these testers might be responsible for testing another project at the same time. Testers should instead be involved in the whole project life cycle; in other words, each project should have at least one dedicated, full-time test engineer.

  • Rule 4: Testers decide whether a defect exists or not.

During the testing process, if a tester finds a defect that the developers do not acknowledge, one of the following situations might occur.

  1. Owing to limited experience and knowledge, the tester is persuaded by the programmer and agrees that the project is perfectly fine.
  2. The project manager makes the final decision. However, most managers used to be developers, so the defect might still not be acknowledged.
  3. The customers make the final decision.
  4. The decision is made by majority rule. Testers are usually in the minority, so the defect will not be accepted.

In fact, the testers should make this decision, because their perspective is closer to the customers', and giving them the responsibility and the power to make the judgement enhances their enthusiasm for the work. Moreover, hiding a defect is equivalent to burying a time bomb in the project.

  • Rule 5: Daily testing

Every tester should install the development tools on their own computer. They need to fetch the latest code every day and build it; this is the groundwork for the final testing. It avoids the unpleasant situation in which all the problems surface just before project delivery [4]. A minimal sketch of such a daily routine is shown after this list of rules.

  • Rule 6: Do not write exhaustive test cases.

Common, simple work does not need documentation; only complex, error-prone and intricate parts require corresponding test cases. Furthermore, a test case description should point out the key testing steps rather than include every step in detail. When conditions are mature, the company may gradually build up a test case library for testers to learn from and reuse.

  • Rule 7: Share resources between testing and development.

If programmers and testers understand each other's work, the project cost can be reduced and development becomes more efficient [5]. Currently, most developers are not testing-conscious, and a great number of test engineers are not familiar with software development. Hence, the project manager should train the coding skills of some outstanding testers and let programmers take on some of the project's testing work. The basic principle, however, is that programmers must not test the modules they developed themselves.
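To make Rule 5 concrete, below is a minimal sketch of the kind of daily fetch-and-build routine it implies, assuming a Git working copy and Python-based build and test commands; the directory name and commands are placeholders, not taken from the original article.

```python
# Minimal sketch of a daily "fetch and build" routine for testers (Rule 5).
# Repository path and commands are illustrative placeholders.
import subprocess
import sys

def run(cmd, cwd):
    print("$", " ".join(cmd))
    return subprocess.run(cmd, cwd=cwd, check=False).returncode

def daily_check(workdir="./product-checkout"):
    # 1. Get the latest developed code.
    if run(["git", "pull"], cwd=workdir) != 0:
        sys.exit("Could not update the working copy")
    # 2. Build / compile it.
    if run(["python", "-m", "compileall", "."], cwd=workdir) != 0:
        sys.exit("Daily build failed -- report it today, not at delivery time")
    # 3. Run the existing automated tests as a smoke check.
    sys.exit(run(["python", "-m", "pytest", "-q"], cwd=workdir))

if __name__ == "__main__":
    daily_check()
```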

 

Conclusion

In summary, testing is not low-skilled work; on the contrary, its importance and difficulty are no less than those of software design. It is one of the most critical parts of project development and helps keep the project on the right course. In addition, testing should not be left to the late stages of the project. Therefore, all testers should improve the skills discussed above. Moreover, project managers should track the project with these testing principles in mind and give test engineers opportunities to improve and the power to make decisions. Finally, programmers need to check their work humbly and carefully; any potential defect should be taken seriously.

 

References

[1] GAN (2014), Developers and testers: friends or foes? Using fuzzy set theory to decide on the correct ratio of developers to testers in agile teams.
[2] Whelan, D. (2009), Agile Testing: The Role Of The Agile Tester.
[3] Tutorialspoint.com, “Software Testing Tutorial“, Simply Easy Learning
[4] Bach, J. (1999), “Risks and Requirements Based Testing“, IEEE Computer Society, pp. 113-114.
[5] Kaner, C., Hendrickson, E. & Brock, J.S. (2001), “Managing the Proportion of Testers to (Other) Developers”, Northwest Software Quality