Response to “Version Control: Important for Individual Projects?”

This article is a response to “Version Control: Important for Individual Projects?”
The aforementioned post suggests that using a VCS is imperative when working in a team but optional, albeit helpful, when working alone.

Upon reading the original article, I found myself agreeing with the majority of points. Yet, I felt that the author did not quite do the positive case justice.

It is for that reason that I have written this response to expand on some overlooked points and offer my own insight.

The summary of the linked article starts by suggesting:

“[Version Control] is used to track and record every aspect of the development of software including who has made changes, why they have done so and to what the changes and additions refer to.”

I feel that this statement is misleading, as it seems to conflate the role of a raw VCS with that of providers, such as GitHub, that offer extra services.

Tracking the ‘why’ behind software development is often better suited to a project management tool such as Basecamp [1] or Trello [2]. Git simply stores a series of file updates (nodes) and an ordering (edges) in a graph. The stages of development when producing a conceptual solution do not necessarily map to a linear progression, so a VCS alone may not provide an adequate record.
In contrast, GitHub [3] provides excellent Issue-Tracking and Pull-Request features that aid in software development.
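To make the graph model mentioned above concrete, here is a throwaway sketch (repository, file and message names are all hypothetical): each commit is a node, and the `parent` field recorded inside a commit object is the edge back to its predecessor.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > notes.txt
git add .; git commit -qm "first snapshot"
echo "v2" > notes.txt
git commit -qam "second snapshot"

# Each commit names its parent(s); that parent pointer is the "edge".
git log --graph --oneline
git cat-file -p HEAD | grep parent      # shows the raw parent edge
```

Note that nothing here records *why* the change was made beyond the free-text commit message, which is exactly the gap that issue trackers fill.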

The summary continues to offer:

“[a VCS’] working copy is the most up-to date version of the software or the ‘head’ but previous history and iterations are saved and can be viewed as needed.”

This describes only a limited use of the full potential of a VCS such as Git. When using an approach such as the Feature Branch Workflow [4], there is rarely a single most-up-to-date branch. Instead, many development branches provide a partial ordering of changes, and disjoint features are advanced concurrently. There may be a release branch that is the most up to date of all integrations, but it would not include changes present in un-merged feature branches.
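A minimal sketch of that situation, with hypothetical branch and file names: two disjoint features advance concurrently, and no single branch contains everything.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "core" > core.txt
git add .; git commit -qm "initial integration commit"
main=$(git symbolic-ref --short HEAD)   # default branch name varies

git checkout -qb feature/login          # first feature branch
echo "login stub" > login.txt
git add .; git commit -qm "work on login"

git checkout -q "$main"
git checkout -qb feature/search         # second, disjoint feature branch
echo "search stub" > search.txt
git add .; git commit -qm "work on search"

# Neither feature branch is "most up to date": each holds commits
# that the other, and the integration branch, lack.
git checkout -q "$main"
git branch --list "feature/*"
```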

Later, in the discussion, the author mentions:

“Merging your edits with someone else’s can be very difficult or they can break the code completely, unfortunately there is no way around this.”

This suggests a scenario where the Feature Branch Workflow may come in handy.

Git’s merge features can usually handle changes within the same file admirably, especially when options such as “patience merging” [5] are enabled.
Any remaining merge conflicts may stem from too many people changing the same function or submodule within a file. This is often a bad sign, as it is unlikely that a single component is critical enough to need many programmers constructing it in isolation. It is far more likely to be a case of “too many cooks” combined with a poor separation of concerns.
Instead, a module that requires continual adjustment by many people can be split into separate components.
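As a quick illustration of how cleanly Git merges disjoint edits to the same file, here is a hypothetical two-branch example; the `-X patience` option selects the patience diff algorithm during the merge.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# A file with two well-separated regions (think: two functions).
printf 'top: alpha\n\n\n\n\nbottom: omega\n' > module.txt
git add .; git commit -qm "base version"
base=$(git symbolic-ref --short HEAD)

git checkout -qb edit-top
printf 'top: ALPHA\n\n\n\n\nbottom: omega\n' > module.txt
git commit -qam "rework top region"

git checkout -q "$base"
git checkout -qb edit-bottom
printf 'top: alpha\n\n\n\n\nbottom: OMEGA\n' > module.txt
git commit -qam "rework bottom region"

# Both branches changed the same file, but in disjoint regions,
# so the merge completes without any conflict.
git merge -q --no-edit -X patience edit-top
cat module.txt
```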

Components can be developed on disjoint Feature Branches and merged into an integration branch when completed. By fixing the interfaces of each component, a single component can interact with one from another branch. This lessens the need for continuous merging of partially-completed implementations.

The original article claims:

“[Using VCS] ensures that everyone is consistently up to date with the latest version of the software. These advantages do not apply for an individual developer.”

I disagree with the suggestion that a sole developer cannot benefit from distributed versioning.
During the course of my Master’s project, I developed low-level software concurrently on a laptop and a PC. The majority of the system code was shared, but features and optimisations were developed independently for each particular device.

Using a VCS, shared code was pushed to the release branch and merged into both feature branches. This task would have been much more challenging without the ability to juggle dependencies via branching.
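A sketch of that two-device arrangement, with hypothetical branch and file names: a shared fix lands on `release` and is merged into a device branch without disturbing its device-specific work.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "shared v1" > shared.txt
git add .; git commit -qm "shared base"
git branch release                      # integration branch for shared code

git checkout -qb laptop                 # device-specific feature branch
echo "laptop tuning" > laptop.txt
git add .; git commit -qm "laptop-only optimisation"

git checkout -q release                 # a shared fix lands on release...
echo "shared v2" > shared.txt
git commit -qam "shared fix"

git checkout -q laptop                  # ...and is merged into the device branch
git merge -q --no-edit release
cat shared.txt
```

The same `git merge release` would be repeated on the second device's branch, keeping both in sync with the shared code.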

Another quote from the original article that I would like to draw attention to is as follows:

“breaking the code [is] no longer such a hassle since you can always revert back to older versions where all the information is present and works.”

I agree that a major benefit of VCS is the ability to rewind in worst-case scenarios. However, I feel uneasy about ‘rewinding’ being advertised as a killer feature rather than as the safety net it provides.

Prior to development branches being merged in, integration tests should be run to ensure that the existing system will not be broken by code addition.

In my opinion, using the ‘rollback’ feature of version control should be avoided in favour of eradicating sloppy development practices.

The author sensibly states:

“Keeping track of who did what can also be quite important when debugging/trying understand what is going on or when creating documentation”

…but then argues that this is not significant when working alone.

I have anecdotal findings to the contrary. I have found the ability to “review history” incredibly useful, both when working in teams and alone. The ability to view a diff over a custom period and see the previous direction of progression has proven invaluable after taking some time away from development. The information it provides eases the act of “getting back into the swing of things”.


“If a third party controls your repository, this can also double up as a back up of your project”

This statement is entirely true. However, leaving your development repository solely under the control of a centralised third party is generally a Bad Idea™.

There have been many cases of service providers, such as GitHub, suffering from DDoS attacks and technical failures [6]. When this happens, you could be left unable to share work, and progress can grind to a halt.
Instead, work can be pushed to personal, distributed repositories in addition to a single service provider.

In the article’s conclusion, the author stated:

“Version control is not completely necessary for the individual programmer since a lot of the reasons why this tool is so useful do not apply in this circumstance, and those that do are not essential”

I disagree that individual programmers should go it alone without Git as a sidekick.

Yes, Git is definitely poorly designed for beginner usage. For example, you might reasonably expect “$ git branch” to let you change between branches, when that is actually the job of the “checkout” command. This contributes to the unwillingness of many new users to learn Git.
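A sketch of the confusion, using a hypothetical throwaway repository: `git branch` creates or lists branches, while `git checkout` is what actually switches.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > file.txt
git add .; git commit -qm "initial commit"
start=$(git symbolic-ref --short HEAD)

git branch experiment                    # creates a branch but does NOT switch to it
current=$(git symbolic-ref --short HEAD)
echo "after 'git branch':   $current"    # still on the original branch

git checkout -q experiment               # checkout is what actually switches
echo "after 'git checkout': $(git symbolic-ref --short HEAD)"
```

Newer versions of Git also offer `git switch`, which separates branch switching from `checkout`’s other duties and is kinder to beginners.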

Yet, the fact that it is so critical for large teams means that it is far more useful to use it in your own time so that you can become accustomed to it. There are plenty of guides available and even some great in-depth learning resources about how Git works [7].

(PS: I’d really recommend reading that last link if you are interested in painlessly discovering how Git really works)

Using Git for personal files in general allows you to have an audit trail, something that you rarely appreciate fully until you need it.
The author stated that they have not yet been saved by VCS on a personal project. I believe that this is akin to stating “Seat-belts aren’t essential as I have never crashed!”

For example, by putting all of my configuration (dot)files into a Git repository, I can easily check which esoteric hack fixed a particular obscure bug.
In addition, you can leverage the power of automation to pinpoint the exact moment you introduced a flaw into a system, often saving a lot of time [8].
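For instance, `git bisect run` can automate exactly this search. The sketch below is entirely hypothetical: the “flaw” is a marker string, and the test command simply greps for it.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Five revisions; a (pretend) flaw sneaks in at revision 4.
for i in 1 2 3 4 5; do
  echo "revision $i" > code.txt
  if [ "$i" -ge 4 ]; then echo "BUG" >> code.txt; fi
  git add .; git commit -qm "revision $i"
done

git bisect start HEAD HEAD~4            # HEAD is bad, revision 1 is known good
# bisect re-runs the command at each step: exit 0 = good, non-zero = bad
git bisect run sh -c '! grep -q BUG code.txt' > /dev/null
culprit=$(git rev-parse refs/bisect/bad)
git bisect reset > /dev/null
git log -1 --format=%s "$culprit"       # names the first bad revision
```

In a real project the test command would be your build or test suite, e.g. `git bisect run make test`.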

In conclusion, I believe that Git is a valuable tool that is well worth the time investment to become familiar with. Although it is technically non-essential, it is a false economy not to use Git whenever possible. It takes so little time to set up that the benefits far outweigh the costs.










Response to article: “Version Control: Important for Individual Projects?”

This is a response to the article “Version Control: Important for Individual Projects?”


Version control is one of the most important tools in the software development process. The original article considers the importance of revision control systems in projects, discussing which advantages of version control apply to individual projects as successfully as they do to team projects.
In this response I will give my view of the problem, discussing the ideas presented by the author, arguing the points with which I agree or disagree, and providing some additional information about useful VCS and related tools based on my own experience.

Advantages of VCS

I strongly agree with the advantages of version control systems described by the author. For instance, tracking changes to source code along with their metadata (commit messages, timestamps and information about the user who changed the code), as well as the ability to revert changes, are undisputable advantages of any version control system for all types of project, and they are explained and represented well in the article. The possibility of sharing code is also mentioned by the author as an advantage for team projects, and I completely agree with that opinion. However, this characteristic relates to hosted repositories rather than to the VCS itself, which is often integrated into different online repository systems.

Team and project sizes

I cannot completely agree with the idea of judging the necessity of version control purely by the number of people in a team. A VCS is a system based on tracking the changes made to documents (not necessarily source code), which means its usefulness depends more on the size of the project than on the number of members in the project’s team.

As a person who always works on his own projects and has knowledge of different software development technologies, languages and tools, I prefer to create all parts of my projects (except, perhaps, the design) by myself. My projects are therefore almost always large, and I know many other programmers with the experience and interest to create complicated applications who also work on large projects individually. I can therefore state that the size of a team does not depend only on the size of the project. Good examples supporting this statement are big projects made by small teams or individuals. For instance, WhatsApp (very famous these days), with a customer base of more than 500,000,000, consists of only 55 employees, while Mojang AB, famous for creating the popular game Minecraft with a customer base of more than 40,000,000, has only 39 employees.

Another argument, which I also base on my development experience, is that at the initial stage one never knows how large a project will become, because many large-scale projects have grown from small initial projects made “just for fun”. I am therefore strongly convinced that one needs to start an individual project with the right set of scalable tools, which can stay relevant and reliable even if the project starts to grow.

Disadvantages of VCS

Furthermore, in the original article the author mentions problems connected with using a version control system in a project, such as:

  • merge conflicts when many people working on the same part of the code want to commit their changes, together with the effort of setting up the system for version control;
  • the difficulty of learning the commands and functions needed to use the system.

I think these problems can only be discussed in terms of a specific VCS application or tool, because the problems mentioned by the author are not general problems of version control.

For example, the branching and merging systems in distributed revision control systems can easily overcome the first problem, and there is a large number of easy-to-install applications, tools and IDE plugins that provide a GUI for the VCS and eliminate the need to use VCS commands (which are, in any case, not so numerous).

VCS tools

I support the author’s idea of researching the most useful VCS solutions for individual projects. In order to contribute to this research, I will give some information about the tools I currently use in my own individual projects. As my main VCS I use Git, because I prefer distributed revision control systems, and as my main client I use the GitHub client, which is free to download. For hosting I use Bitbucket and GitHub, which provide different types of repository (public and private) within their free pricing plans; free pricing plans are especially important for individual developers. Furthermore, I use IntelliJ IDEA as my main IDE, which includes a good set of integrated VCS tools as well as free plugins that provide wider functionality and support for the repository systems mentioned above.


Summarising all of the above, I want to point out that I completely agree with the author’s statements about the advantages of revision control systems and that their use is highly recommended.

However, I think it is inefficient to generalise about the usage of VCS in a project based only on whether it is an individual or a team project.

Response Article: Pair Programming is OVER-RATED

This is a response article to the blog post “Why Pair Programming Is The Best Development Practice?”.


In this blog post I am responding to an article claiming that pair programming is the best development practice! Even though I believe pair programming is a great practice under some conditions, I think it is totally over-rated by the author of that article. I will comment on two things: the way the author delivered the idea, and the idea itself.

1. “Hey! I am awesome”
I totally believe “Pair programming” is a great activity, but you cannot come to people and say: “Hey! This is the best thing in the world”. The article title assumes that pair programming is the best development practice!
In that blog post’s conclusion, the author states: “Pair Programming is the best development practice, because it is fun!”. Well! I enjoy riding rollercoasters, and lots of people do, but this doesn’t make it the best leisure activity! And if I like pizza, that doesn’t make it the best food in the world! YES! I totally agree pairing is fun, but this is in no way a reason to call it the best development practice.
Were I the one convincing people that pair programming is a great activity, I wouldn’t use this claim! There are plenty of reasons that could be used instead, such as:
– “Good fulltime pair programmers consistently produce higher-quality code faster..” – Jim “Big Tiger” Remsik
– “In pairs, progress is faster, we can work longer without losing headway, and quality is higher” –Ron Jeffries

2. I agree…Pair Programming is a great activity!

Pair programming is a practice where two programmers work side by side on the same computer together collaborating on design, algorithm and code.
I believe that it is a useful activity to follow when you are dealing with difficult tasks, having people share the same level of skills and experience. I also agree with the five points mentioned in that blog post for successful pairing.

3. But……

3.1. Pair Programming is NOT suitable for all tasks and situations

Some people prefer to work alone, don’t like to socialise, or are simply more skilled than their partners. Some people claim that pair programming saves time by letting two people work together on the same project; however, in many cases this time can easily be wasted (communicating, talking about ideas, etc.). There are also other disadvantages, which we discussed in the SAPM lecture on “Methodologies” [1], such as geographically separated programmers. Some people can’t program while someone is watching them all the time! Just think about arriving at the company and then waiting for your partner! You can’t do anything until they arrive! What if they are sick?

3.2. Specific statements I don’t agree with…

I will just quote some sentences from that blog post and comment briefly on them.

  • “Guess what – your most programming tasks in a software company will focus on boring and repeatable tasks with very few of them being creative and challenging.”
    What? Really? Who said that?! Most tasks are boring and repeatable?
  • “You probably think about programming as a creative process – sadly this is not relevant for every task.”
    I believe that all coding activities are creative, even the silliest ones! You can turn them into creative tasks.
  • “ Cowboys are of no value anymore.”
    Well, I don’t think cowboy-style software development is good, but I won’t say it is of NO value!
  • “I believe that those stories exist, because people have no idea how to practice pair programming.”
    I also believe that “SOME” of them have no idea how to do it properly, but there are plenty of blog posts by people who tried it correctly and still found it useless and a waste of time.
  • “Pair programming is an important and crucial part of Extreme Programming (XP) methodology. This being said, we cannot use pairing without following rules from XP.”
    Yes! It is really an important part of it, but that doesn’t mean we can’t use it alone. One advantage of agile methods is that you are not obliged to perform all of their activities [4].
  • “Is there an ultimate answer? Yes! There is an ultimate answer to the question “Why Pair Programming Is The Best Development Practice?”.
    If the answer were ULTIMATE, then everybody would follow pairing! There is no ultimate answer, even for the most common problems.

4. Trade-off

In my opinion, you can’t use pair programming in all cases and situations. You have to find what fits you and in which cases you want to adopt it. Some reports [2] suggest using it when mentoring new hires, for risky tasks, or when dealing with a new technology. However, programmers should have a say about this. Code reviews might be another solution for people who don’t feel comfortable with someone watching all the time. John Sextro has worked in pairs with more than 100 developers, and he said [3]: “Pair programming can be tough, even for the best developers, and can be downright daunting if you haven’t done it before, are introverted or unsure of yourself.” He suggested seven habits for highly efficient pair programming: proficiency, communication, self-confidence, self-control, patience, manners, and hygiene [3].


To conclude, pair programming is a useful and great activity, but we can’t claim that it is “the best development practice”. Pairs will learn from each other, help each other find bugs, and produce higher-quality design, algorithms and code, but it requires the pair to have similar skills and a good, respectful relationship with each other. I think the author of the original blog post overrated pair programming by considering it the best development practice.




Response Article: Agile Software Development in China

This is a response article to “Agile Software Development in China” [1] by s1314857.


In the original article the author talks about the benefits of using agile software development (ASD) in China. The article is divided into three main parts: the first provides an overview of ASD methods; the second explains some of their main advantages; and the third, which carries the main idea of the article, is about their application in China. Despite the fact that the author has made some salient points about the benefits of using ASD methods in China, I remain unconvinced that it would provide the improvement in the quality and efficiency of software development that the author claims [1].

I do not dispute the fact that ASD has many advantages in certain situations. What I am trying to say is that the author has looked at the problem too conveniently, considering only the positive side of adopting it in China and overlooking the disadvantages that might arise. In my opinion, when proposing a method both its good and bad outcomes should be taken into account; if they are not, the proposal is unrealistic and too good to be true. Hence, in my response I will discuss the author’s three main sections and point out their strengths and weaknesses according to my knowledge.



What is Agile Software Development?

In this section the author describes the main ideas behind agile software development methods in general. He/she mentions that these methods are designed around changing requirements and development adaptation. But the author also says, within this description, that such methods are more applicable to small-scale teams, so in later statements, when considering their implementation in large-scale systems, he/she should discuss that such methods are at a disadvantage compared to a more plan-driven approach or non-agile methods designed for larger teams [5]. He also fails to mention that agile methodologies produce less documentation than heavyweight ones, which in some situations would be crucial.

The next part of the author’s explanation of ASD is actually taken from [5] and is part of an ongoing debate, which he fails to mention; he presents only the beneficial side of this debate. He/she does not address the criticism of the main values of ASD that he cites, and of the idea that the items on the left are more important than the items on the right. In [8] it is mentioned that this concept is used by hackers to write code irresponsibly, and as an excuse for not writing good documentation or following a plan. In my opinion this might give rise to many problems in large-scale systems: the lack of documentation, or of following a plan, might make ASD detrimental to the whole system. As noted in [8], a hacker’s interpretation of “responding to change over following a plan” is roughly “Great! Now I have a reason to avoid planning and to just code up whatever comes next.”

Another good thing the author might have done would have been to discuss, or at least mention, the different agile methods and each one’s advantages or disadvantages in different situations. In my opinion that would have been beneficial for a reader trying to understand how such methodologies are applicable in different scenarios.

Why do we use Agile Software Development?

In this section the author discusses the advantages of ASD over the waterfall method. He/she gives salient observations about the situations in which an agile method is the better choice (the case of ever-changing requirements, for example), so the idea of the section – to convince us to use ASD – is justified with good examples. Nevertheless, the author is talking about hypothetical situations. If we consider another hypothetical situation, that of a critical system (software for a nuclear reactor, medical systems, etc.) where all the requirements are strict and there is a need for good documentation, then a heavyweight approach would be more applicable.

The author also states five advantages of ASD [1], all of which are well supported by the situations given. Under the second advantage (quality), the author discusses a specific agile method – extreme programming (XP) – but XP is aimed at software-only projects, and large projects tend to be multidisciplinary, so implementing it might be problematic [2]. The fifth advantage the author mentions (an efficient, self-organised team) is more a requirement than an advantage. It is true that such a methodology enhances team communication, but if no cohesion is apparent before its implementation, or if the individuals lack such skills, then the methodology will not work. These cases are not captured by the author.

Agile Software Development in China

In this section the author discusses how agile software development is practised in China nowadays and how it has attracted more interest recently. He/she also states that such methodologies are still at an early-adopter stage, and gives three main reasons why this is so.

As the first reason, the author talks about small-scale companies in China and how they do not implement standard development methods; instead the process depends on the personal style of the leader. He states that this has no management cost and offers more freedom for team members, but I do not see it this way. First, I cannot see how a process depending on a team leader provides freedom for the team members, since the leader might impose a method on everyone, in which case they do not have the freedom the author is talking about. Also, if the manager is competent and makes perfect design decisions then there would not be any management costs, but if his/her decisions are poor then the overhead of such a development method would be quite costly.

The author nevertheless considers these things advantages, and mentions disadvantages such as the code produced in such environments being of unstable quality. He states that this environment is thought to generate a cohesive and self-organised team; if that is the case in such companies, wouldn’t that be a hurdle toward implementing an agile method, since the author stated that this is one of the requirements for it to work? [1] The other points, about the already existing chaotic methods and their disadvantages, are otherwise good, and in such small-scale companies ASD would provide higher efficiency and quality of product. That is, if the software involves changing requirements and factors; otherwise a heavyweight approach would be beneficial too, and the author does not mention that explicitly.

As the second reason for implementing ASD in China, the author talks about resources being the bottleneck of development companies and suggests the use of ASD in order to utilise the existing resources properly. In my opinion, however, this is not a reason to choose an agile methodology. If the author had stated that most of the software in such environments needs to be adaptable because of unstable requirements, then I would agree with the benefits of implementing agile development; but since he/she did not specify the situation in such a manner, why not consider heavyweight methodologies as well?

The third reason the author gives is that the principles of agile software development would be easily accepted in China because of the culture there. Since I am not that familiar with the culture, I cannot disagree and will take this to be true. Nevertheless, there are different agile methodologies, such as XP, Scrum and Crystal, each with different principles and some differences between them, so which one is the author referring to? He/she also states that, since the most popular coding style in China is the cowboy one, ASD would be easy to adopt, as the two are more similar than “other professional coding styles”. What the author fails to grasp is that ASD is not necessarily a coding style: it can be a management style (Scrum) or a hybrid. This would have been a good point if the author had named a specific agile methodology that is similar to “cowboy coding”. Not to mention the fact that the author still appears to be talking about small-scale systems.



The author of the article has given good points for implementing agile software development in China, but I as a reader remain unconvinced by the article alone. He/she does not consider the negative sides of implementing such a methodology, and overall does not appear to be discussing it for large-scale or critical projects (where agile methods are at a disadvantage). Since all methods are controversial [7] and none can be considered applicable in all situations, what the article lacks most is a discussion of when such methods are good to use and where their implementation would be detrimental. That, in my opinion, gives an unrealistic view of the benefits of implementing agile software development, and hence is the reason why I remain unconvinced by the author’s article [1].


[1] Agile Software Development in China. s1314857,  February 14, 2014.

[2] Extreme Programming from a CMM Perspective. M. C. Paulk. IEEE Software, November/December 2001.

[3] Manifesto for Agile Software Development. Various authors, IEEE Software, November/December 2001.

[4] Recovery, Redemption, and Extreme Programming. P. Schuh. IEEE Software, November/December 2001.

[5] Get ready for agile methods, with care. B. Boehm. IEEE Computer 35(1):64-69, 2002.

[6] Management Challenges to Implementing Agile Processes in Traditional Development Organizations. B. Boehm, R. Turner. IEEE Software September/October 2005.

[7] Software Development Methodologies Lecture

[8] S. Rakitin, “Manifesto Elicits Cynicism,” Computer, Dec. 2001, p. 4.

Response to “Agile Methodologies in Large-Scale Projects: A Recipe for Disaster”


This is a response to the article “Agile Methodologies in Large-Scale Projects: A Recipe for Disaster”. In that article the author analyses the agile manifesto to support the claim that the use of agile development techniques in large-scale software development is “infeasible”, and concludes that agile development techniques have no place in the development of large software. I aim to counter the arguments made against agile development and convince you that it should always at least be considered.

Agile Development lacks professionalism……

The author suggests that a lack of documentation can be detrimental to the level of professionalism a project may appear to have. They call for the use of documentation as a means of explaining why a particular project may have failed, validating this point using the failure of large-scale government projects as an example.
Firstly, I would suggest that the use of a government project as an example may be a little inappropriate. I feel that the levels of bureaucracy and public scrutiny involved in such projects make them considerably different from private-sector projects, and as such they should be approached in a different manner (please see my previous article for more detail).

Furthermore, I would put it to the author that a project which fails (by running over budget or over time) but has extensive documentation could also appear unprofessional to clients. The failure of such a project raises the question: “Why spend so much time and money on extensive documentation for a system which doesn’t even work?” Surely, therefore, the practice of delivering working code in small iterations is more satisfying for everyone?

Don’t worry there will be documents

If agile development is used correctly, there should be excellent communication between everyone involved in the project, especially between the development team and the client. So even in the event of a failure there will be a “hierarchy of responsibility”, as all the people involved should know what is going on. Furthermore, the deliverables produced by the short, fast iterations that agile caters for should be sufficient if any investigation into the failure of the project is required.

Contrary to what the author seems to suggest, agile development does not abandon all formal documentation. In fact, documentation is still encouraged in agile methodologies; it is just that documents are more likely to be produced at the end of the development cycle, in order to avoid wasting time and money creating documents which will no doubt need to be altered as the project goes forward [1]. Any documentation produced will be concise and simple, yet detailed enough to include the vital information.

Flexibility is key

While I agree with the notion that for very large-scale projects it is important to have a clear idea of the initial scope of the system, I reject the idea that this should come in the form of a static requirements document. In fact, I would say that for a very large project it is vital that the system specification be changeable. It is simply not realistic to expect the customer to be aware of everything they require from a system at the very beginning of the development cycle. In addition, the large-scale systems which are the subject of the author’s argument often take a number of years to develop, so it is highly likely that the needs of the client will change during the development process. It would be irresponsible for a development team to deliver a product which they know does not meet the needs of the client simply because those needs were not obvious from the beginning. I would say that such projects should be considered failures on the grounds that they are not “fit for purpose”.

Additionally, the author presents the idea that “each intricate detail” of a project should be carefully thought out before any actual implementation is done. I feel that this is highly unrealistic and would lead to a nightmarish development process consisting of never-ending requirements-capture and design phases. It is virtually impossible to create a perfect design of all aspects of a system; no matter what happens, there will always be some parts of any initial design that require alteration (or even a complete rethink). This is especially true for large-scale projects, which can be made up of numerous components. Investing large amounts of time and money into meticulous planning before any implementation is therefore an extremely inefficient practice, as it is almost certain that such plans will change. In contrast, agile development has design and implementation working side by side, resulting in less time wasted on attempting to design the whole system at once.

I agree with the author’s observation that making changes in large-scale projects is not a simple task and that any change made will create a number of subsequent changes. However, I do not feel that these added complications are reason enough to avoid making the changes. Such complications can be avoided if everyone involved in the project communicates appropriately (one of the key concepts of agile)[2].

Similarly, it is suggested that developing software using this static approach protects developers from being overworked by demanding clients who continuously request new features. This may be true in the sense that, at the end of a project, it will be easy to evaluate success or failure against the original requirements. However, it is important to remember that software developers are providing a service which clients are paying for; if a client wants a new feature or a change, they should have the option. If you wish to make changes to an order you placed in a restaurant, the waiter will not deny you as long as you are willing to cover any extra costs. Similarly, the responsibility for any substantial budget or timescale penalties due to changes falls directly on the client.


In conclusion, developing large-scale software projects is an extremely difficult process, and such projects have a history of failure[3]. However, this poor track record is certainly not a reason to abandon agile development techniques. If anything, it is a reason to avoid traditional techniques as much as possible, as they have proven to be problematic. The criticisms of agile development made by the author of “Agile Methodologies in Large-Scale Projects: A Recipe for Disaster” are understandable, but after some thought these concerns can be set aside. Agile development itself may not be perfect, but at least it makes a conscious effort to step towards a world in which project failure rates are significantly lower. For this reason alone it should at least be considered as a methodology for any software project, large or small.



Enemy of the state! Or why OOP is not suited to large-scale software.


This article is written in response to Functional programming in large-scale project development, in which the author highlights some characteristics that make functional programming suitable and/or beneficial for large scale software development.

Whilst I do agree with most of the points made, I think they are not sufficient to convince people to change. Change requires effort, and that effort is justified by one thing: awareness that the status quo is flawed.
This means that one not only has to convince people that the new approach is correct, but also that what they are currently doing is wrong.

Specifically, to motivate the transition from OOP to functional programming, the point to be made is:

Mutable state is dangerous, and should be used very carefully, only when it is a strict necessity.

State: what is it?

The state of a program (or, better, of a process) refers to the values assumed by its variables at a certain time during the execution of the program itself.

Referential transparency

In functional programming, data structures are immutable by default, hence functions cannot modify the state of the program.
Functions can therefore achieve a property called referential transparency: since the state is immutable, they will always produce the same output for a given input.

Of course, this is not true for an object method: as it can modify the state of the program, we cannot guarantee that its return value won’t change accordingly.
In fact, it has to change accordingly, as the whole OOP paradigm is based on sending messages to objects as the only means of interacting with their state.
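To make this concrete, here is a minimal sketch in Python (the names are illustrative, not taken from any real codebase): a pure function is referentially transparent, while a method on an object with hidden mutable state is not.

```python
# A pure function: its output depends only on its input, so it is
# referentially transparent -- square(3) is always 9, anywhere, anytime.
def square(x):
    return x * x

# A stateful object: the method's result depends on hidden mutable
# state, so two identical calls can return different values.
class Counter:
    def __init__(self):
        self.step = 1          # hidden mutable state
        self.value = 0

    def bump(self):
        self.value += self.step
        return self.value

c = Counter()
first = c.bump()   # 1
c.step = 10        # the state changes...
second = c.bump()  # ...and the "same" call now returns 11
```

The two calls to bump() are textually identical, yet return different values: to predict the second result you must know the whole history of the object, which is exactly the reasoning burden described above.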

OOP + Mutable State = Bugs

When referential transparency does not hold, one needs to know two things in order to predict the behaviour of a program: its input, and the state the program is in.
Why is this a problem? Well, the biggest selling point of OOP is that it can manage complexity via encapsulation, specifically by hiding the state of objects, and exposing their behaviour solely via the interface provided by their methods.
However, one does need to know about the state of the program to reason about its execution, and the state is:

  • Hidden, so one cannot rely on the interface alone, but instead has to inspect the implementation.
  • Split across multiple objects, so one has to take into account all the objects that are interacting with each other at any given time.

Of course, this can be managed for small projects, but it’s an approach that scales poorly: when there are thousands of objects involved, most of which were written and engineered by other people, reasoning about the code is a nightmare.

That means that predicting the program’s behaviour becomes infeasible, leading to:

  • More bugs: as one doesn’t know what the code is doing, it is harder to ensure it’s doing it correctly.
  • Very (I mean very) difficult debugging, as the execution depends on the state, which could have been changed by any method of any of the objects involved.

Referential transparency avoids most of these problems and, as an aside, ensures true encapsulation, for the interface provides all the information needed to use a module (or whatever is the preferred means of abstraction in the functional language of choice).

Parallelism made simple

Another issue with mutable state is consistency: if the correct result of an operation depends on the state, then changing it could lead to failure. When multiple threads are executing concurrently and non-deterministically, ensuring consistency manually through the use of locks becomes a really hard task.

On the other hand, if the state is immutable, then consistency is ensured automatically. As clock speeds plateau to make way for multicore architectures, there is no going back: a paradigm that is not naturally oriented towards parallelism and concurrency is doomed to failure.
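As an illustrative sketch (Python, with made-up names): threads that only read immutable data and return new values instead of mutating shared state need no locks, and the result is the same under any thread scheduling.

```python
from concurrent.futures import ThreadPoolExecutor

# Shared, immutable input: a tuple cannot be modified, so any number
# of threads can read it concurrently without locks.
readings = tuple(range(1000))

def normalise(x):
    # A pure function over immutable data: no locking needed.
    return x * 2

# Each worker returns a new value rather than mutating shared state,
# so the result is deterministic regardless of thread scheduling.
with ThreadPoolExecutor(max_workers=4) as pool:
    doubled = list(pool.map(normalise, readings))

total = sum(doubled)  # always 999000
```

Had each thread instead incremented a shared counter in place, the outcome would depend on the interleaving of threads, which is precisely the consistency problem described above.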

I am not going to write further on this topic, as the author already makes a very good case for it in the original article.

Is mutable state evil?

Of course, there are situations in which the use of mutable state is necessary, e.g. IO. Functional programming languages do allow the use of mutable state (via monads, impurity, etc.), but they make sure it is used only when it is really necessary, by enforcing immutability by default.
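Python is not immutable by default, but as a rough sketch of the discipline (the Account type here is invented for illustration), a frozen dataclass can emulate it: "updates" build new values instead of mutating old ones, and accidental mutation is rejected.

```python
from dataclasses import dataclass, replace, FrozenInstanceError

# A frozen dataclass emulates immutability-by-default: fields cannot
# be reassigned after construction.
@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

def deposit(acc, amount):
    # A non-destructive update: returns a new Account, leaves the
    # original untouched.
    return replace(acc, balance=acc.balance + amount)

a = Account("ada", 100)
b = deposit(a, 50)

try:
    a.balance = 0          # mutation is rejected at runtime
except FrozenInstanceError:
    mutation_blocked = True
```

This is only a convention in Python, whereas a functional language makes it the default and forces you to opt in to mutation explicitly.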


One might argue that there’s never been a better time for functional programming than now.
As the author correctly states in the section OOP languages shifting towards functional paradigms, several languages are including features like lambdas, first-class functions, and even list comprehensions in some cases.

However, I am convinced that this trend will only make incremental improvements to the development of large-scale software, as (good) programming languages are not just a bunch of features, but rather a coherent vision of how to write software correctly, and the most important aspect of functional programming, the use of immutable state by default, is still being neglected.

In fact, we have seen that mutable state means a lack of referential transparency, which, when combined with the OOP approach to encapsulation, makes the code harder to reason about, and therefore more bug-prone and harder to debug.

So in conclusion:

Program imperatively when needed, and functionally when possible.[1]


[1] Michael O. Church, Functional programs rarely rot.

P.S. The pun in the title has been used in many articles about functional programming, even though I haven’t used them as an inspiration. What I did use are Rich Hickey’s talks (he is the creator of Clojure, one of my favourite languages); I suggest you check them out.

A Response to: Agile Methodologies in Large-Scale Projects: A Recipe for Disaster

The author of the article argues that valuing the notions on the left-hand side more than those on the right-hand side is infeasible in large-scale software projects, and that traditional development methodologies are more appropriate in such scenarios. I believe that in the context of large-scale software projects both notions are equally important, and sometimes the left notion can contribute towards the successful delivery of the software even more than the right one.

Working Software over Comprehensive Documentation

This agile value from the Agile Manifesto claims that delivering working software should be a priority over delivering well-written documentation; however, the term “working software” has to be clarified. The agile methodology encourages delivering small pieces of working software at set intervals, which might be infeasible for large-scale projects. The author of the article places the focus on the importance of documentation, so I will try to explain how the two notions balance and evaluate the importance of each.

The agile methodology does not encourage creating extensive, detailed documentation during the development process; that is a waste of effort, as such documentation is prone to change because of the dynamics of the process. However, this does not mean that no documentation should be prepared. The author claims that documentation by informal means might be “swallowed up or forgotten”, and that maintenance of a project might be aggravated as a result. I disagree: I believe that a short, concise and informal version that touches upon the key points can be as helpful to developers as documentation produced extensively and with a lot of effort. Furthermore, if I had to weigh the left-hand and right-hand notions, I would say that extensive documentation is worthless if working software is not delivered. In the case of the school sign-off system failure, having working software which has passed unit, system and integration tests is more important than having documentation to facilitate potential investigations; those could be done with much less. Therefore, I disagree with the author’s claim that the left notion, working software, is not as important as the right one, comprehensive documentation. While documentation is essential for the record and for future maintenance, particularly in the case of large-scale projects, I believe that delivering working software should be the highest priority in all circumstances.

Individuals and Interactions over Processes and Tools

Processes and tools are created in order to facilitate the work of people, since it is people who create and need the software. The author claims that when “allowing the individual personalities of developers to encroach upon the project, standards can begin to slip.” I believe that if a professional were to contribute expertise, it could be beneficial for the project. The author claims that innovation is sometimes unnecessary and that code reuse is encouraged; I think this applies not only in large-scale projects. However, if encouraged when appropriate, innovation could bring flavour to a project and distinguish it. Of course, consistency must be ensured by coordinating innovative steps with management. I agree that using established methodologies is safe practice; however, deviating from the processes when necessary would not be detrimental to the project if appropriately scheduled. At the end of the day, it is individuals who decide on the processes and tools for a given project, so their input is paramount to its success.

Customer Collaboration over Contract Negotiation

The author claims that it is essential to have a contract in place outlining as many specifications of the system as possible before software development begins. He does not claim that there must be no communication between the customer and the developers, but that a contract would protect the developers from unexpected and challenging change. I believe that planning in the context of large-scale projects should be done extensively, to the best of the customers’ and developers’ knowledge, and that a contract capturing these agreements should be put in place. However, having a contract might bring a false sense of security to the development team, who might implement uncertainties incorrectly rather than reaching out to the customer. Additionally, large-scale projects take plenty of time to complete, and a change in the requirements over that time is likely. I understand that an iterative delivery approach is extremely challenging in the context of large-scale projects. “Humphrey’s Law” states that users do not know what they want until they see it working. I believe that an effort in that direction will contribute towards success, as many issues might be resolved. Furthermore, I believe that if the risk is mitigated, an attempt to capture customers’ needs not listed in the contract should be made. Excellent software within budget is achieved through collaboration and compromise by both developers and clients.

Responding to Change over Following a Plan

Having a plan to follow certainly brings numerous benefits to the table. The author claims that making changes is costly, and that following a plan allows the project to stay consistent and within budget and facilitates progress measurement. He also claims that in large-scale projects, if a change were to be accommodated, communications could become “bogged-down,” and deviations from the plan might become disastrous. However, I believe that the communication challenge could be handled appropriately by the management. Additionally, industry data shows that 60% of software requirements change over the development process. Clients might thoroughly create a plan, but sometimes they cannot predict every change, and bringing change to a large-scale project is certainly incomparable to altering a small one. Even if a large project were delivered on time and within budget, it would be worthless if it did not satisfy the customers’ needs. I believe that even though responding to change and revisiting the plan might introduce additional expenses, taking action is better than delivering software which does not solve the problem and will eventually end up on the shelf.


Implementing large-scale systems is different from implementing smaller projects, but the software development process faces similar challenges, challenges which agile methodologies have helped overcome. Adopting an agile methodology in a large-scale project might be questionable; however, some of its practices represent good practice in software development. They might be as beneficial as traditional methodologies, and taking them into consideration could bring flavour to the development life cycle.


Sutherland, Jeff. “Agile Principles and Values.”


Response to “Agile Software Development in China”


This article is a response to Agile Software Development in China by s1314857.

In the article, the author presents an overview of what Agile Software Development means and some of its principles together with the challenges firms in China are facing to adopt this methodology.

The main point of this article is to challenge one of the author’s main statements about why Chinese software businesses have not adopted Agile methodologies (“non-large scale companies do not implement standard methods“). I will argue that small companies can easily adopt Agile, and I will provide examples and methods for overcoming the possible challenges. In the first sections I will also touch a little on the structure of the article and suggest some improvements.

First, let’s start with the Introduction

Where is the Introduction? There is no introduction.

While the author starts with a good overview of what Agile development means, I found it hard to discern the structure of the article, struggling to find an introductory paragraph. As a result, I did not know whether it would talk about software companies that use Agile, or perhaps about Agile methodology successfully implemented in China. Only after reading the third section did everything become clearer.

In conclusion, I really think that this article would have benefited from better organization, in particular an introductory paragraph mirroring the conclusion.

Where do the ideas come from?

While some of his arguments are well structured, there is little evidence to support statements like: “Firstly, there are lots of software companies in China, but few of them are large-scale.” or “Some Chinese development teams prefer the cowboy coding, which is a term refers to go-as-you-please and unplanned coding style.”.

Unsupported claims like these bring little value to the content of the article, making it seem to merely throw many ideas into play. Also, I cannot tell whether the arguments come from personal experience or from a specialized source, so the article fails to convince the reader about how Chinese businesses actually work.

In conclusion, I find it quite hard to dispute his statements since I do not really know their source. It should have been specified whether the ideas in the article come from personal experience or from a reliable source, such as a book or a survey of Chinese software companies and the challenges they encounter.

Agile in Small Companies

In this section I will study some of the challenges a small team might encounter when using Agile techniques, and how easily they can be overcome. By looking at the size of the team, the number of projects a team can have in a sprint, and how refactoring is handled, I will try to demonstrate how Agile can be incorporated into the culture of a small team.

Size of the Team

Ever since the launch of this methodology, Agile development has been seen to fit small teams of experts. A team of four or six developers can therefore work perfectly.

The main roles in an Agile team are: the team lead (also called the “Scrum Master”, who is responsible for facilitating the team, obtaining resources and protecting it from problems), the team members (developers, whose work can include modelling, programming, testing, release activities, etc.), and the product owner (the person responsible for the prioritized work item list). Having an even number of programmers also facilitates pair programming, and holding a meeting with the whole team might just involve turning the chairs around.

There is no single recipe for success in implementing Agile methodologies; each of them can be adapted to the size of the team while maintaining a core of self-motivated team members.

Too many projects competing for limited number of developers

Smaller teams of developers sometimes spend too much time swapping between projects (people are not supercomputers that can switch context whenever needed with no side effects), so the efficiency of the team can suffer.

A solution to this problem is to shorten the sprint length (the time period between two deliveries), so that each sprint can concentrate on one project at a time. Even if unexpected problems arise, solving them can be added to the next sprint rather than interrupting the current focus, giving an increase in the productivity and quality of the code. A simple Kanban board (a model consisting of three columns: “to-do”, “in progress”, “done”) can also be used to impose constraints on the work the developers are doing (limiting the work in progress), so that they do not take on unplanned work.
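The board-with-a-limit idea can be sketched in a few lines of Python (a hypothetical model, not any particular tool): the work-in-progress limit simply refuses to let the team start more work than agreed.

```python
# A minimal Kanban board sketch: three columns and a work-in-progress
# limit that blocks unplanned extra work.
class KanbanBoard:
    def __init__(self, wip_limit):
        self.columns = {"to-do": [], "in progress": [], "done": []}
        self.wip_limit = wip_limit

    def add(self, task):
        self.columns["to-do"].append(task)

    def start(self, task):
        # Enforce the WIP limit: refuse to start more work than agreed.
        if len(self.columns["in progress"]) >= self.wip_limit:
            return False
        self.columns["to-do"].remove(task)
        self.columns["in progress"].append(task)
        return True

    def finish(self, task):
        self.columns["in progress"].remove(task)
        self.columns["done"].append(task)

board = KanbanBoard(wip_limit=2)
for t in ["login page", "search", "billing"]:
    board.add(t)
board.start("login page")
board.start("search")
blocked = board.start("billing")  # False: the WIP limit is reached
```

The point of the hard limit is cultural rather than technical: new work has to wait until something in progress is finished, which is what keeps a small team focused on one project at a time.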

In conclusion, some compromises need to be made regarding the amount of work a team undertakes during a given sprint, in order to keep the efficiency and productivity of the team at a maximum.

Refactoring older code

Time for deep refactoring is sometimes lacking in small teams with many projects, although refactoring is crucial in a test-driven development approach and in maintaining a clean codebase.

This is where pair programming comes in to help: having two sets of eyes on every ticket reduces the time needed for refactoring, as developers help each other by discussing what kind of tests to write and so get off to a better start. One problem that refactoring addresses is functionality that is never used and makes the code unreadable; by pairing, such problems can be spotted and addressed early, and the work is more likely to stay in scope.

Many mistakes are caught as they are typed, since pair programming encourages continuous code review. This way refactoring can be done along the way as well, leaving less to do afterwards.

So even though the time for refactoring older code is sometimes lacking, a pair programming approach helps us write better code that is more likely to stay in scope, and therefore requires less refactoring time afterwards.


I have looked at some of the challenges that a small team might encounter when using Agile techniques and provided examples of how to overcome these problems. From its beginnings, Agile was designed for small teams, and its purpose was to enhance communication and interaction. What makes Agile methods different is the way they deal with development and how teams need to adapt to this working culture based on communication and cooperation.


Response to “Using Social Networks to Analyse the Stakeholders of Large-Scale Software Projects”

1. Introduction


This is a response article to “Using Social Networks to Analyse the Stakeholders of Large-Scale Software Projects”.

In this article, the author proposed the concept of using social networks to analyse the stakeholders of large-scale software projects[1]. He also showed an example to explain this idea, and demonstrated his own opinions about the concept.

I am going to talk about the advantages and disadvantages of this article, and I think more things should be considered if we want to use the author’s concept in reality. These considerations would make the result more reliable and correct.


2. Discussion about advantages


I think the author chose a very good topic, and he explained the concept very clearly by demonstrating an example. The concept is mostly based on the paper about StakeNet [2]. I think this topic is very useful because lack of user involvement is the main cause of project failure, and success is rare [4]: reports suggest that 34% of projects succeeded in 2004, 35% in 2006, and 32% in 2009 [5]. The author used subheadings to make the logic of the article easy to follow, and clearly stated his position in the conclusion. I totally agree with the idea that overlooking stakeholders is possibly the most common mistake in development efforts [6].

3. Discussion about disadvantages


However, I think there are some disadvantages to this article, as follows.


3.1 Structure should be more upfront


The author spent most of his article explaining the concept, and talked about his own opinions only in the conclusion. I think his introduction section should mirror his conclusion, so the structure of the article could be improved.


3.2 More personal opinions rather than explanation of the concept


As I said before, I think the author spent too much of his article on explaining the concept. The author proposed his own ideas in the last section, which are:

  1. It prioritises the stakeholders with some measure of their importance.

  2. It allows every stakeholder to have a say in this, so all people are considered in deciding the importance of all others [1].


The author talked about these two advantages of the concept. However, I think the author could have said more about it: the disadvantages of the concept, other areas in which it could be used, future work, different criteria to evaluate the result, and so on.


4. More things could be considered


To make the result more reliable and objective, I think more things should be considered.


First of all, the number of stakeholders of a software project is dynamic; in other words, the number of different stakeholder roles is always changing as well. This dynamic process should be considered if we want to build a weighted graph. Perhaps we can use other theories to make our result more reflective of the real situation, such as fuzzy set theory [3], which can handle the fuzziness and uncertainty of big and dynamic data.

Secondly, the author believes his concept allows every stakeholder to have opinions which decide the importance of all the others. In my opinion, if every stakeholder can have opinions which decide the others’ importance, the objectivity of the results may be questioned. The relationships between different stakeholders should be considered.
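To see both the mechanism and the objectivity concern, here is a simplified sketch of rating-based prioritisation (the actual StakeNet work uses richer social-network measures; the roles and scores below are made up): each stakeholder rates the others, and the totals yield a ranking.

```python
# Hypothetical ratings: rater -> {rated stakeholder: score 0..5}.
ratings = {
    "developer": {"manager": 4, "end user": 5},
    "manager":   {"developer": 3, "end user": 4},
    "end user":  {"developer": 3, "manager": 1},
}

def prioritise(ratings):
    # A stakeholder's importance is simply the sum of the ratings
    # it receives from everyone else.
    scores = {}
    for rater, given in ratings.items():
        for rated, score in given.items():
            scores[rated] = scores.get(rated, 0) + score
    # Highest total rating first.
    return sorted(scores, key=scores.get, reverse=True)

ranking = prioritise(ratings)  # ['end user', 'developer', 'manager']
```

Note how one biased rater (here, the end user scoring the manager low) directly shifts the ranking: this is exactly why the relationships between raters, not just their raw scores, would need to be taken into account.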

Thirdly, the consistency of role descriptors should be considered; this could be a problem if different stakeholders have different opinions about the same stakeholder. Furthermore, one person could have different roles in real life, so how to describe him or her in the social network should be considered as well.

Finally, building a new social network takes time and effort, and the accuracy and representativeness of the new network is not assured. One idea is to build the new social network from useful information in existing completed networks: we can use the meaningful data, remove dirty data, and add new data to build the new network. We believe this method could be more efficient and accurate.

5. Conclusion


In this article, we first made a brief introduction to Awad’s article [1]. Then we discussed the advantages and disadvantages of that article. Finally, to improve the reliability and objectivity of the result if the author’s concept is used in reality, we believe more things should be considered, and we discussed these things in detail.




[1] Using Social Networks to Analyse the Stakeholders of Large-Scale Software Projects. Nadi AWAD. February 17, 2014.

[2] StakeNet: Using Social Networks to Analyse the Stakeholders of Large-Scale Software Projects. Soo Ling Lim, Daniele Quercia, Anthony Finkelstein. In Proceedings of the 32nd International Conference on Software Engineering, ICSE (1) 2010, pages 295-304.

[3] Chen, Shouyu. Philosophical basis of variable fuzzy set theory. The Journal of Dalian University of Technology, 2005, pages 53-57.

[4] Standish Group. The CHAOS Report, 1994.

[5] Standish Group. CHAOS Summary 2009, 2009.

[6] D. C. Gause and G. M. Weinberg. Exploring Requirements: Quality Before Design. Dorset House Publishing, 1989.

“The Large Hadron Collider was created to help unlock the secrets of the universe. And also to create a working SOA implementation.”


Service-Oriented Architectures (SOA) have become fairly commonplace as an architectural pattern for enterprise applications. The idea is to implement a core data model or repository, which stores the organization’s data. The data model then connects to a service layer, in which many services can be implemented connecting to the same underlying data model. Applications can in turn be built on top of this service layer, which serves as an API for those applications.

The service layer is the means by which an application connects to the data model, providing different services through which a client can reach the underlying repository, depending on which data each service has access to. Essentially, SOA turns what might traditionally have been regarded as single applications, such as Facebook, into an ecosystem of services which third-party applications can connect to in order to make use of their data.
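As a toy sketch of this layering (all class and field names here are illustrative, not from any real SOA framework): several services share one underlying data model, so data changed through one service is immediately visible through the others.

```python
class DataModel:
    """The organization's single underlying repository."""
    def __init__(self):
        self.users = {"alice": {"email": "alice@example.com"}}

# Services in the service layer all plug in to the same data model...
class ProfileService:
    def __init__(self, model):
        self.model = model

    def get_email(self, user):
        return self.model.users[user]["email"]

class AdminService:
    def __init__(self, model):
        self.model = model

    def add_user(self, user, email):
        self.model.users[user] = {"email": email}

# ...so an application built on one service sees changes made through
# another, with no synchronization between duplicated data models.
model = DataModel()
profile = ProfileService(model)
admin = AdminService(model)

admin.add_user("bob", "bob@example.com")
email = profile.get_email("bob")  # "bob@example.com"
```

The loose coupling claimed below is visible even at this scale: a new service can be "plugged in" to the same model without touching the existing ones.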


An illustration of SOA [2].

According to the SOA Manifesto [3], SOA aims to be business oriented, which is expressed as: ”Business value over technical strategy”. Inherent to SOA is the goal of capturing how businesses work rather than devising a technical strategy and then fitting the business into that strategy. Another mantra is ”Flexibility over optimization”, meaning that the decomposed and modular strategy provides flexibility, but might have negative effects on, predominantly, speed, since the flexibility allows for the use of different protocols that might not always work well together.

Why would we want to use this in the first place? 

The main selling point of SOA is the loose coupling exhibited by such architectures, both between the different services in the service layer and between the service and data layers. The data is independent of any service, meaning that one can implement multiple services in the service layer and ”plug them in” to the data layer. In much the same way, for developers with potentially no relation to the organization, the service layer becomes an API which they can use to connect their own applications and access the data that the service itself has access to. The underlying functionality becomes a ”black box” to which you connect, and it just works™.

In turn, this loose coupling makes the code more easily maintainable, since changes in one service should not affect any other service, and changes in the data are reflected back in all services in which that data has been made accessible. Independent applications are, of course, not affected at all, unless the API itself changes. The reusability of code is also increased, since applications now only have to make use of the public API, potentially reducing their complexity. An organization, then, does not have to implement separate data models for different applications and handle synchronization between them.

An example of this is Facebook, with the Facebook Developer initiative [4]. The idea is that Facebook has its data stored in a data layer and that apps can then plug in to this layer via the service layer to make use of the data. In the case of Facebook, the data is most often user data, and it allows for services such as ”Facebook login” from other, completely independent apps. Another example is the tale of the giant enterprise (as if Facebook isn’t one) with an all-encompassing Enterprise Resource Planning (ERP) system. Many different parts handle different aspects of the organization, and every part needs access to much of the same data. In such cases, having duplicated information, with the risk of it differing depending on which data model you access, can lead to all kinds of nasty synchronization surprises.

So far, all is well and good in the land of service orientation. Or is it?

Before going into the details of the flaws of SOA, it's worth mentioning that the SOA Manifesto clearly states that SOA implies compromising on certain things. It is a trade-off, and some aspects are valued more highly than others. However, the compromises made in SOA architectures introduce risks that the flexibility of such architectures does not justify.

The biggest problem is that an SOA approach might not always be the soundest choice from a security perspective. As we introduce the flexibility inherent in the SOA architectural pattern, we also increase the attack surface, potentially sacrificing the integrity of our data. Suppose that we have a database accessible through a set of services, a set which might be extended with more services in the near future. Now suppose that some of this data should be inaccessible, either completely or from a particular service. We now have to validate any input from third-party applications in every one of our services, and because the validation is duplicated, the risk of programming mistakes grows. Any flaw in the validation opens the door to injection attacks such as SQL or XPath injection [5].
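The duplicated-validation risk is concrete: a single service that builds SQL from raw strings is enough to compromise the shared data. A minimal sketch using Python's built-in sqlite3 module (the table, rows, and function names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Alice')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is concatenated straight into the SQL.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver binds the value, so it is treated as data.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

malicious = "' OR '1'='1"
print(find_user_unsafe(malicious))  # [(1,)] -- the injection matches every row
print(find_user_safe(malicious))    # []     -- the payload is just a literal string
```

If even one of many services ships the unsafe variant, the shared database is exposed, which is why duplicating validation across services multiplies the chance of a mistake.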

The issue is that SOAs enforce a communication model that forces developers to think about security at a much larger scale than in the past. The problems grow when communication crosses organizational boundaries, where authentication becomes paramount. Protocols such as OAuth are intended to make these scenarios viable from a security perspective, but OAuth is far from a perfect solution [6][7]. The problem with OAuth and similar protocols, such as WS-Security, is the complexity of the protocols themselves, as well as the complexity they add to what might already be a complex project. As much as developers should understand security, the reality is that many don't, and that reality leads to problems when OAuth or any similar protocol is implemented inadequately.

Another consequence of SOAs not being designed for redundancy is that services are built for specific purposes, and these may become bottlenecks as more and more applications connect to them, and in turn to the database. To be fair, this is no different from a traditional client-server architecture, and bottlenecks can be mitigated using a Content Distribution Network (CDN), although most companies lack Google's resources to build equally effective CDNs. What SOAs add, however, is an increased level of complexity: not necessarily in the amount of code, but in the number of entities that communicate and interact with each other [8]. The system itself grows more complex, and with more parts, there are more things that can go wrong.


SOA is intended to model our world of constant interconnection, with all kinds of applications and services talking to each other and sharing data. There is, however, a naivety in this approach and in thinking that communication and data sharing always benefit what we want to achieve, especially weighed against the risks they may introduce and the noise they add to our data.

Not only are we increasing the level of risk, but we are also introducing new types of risk, risks that we have little or no previous experience of handling, at least at the scale of some of the SOAs present today. Security protocols often lack simplicity, making them hard to implement without leaving security holes. It is not a sound strategy to prioritize reusability and business value if we cannot ensure the security and maintainability of our applications.

Having said that, my suggestion is along the lines of the accepted answer to a StackOverflow question on the most important patterns: "My most important pattern is the 'don't get locked into following patterns' pattern" [9]. With regard to SOA, this is sound advice, as the ever-increasing complexity of your architecture is bound to cause you trouble.


[1] Title quote:

[2] Image: