Combining quantity and quality in large-scale projects

Introduction

Most software managers agree that a small group of experienced, highly capable programmers is far more effective than a huge team working on the same project. They also point out that a small group of efficient programmers is easy to coordinate, whereas in large teams communication quickly becomes a serious problem. However, some software projects are simply too large to be completed by a small group, even one made up of some of the best programmers in the industry. This article focuses on one of the central dilemmas in Software Engineering: which matters more in large-scale projects, quantity or quality? And is there a solution that combines both and delivers the best results?

Summary

In Chapter 3 of his book “The Mythical Man-Month” [1], Fred Brooks presents a study by Sackman, Erikson and Grant on programmer productivity. The study found a striking gap between good and bad programmers: a good, experienced programmer can be up to ten times more productive than a poor or inexperienced one, even though the experienced programmer’s salary is usually only about double. Managers would therefore prefer to hire a handful of really good programmers and allocate tasks to them, as this is far more cost-effective.

However, large-scale products require thousands of work hours and cannot be completed by a small group of excellent programmers alone. Hiring more programmers, even mediocre or inexperienced ones, is therefore essential for large projects. But this poses a new challenge for project managers: they have to combine good and less capable programmers in an efficient way, coordinate a large number of people organised into many groups, and keep communication and decision-making between those groups and their members manageable.

Harlan Mills proposed a solution for such large projects [1]. According to Mills, a large project should be divided into smaller tasks, each allocated to a small group of roughly ten people with a clear hierarchy. Such a group should operate like a surgical team, in which a surgeon is surrounded by anaesthesiologists and nurses, each with a specific role. The structure of the team is as follows:

  • The surgeon, or chief programmer. He is the team leader: the person who designs the program, writes the code, runs tests and writes the documentation. He should possess versatile skills and significant experience.
  • The copilot. He is the surgeon’s assistant. He should be able to do the surgeon’s work and has similar skills but slightly less experience. The copilot can offer advice to the surgeon, but the surgeon is the only person who takes the crucial decisions.
  • The administrator is the person who handles human and material resources. The surgeon would normally take decisions about personnel, money or machines, but he rarely has enough time to do so. The administrator therefore handles these issues and is also the team’s point of contact with other groups.
  • The editor is the person who produces the documentation. The surgeon usually prepares a draft, and it is the editor’s job to criticise it, rewrite it and provide references.
  • Two secretaries. The administrator and the editor each have a secretary; the administrator’s secretary is usually responsible for correspondence.
  • The program clerk is responsible for keeping the team’s technical records. Under Mills’ proposal all files are visible to all team members, so the clerk collects all input and output files and stores them in a chronological archive.
  • The toolsmith is responsible for interactive services such as text editing, file editing and debugging. He should ensure that all these services work properly, as they are essential tools for the team.
  • The tester runs tests and assists with debugging. The surgeon writes the tests, but a tester is needed to run them and provide feedback on the correctness of the code.
  • The language lawyer is a programming language expert, responsible for finding efficient ways to use the language to solve demanding and tricky problems. He should be a skilled system designer, and usually two or more groups share one language lawyer.

According to Mills, this structure has two main advantages over an ordinary two-programmer team. Firstly, in an ordinary team the work is divided, so each programmer needs his own disk space and access; the surgeon and copilot work on the same material, saving both resources and time [1]. Secondly, in a team of two equal programmers every disagreement has to be resolved by discussion until they reach agreement or compromise, which is often time-consuming. In the surgical team the surgeon sits higher in the hierarchy, so he is the one who decides and who takes responsibility for the decision. This hierarchy is much more effective in software projects [1].

Discussion

Mills’ proposal has many strengths when examined at an abstract level. Its central point is hierarchical structure, which is nowadays commonplace in almost all projects: in every organisation or group there should be one person with the authority to take the final decision.

Non-hierarchical structures tend to be inefficient because group members inevitably hold different opinions and put forward different proposals. This is natural, since each person understands the problem differently and favours a different solution, but in the end the group must settle on one solution for everyone to work on. In a non-hierarchical structure decisions are made through discussion, cooperation and team consensus, which is slow and in some cases impossible. A team leader is therefore essential to coordinate the members and take the final decision when necessary.

According to Brooks [1], when more people are added to a large-scale project overall productivity falls, mainly because of the extra communication overhead. Mills’ proposal addresses this problem: his structure exploits the efficiency of small groups to tackle large-scale projects. Combined with the benefits of hierarchy mentioned above, communication between groups becomes much easier under this model. Senior managers communicate with and instruct the group leaders, the surgeons, who in turn coordinate their own groups. Moreover, under this proposal the best and most experienced programmers occupy the key positions, designing and writing the code, while less experienced programmers take the editor, clerk or tester roles. Good programmers can therefore focus on the most important parts of the project, design and coding, without wasting time on other tasks.

However, Mills’ proposal is not always effective when examined in more detail. A hierarchical structure is essential for all large projects, but the real questions are how to divide personnel into smaller groups and how to organise those groups. Mills’ surgical team is not suitable for every problem: some projects need teams with more programmers and less auxiliary personnel such as clerks, testers or editors. According to Hans van Vliet’s book [2], groups can instead be divided by specialty. For instance, a group could contain several software engineers, web developers or network administrators working on different parts of a specific task under the guidance of an experienced chief programmer.

To sum up, Mills’ proposal offers an excellent approach to dividing large-scale projects into smaller tasks undertaken by small groups of programmers. It also demonstrates the importance of hierarchical structure and shows an efficient way of using experienced programmers without wasting their time on routine tasks. However, the internal organisation of the groups may need to differ, since more programmers may be required. Mills’ plan should therefore serve as the basis for large-scale projects, with the actual group division decided according to the needs of the problem.

References

[1] Frederick P. Brooks Jr., The Mythical Man-Month: Essays on Software Engineering, Chapter 3, 1975.

[2] Hans van Vliet, Software Engineering: Principles and Practice, Third Edition, Chapter 3, 2008.

Design Patterns: are they so difficult to understand?

Introduction:

Design patterns provide template solutions to a number of recurring problems encountered while developing object-oriented software. They do not offer finished source code (e.g. in the form of a library); rather, they describe a proven way of solving a particular kind of problem. They have been in use for decades and have proven their worth: they are an essential part of any software developer’s skill set. They ease the process of software development by providing a common language for developers with different levels of experience. But in order to use them, one must understand them very well. Although they are a mature concept, people still argue about the best way to study them. According to some, the first step is to read a book that explains them. Others think a developer should first learn how to test, then how to refactor, and only then how to apply patterns. But everyone agrees that an essential part of studying them is to start coding with them as soon as possible. In the next few sections I will cover a design principle and a design pattern I have used in my own projects, and explain the steps I took to understand them.

Single Responsibility Principle:

The single responsibility principle is one of the five SOLID design principles, and it states that a class should do only one thing. The idea behind this object-oriented design approach is that if all five principles are applied while developing a piece of software, the end product will be a robust system that is easy to maintain and at the same time open to extension.

Last summer I had the opportunity to work on a relatively big project with some experienced developers. I was the main developer, receiving supervision and advice from my colleagues. I had to develop a sub-component of the main application capable of querying data from the RSS feed of the Apple iTunes Store and storing it in a database. The steps for accomplishing this task were as follows:

  1. Query the RSS feed with some parameters and receive the result of the query in the form of an XML file.
  2. Parse the XML, extract some values from it and use the extracted information to create further lookup queries.
  3. Receive the responses to the lookup queries in the form of JSON files, parse them and store the results in the database.

So in general I ended up with five classes (a main class, two classes for parsing XML and JSON, and a utility class with all the functions I needed) for accomplishing this task. They were logically structured in the same way as the requirements above. The component worked as expected, and I showed it to my supervisor. He said it would do the job, but that I needed to do some serious refactoring if I wanted it to be readable by other developers. By that he meant that just because the structure made sense in my head, that didn’t guarantee it would make sense to other people looking at it. What he proposed was to apply the single responsibility principle and split my classes into more logical entities. After some refactoring, I ended up with eight different classes structured as follows: a main class, two classes expressing the structure of the XML and JSON files respectively, two more classes for parsing them, one class for generating both the XML and JSON queries, a database handler class, and a class representing the end product of the component. To summarise, I changed the structure of the component so that every class was responsible for only one thing. In the end this made sense to me as well: the code became much easier to debug, and because I gave the classes meaningful names, a developer who knew only the main idea of my project could understand the structure and the logic I was following in a matter of minutes.
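
As a rough illustration of what that split looked like (the real project’s class and field names are not reproduced here, so everything below is hypothetical and simplified), each class ends up with exactly one job:

```python
# Hypothetical sketch of a single-responsibility split, not the original code.
import json
import sqlite3
import xml.etree.ElementTree as ET


class FeedXmlParser:
    """Only job: turn the RSS XML into a list of entry ids."""
    def parse(self, xml_text: str) -> list[str]:
        root = ET.fromstring(xml_text)
        return [el.text for el in root.iter() if el.tag.endswith("id") and el.text]


class LookupJsonParser:
    """Only job: turn a JSON lookup response into plain records."""
    def parse(self, json_text: str) -> list[dict]:
        return json.loads(json_text).get("results", [])


class DatabaseHandler:
    """Only job: persist records."""
    def __init__(self, path: str = "feed.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS items (id TEXT, payload TEXT)")

    def save(self, records: list[dict]) -> None:
        rows = [(str(r.get("id")), json.dumps(r)) for r in records]
        self.conn.executemany("INSERT INTO items VALUES (?, ?)", rows)
        self.conn.commit()
```

A main class then only wires these pieces together, which is exactly what makes the structure readable to someone who knows nothing about the internals of each step.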

Factory Pattern:

The Factory pattern is another heavily used concept. In general, a factory class returns at runtime one of several possible objects that share a common superclass. Typical reasons for using it are to provide an interface for creating objects or simply to encapsulate object creation.

A few years ago I contributed to the development of a tower defence game in JavaFX. The game was simple: it consisted of a single map with multiple levels of monsters and five different towers. This was the first time I saw the factory pattern in action. While browsing the source code, in the package responsible for creating the units in the game, I found a number of different classes:

  1. An abstract class called EnemyUnit with private attributes name (string) and damage (double), and public methods for attacking, moving, displaying, and setting the damage and the name.
  2. Six different classes representing the different types of monsters in the game, each of them extending the abstract class EnemyUnit and setting its own name and damage.
  3. A class called EnemyUnitFactory which returns an object of type EnemyUnit. The factory class holds a list of key-value pairs where the key is an integer from 1 to 6 and the value is one of the six different unit classes. When the factory is invoked, a random number generator produces an integer in the range 1 to 6 and uses it to access one of the elements in the list. Once the element is accessed, an object of the corresponding type is created and returned.

This design expresses the main idea behind the factory pattern: the factory has exactly one job, creating enemy units at run time, and it is the only class in the system capable of creating them.
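
The game itself was written in JavaFX, but the structure translates directly. Here is a minimal Python sketch of the factory described above; the two concrete monster classes are invented for illustration, and the real game mapped the integers 1 to 6 onto its six unit classes:

```python
# Minimal sketch of the EnemyUnitFactory idea; the concrete unit classes are made up.
import random
from abc import ABC, abstractmethod


class EnemyUnit(ABC):
    def __init__(self, name: str, damage: float):
        self.name = name
        self.damage = damage

    @abstractmethod
    def attack(self) -> None:
        ...


class Goblin(EnemyUnit):
    def __init__(self):
        super().__init__("Goblin", 5.0)

    def attack(self) -> None:
        print(f"{self.name} hits for {self.damage}")


class Dragon(EnemyUnit):
    def __init__(self):
        super().__init__("Dragon", 40.0)

    def attack(self) -> None:
        print(f"{self.name} hits for {self.damage}")


class EnemyUnitFactory:
    """Maps integers to unit classes and instantiates one chosen at random."""
    _units = {1: Goblin, 2: Dragon}  # the real game had keys 1..6

    def create_unit(self) -> EnemyUnit:
        key = random.randint(1, len(self._units))
        return self._units[key]()
```

Client code never names a concrete monster class; it just calls `EnemyUnitFactory().create_unit()`, which is the whole point of the pattern.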

Conclusion:

There are many design patterns that can be used straight away, or with some tailoring to fit a particular project. They have proved to be useful, and in my opinion every software developer should be familiar with them. There are numerous books with excellent explanations of different design patterns, but in my view the best way to understand them is through coding and practice.

References:

  1. http://en.wikipedia.org/wiki/Software_design_pattern
  2. http://www.inf.ed.ac.uk/teaching/courses/sapm/2013-2014/sapm-all.html#
  3. http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
  4. http://en.wikipedia.org/wiki/Factory_method_pattern
  5. http://www.oodesign.com/factory-pattern.html

When do conventions become detrimental?

There are many good arguments for creating and following conventions (I am referring to rules and patterns here, not conferences). However, conventions rarely appear in lists of essential advice for large-scale software projects. Think of the common code convention most teams have: everyone should follow the same formatting etiquette. Teams tend to converge on such a convention on their own, and it is not necessary to impose it. In other words, software developers create conventions when needed, and we can find examples where imposing them would be detrimental, even if done in good spirit.

Main advantages and drivers

Usually conventions follow from patterns. When possible, humans prefer to follow patterns in many aspects of their lives, as it makes decisions easier. Think of your cutlery drawer: why do you separate knives based on their type? (If you don’t, you should.) It is because it boosts your cooking productivity with minimal effort: once you learn where everything goes, you always know where to find it.

The general idea when creating conventions is to find patterns that simplify or abstract tasks; then automate where needed or possible and finally impose conventions such that the automations don’t fail. So when you don’t have to look for the bread knife in your cutlery drawer because you know exactly where you (always) put it, it is like when you don’t have to look for that opening curly bracket in your code because you know exactly where you (always) put it – probably before the extra indentation.

Conventions can be of benefit to project management and productivity in a variety of situations. As we saw earlier, one of the simplest conventions is a code convention – this is an example of a convention that, if not followed, will not break the builds. Some more complex conventions can be made about naming or project structuring – here, if the convention is not followed, the build might break (unless extra configuration is possible and made).

The main advantages of conventions are better productivity and developer happiness. Rules are part of our lives and serve the purpose of making them easier. We don’t like repetitive tasks and we don’t like when confusion gets in the way of our work. When all team members follow the same rules, one member’s code from the previous year can be as easy to understand for another member as their own code from the previous week.

Main dangers and pitfalls

We assumed patterns are universal, but conventions easily fail when a pattern is assumed where it shouldn’t be. Think again of your cutlery drawer and suppose you are trying to learn how to use chopsticks; you assumed you would never need room for them, but now you have to break convention to find an appropriate place.

Note that conventions can be self-imposed, but they can also be imposed by external tools. Some you can easily change; maybe there was room for chopsticks in your drawer, but you just used it for something else. Others are more difficult to reconfigure; maybe you simply can’t accommodate your new chopsticks in your cutlery drawer.

We also assumed that once a convention is established, all team members will follow it. However, this is not always true. By accident or oversight, someone will break the convention at some point; by necessity, someone will have to work around it to achieve a goal that does not fall within the established pattern. The problem is that a convention also creates expectations. When someone tries to fix a bug in this code after the author has left the team, they will expect the conventions to have been followed, and so they may lose more time getting over the initial confusion than they would have if there were no conventions at all.

Imposed conventions

After weighing the good and the bad of using conventions, one might wonder what signs tell us to avoid creating them. To identify those signs, let us first contrast them with the signs that tell us we should create conventions.

A good working example here is Ruby on Rails, or Rails for short. Rails is a framework for creating web applications that has become very popular in the last decade. It embraces ‘convention over configuration.’ This means that many decisions have been made to reduce configuration time for application development by establishing conventions. The problem is that it has failed quite a few times as a framework for large scale applications.

This ‘convention over configuration’ paradigm may well be why Rails has failed in these large-scale environments. There are some arguments about the inefficiencies of Ruby (the underlying language) and so on, but the decisive issues in a Rails application start appearing when you have to work around the imposed conventions in order to achieve your goal. It has been said many times that you can develop demo applications in Rails without knowing its inner workings; however, Rails’ conventions seem too limiting once the application grows larger.

The problem with adopting Rails for a new large-scale project is that it serves a specific category of web applications. Of course, the Rails framework supports plug-ins (called gems) that add lots of features. While this helps at the beginning, when you are doing what has probably been done at least a few times by others, it will eventually get in your way. For your application to evolve, you need to break out of the conventions imposed by the framework, and since Rails is built on those conventions, this largely defeats the purpose of using it.

We have to also quickly note a very important aspect. Rails’ conventions are imposed by people outside the development team. This is the main reason they easily become disruptive limitations rather than helpful conventions.

What we can learn from Rails is that in large scale applications, a ‘convention over configuration’ approach rarely works. We should let conventions be established slowly by the developers. There is a popular guideline in software engineering that encourages developers to automate processes they do more than three times. Let us try to extend this piece of advice to cover conventions.

If a process can be simplified (be it finding that opening curly bracket or building the application), establish conventions in your team to make life easier. Be careful though, do not overdo it – three times going through the same process gives you a hint that a convention can be established, but you have to think about the next three thousand times before deciding on your convention. You have to leave room for configuration over convention to avoid cases when the convention needs to be broken.

Conclusion

As a software developer grows, conventions help them work in environments they are not familiar with. But as applications become more and more complex, conventions become more and more fragile. The reason convention is not praised as essential in large-scale applications is that developers will establish conventions when needed; if conventions are imposed instead, there is a large risk they will get in the way of productivity.

A/B Testing: More than Just Sandpaper

In interface design, A/B testing is a simple experiment in which randomised groups of users are presented with variations of the same interface and their behaviour is observed to better inform design decisions. This kind of testing is usually used to improve website conversions, because the World Wide Web is a medium uniquely suited to A/B testing: it is comparatively inexpensive to present different users with modified versions of the same website and track their actions, something that would not be feasible in traditional media.
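
As a concrete sketch of the mechanics (the variant names and the naive rate comparison below are purely illustrative, and a real test would add a significance check):

```python
# Illustrative A/B assignment and measurement, not production code.
import random
from collections import defaultdict

assignments: dict[str, str] = {}                      # user id -> variant
outcomes: dict[str, list[int]] = defaultdict(list)    # variant -> 1/0 conversions


def assign_variant(user_id: str) -> str:
    """Bucket each user randomly, but consistently, into variant A or B."""
    if user_id not in assignments:
        assignments[user_id] = random.choice(["A", "B"])
    return assignments[user_id]


def record_visit(user_id: str, converted: bool) -> None:
    outcomes[assign_variant(user_id)].append(int(converted))


def conversion_rates() -> dict[str, float]:
    return {v: sum(c) / len(c) for v, c in outcomes.items() if c}
```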

With success stories reporting 50% increases in clicks from altering the phrasing of a link, or even helping to win presidential elections, a great number of A/B testing services and tools (Google Website Optimizer, Amazon’s A/B Testing and Vanity, to name just a few) have emerged, not to mention countless blog posts on the subject.

In 2011 the company [Google] ran more than 7,000 A/B tests on its search algorithm. Amazon.com, Netflix, and eBay are also A/B addicts, constantly testing potential site changes on live (and unsuspecting) users.

— Brian Christian at Wired.com

However, it is not the apparent ubiquity of A/B testing or its success stories, but a particular criticism of split testing, that inspired me to write this article. In his blog post entitled Groundhog Day, or, the Problem with A/B Testing, Jeff Atwood argues that A/B testing has no feeling, no empathy, and only produces websites that are goal-driven and can never win hearts and minds. Mr. Atwood quotes his friend’s tweet:

A/B testing is like sandpaper. You can use it to smooth out details, but you can’t actually create anything with it.

Nathan Bowers

I believe this is the wrong way of looking at it. Obviously, A/B testing in itself cannot produce anything, but it can guide the design process and quantify how good the final result is. It is hard enough to avoid developer’s blindness and to keep in mind that people do not know what they want; in this day and age one also has to navigate the perils of a multicultural audience. When developing a website that will potentially be accessed from all around the world, a developer or designer cannot possibly be expected to simply conjure the perfect solution out of thin air.

While Mr. Atwood seems to think of A/B testing purely as a way of monetising, I tend to side with some of the people who have commented on his blog post and think that testing democratises the process of software development and brings better outcomes for both the developers and the users.

I believe these same people cannot read the minds of every single person who visits their web site, or uses their app. Therefore, I think it’s great that these people can test both their ideas, rather than having to make some evidence-free guess and rationalize it after the fact. An A/B test is only as good as your best idea, after all. Ideas still matter!

Lukestevens

This is not to say that good designers are unimportant, but they cannot always predict what will attract users to interact with their designs. A/B testing has shown time and time again that, in some cases, the solutions that violate the rules of visual composition, or could even be perceived as vulgar, are the most appealing.

To try and boost donations the digital team attempted to improve the design by making them look “prettier”.

That failed, so in response an “ugly” design was tested to see if that made any difference. This involved using yellow highlighting to draw attention to certain text within the email.

To the team’s surprise the ugly design actually proved to be quite effective, though the yellow highlighting had to be used sparingly as the novelty wore off after time.

David Moth

I can see where Jeff Atwood is coming from. It might seem that such scientifically rigorous tests subtract from the artistic, inspired or simply human qualities of design, or that those qualities might suffer in the face of corporate greed. However, I am a firm believer that the benefits of A/B testing far outweigh the risks, and the risks all but disappear when testing is treated as the irreplaceable source of insight it is, rather than as the deciding voice.

I might be going a step too far here but — honestly — the existence of A/B testing makes me hopeful that elegant solutions for other difficult software development problems (e.g. project timeline prediction) might be within our reach as well.

Video games are art, but we don’t code them like it.

Introduction

Games are software-driven projects that employ hundreds of developers to create a complex product for a competitive market. Yet the mindset of software developers in the game industry, including the development methodology they use, is not up to the task of grappling with the deep thematic and artistic consequences of the systems they construct.

Games as software: Large-scale and long-term?

Video games are big – really big. Not just in terms of the size of the industry, but also the scale of the projects delivered. Major releases are hugely complex software projects which take years to develop, and are expected to serve as the baseline for expansions to the original product as well as sequels that use the same components. Some games are intended to never end, virtual worlds that are constantly debugged, updated, and tweaked by the developers. Games also stretch horizontally across disciplines, bringing together dedicated game designers, programmers, musicians, artists (both concept and asset), voice actors, and support staff to try and produce a single, seamless work.

Despite their interdisciplinary nature, games have tended to be approached primarily as software development projects, which makes some sense given their nature as software-driven products. While the single highest cost continues to be art assets, much of the technology that powers the artists’ work is developed by the game’s programmers. And code is critically important to one of the key elements of any game: its mechanics.

Programming is a fifth of the budget, but much of the art budget goes to software development as well.

Mechanics as Code

Mechanics are the structures (usually systems of rules) the player experiences in his interaction with a game, and are what helps contextualize the contributions of all other disciplines. They are implemented by programmers in the game’s code, and further speak to the critical importance of software in tying games into a coherent whole. The critical community spills gallons of ink arguing about what mechanics are and how they should be used, but generally agrees that they are the benchmark for deciding if something qualifies as a game (this is ignoring the vocal contingent who are against defining games at all, in the interest of brevity).

For example, in a game about a square-jawed hero mowing down waves of horrifying demons, one of the major mechanics might be how input is translated into movement and shooting. In a game about leading a civilization from the first cities to modern history, there would need to be a mechanic to model technological change – and to allow the player some degree of influence over its progression. And in That Dragon, Cancer, there’s a mechanic for you to try to comfort your confused, weeping child as his cancer progressively worsens. That Dragon, Cancer also ensures that you can never succeed – that in the end, you’re always left listening to him sob desperately with nothing you can do to help.

Mechanics as Metaphor

As that last example makes clear, a game’s mechanics are not just rope to tie the real, artistic components together; to some extent they are the real component. The deliberate, designed interactivity unique to games arises from their mechanics, and the core elements of traditional media (such as narrative and art) are ultimately contained within one mechanical structure or another. Critical theorists within game studies have become increasingly interested in how the mechanics of games influence the overall artistic work, and in how mechanics can make artistic statements of their own. Naomi Clark’s piece on Gone Home‘s use of mechanics, specifically the way it deliberately denies the player access to one they normally use freely, is a good example of the power mechanics have to make significant statements. Just as a character in a novel can be a sounding board for one of its themes, a game’s mechanics can be a powerful thematic device by deliberately including or excluding options.

In the end, mechanics are fundamentally tools of control, which often present an illusion of choice as a disguise for their real purpose: to constrain the player’s choices into carefully chosen vectors. In our demon shooting example, the player would probably be given a variety of different weapons to choose from – but no option to put his weapon down. The choice offered is fundamentally illusory, as each choice has in turn been deliberately included in the game so that the consequences can be properly programmed. The Tyranny of Choice is part of the latest round of debate on how much agency players truly have within this constrained structure, but that mechanics are as much a part of the game creator’s toolkit as narrative elements is clear enough.

Ludonarrative Dissonance: A break between mechanics and story

Ludonarrative Dissonance in Bioshock kicked off the debate about the thematic implications of mechanics, and introduced the term used to describe what happens when the themes of mechanics and narrative clash. Clint Hocking discusses how the game Bioshock, which by its narrative was ostensibly a critique of individualist Objectivism, was deeply weakened by the incredibly empowering nature of most of its mechanics. Players are literally a one-man army, able to tear their way through hordes of enemies, in a game that purportedly attacks Ayn Rand’s vision of the independent hero figure. A mechanic emphasising the need for community and mutual charity would have complemented the game’s narrative instead of undermining it.

Yet despite this, games continue to pay little heed to their mechanics’ effect on the work, a fact that is making the critical community ever sharper in its comments. Bioshock was released five years ago, yet there has been little progress in ensuring that a game’s mechanics complement its artistic intent. Even smaller, independent titles are routinely criticised for ignoring the message sent by their core mechanics and how it might clash with the purported message of the game’s writing.

Interlude

Around this point, readers who are still with me are likely to be not only extremely patient but also rather confused – isn’t this an assignment for an Informatics course? Am I perhaps some Arts student who accidentally posted his essay on the wrong course blog?

Stop being a software developer, start being an artist

So let’s bring it back to software design: I believe that this consistent and concerning inability to engage with the artistic nature and thematic consequences of a game’s mechanics originates (at least partially) with the people who develop those mechanics – the software developers and programmers who build them, usually using common software design principles. Susan O’Connor, a writer for video games, noted in an interview that “A lot of times, what ends up happening when you have a room of primarily tech-oriented [staff], it becomes like a software development environment”, and went on to decry the technically oriented thinking that many software designers who work in the games industry continue to hold on to.

Games are developed as software projects, yet at heart they are not. They’re pieces of art, even if it’s often high-budget, commercial art. The design and development of the game’s major and minor mechanics, as implemented in the game’s software, can have an incredibly important effect on the final result. Yet they continue to be approached like any other software project, where the design does not have to take into account the thematic implications of decisions but can instead be aimed at a more objective goal. For example, the unthinking application of HCI design principles has been critiqued harshly for effectively eliminating certain kinds of experience from the table – what if the goal of the game is to provoke frustration, or to present a challenge in itself? Any interface that did so would violate the most basic tenets of good design, but might be absolutely critical to the intended message of the work. Approaching a game’s user interface like any other HCI problem fails to recognise that form may need to triumph over function when creating a deliberate, artistic experience.

“Just add art to the spec! We can get to it after we write the unit tests.”

So all we need to do is tweak our existing methodologies, right? Just add “ensure resonance with the thematic and artistic intent of the game” to our Waterfall requirements, or create a use case for the player’s experience of the theme, or keep it Agile so that we can quickly rework mechanics that feel wrong. Those are all good ideas, and more acknowledgement of the importance of these aesthetic concerns couldn’t hurt, but the core of the problem remains. While composers and modellers who work on games acknowledge their role as artists, software developers continue to act as if they don’t need to adopt a similar approach. Software design methodologies are important for developing a game’s mechanics efficiently and on time, and should continue to be used. But by themselves they are incapable of mimicking the critical understanding and close reading required to grasp the artistic consequences of design decisions, let alone of understanding how to create an intended effect on the player.

Tabletop Games: From D&D, to FATE, to Fiasco

We can look to a similar medium to see how this might occur: tabletop role-playing games. These are games played in person around a table, in which players follow a set of rules to act out their adventures. Early games, such as Dungeons & Dragons, often treated their mechanics as nothing more than conflict-resolution and simulation mechanisms. Recent generations of tabletop games, however, have increasingly used their mechanics to reinforce the intended mood of the game, and as a result mechanics have tended to become simpler – not only to give better control over their exact effect, but also because a clearer understanding of a game’s intended theme lets unnecessary and overcomplicated systems be stripped down or removed. Each mechanic is instead deliberately developed with the intended effect in mind: conflict resolution by random elements (such as dice) is introduced to games where the player is meant to feel particularly powerless or surprised, while group voting is used in games where negotiation and consensus are important to the setting.
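
To make the contrast concrete, here is a toy sketch (not taken from any particular game) of the same resolution step implemented with the two mechanics just mentioned:

```python
# Toy example: two resolution mechanics with deliberately different moods.
import random


def resolve_by_dice(difficulty: int) -> bool:
    """Random resolution: the player is meant to feel at the mercy of fate."""
    return random.randint(1, 20) >= difficulty   # a d20 roll against a target number


def resolve_by_vote(votes: list[bool]) -> bool:
    """Group voting: negotiation and consensus decide the outcome, not luck."""
    return sum(votes) > len(votes) / 2
```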

Conclusion

On the other hand, the mechanics of tabletop games are, as a rule, far simpler than those of video games. Often one or two people can develop an entire tabletop game, which makes a holistic approach much easier but is all but impossible for larger video games. What is needed is a software development methodology for games that looks to how writers write and painters paint for inspiration, and that encourages developers to embrace the artistic nature of their work. This could be a modification of an existing approach, but it cannot simply slot “ponder the artistic ramifications” in as a step between coding and testing. Instead, the code’s effect on the mechanics has to be assessed holistically and continuously, and given as much weight as meeting more mundane requirements. Until this happens, developers are likely to keep ignoring the thematic consequences of their code, and mechanics will continue to clash with narrative.

New Role of Requirements Engineering in Software Development

Introduction

Because software development in many areas, especially in the business world, increasingly demands rapid delivery, low development cost and quick responsiveness to requirement changes, the traditional ‘requirements first’ discipline is no longer valid for many types of software project. This forces us to look for new ways of carrying out the requirements engineering (RE) process in software engineering.

Background

Requirements are the expectations of stakeholders or customers, specifying how the software should behave. They describe the properties and features of the required software system, and therefore provide both guidelines and constraints for its development. The requirements engineering process is a set of activities that aim to understand and identify the desires of the customers, deal with requirement conflicts and changes, and refine stakeholders’ expectations into specifications that software developers can feasibly implement.

According to [1], whatever the detailed RE process looks like, it contains some vital components:

  • Elicitation: Identify and gather sources of information about the software system.
  • Analysis: Understand the requirements of the customers including overlaps and conflicts.
  • Validation: Check whether the requirements are what stakeholders expect.
  • Negotiation: Reconcile requirement conflicts in order to reach consistency.
  • Documentation: Document all the requirements in formal documents.
  • Management: If requirement changes occur, they should be managed carefully.

Fig. 1 The activity cycle of the requirements engineering process [1]

The relationships among these basic elements are shown in Fig. 1. In practice, the RE process iterates continuously during the software development.

New Challenges for RE

In the last century, the waterfall model had a great impact on software engineering. It emphasises that the system requirements should be complete and consistent before development begins, and once they have been defined, no significant change is expected to occur during development, since such a change may force a restart or doom the project. In practice, the assumption that complete and consistent requirements can be defined up front is unrealistic, especially for large-scale projects. Requirement changes are inevitable: customers gain new insights into their problems, which leads to requirement changes, and other factors such as shifts in the business environment and in policy can also give rise to changes in the software requirements.

Because of these fundamental drawbacks, the traditional discipline is no longer suitable for the majority of today’s projects. Four main challenges in particular force us to rethink the role of the RE process in software development [1].

  • New software development approaches such as construction by configuration are emerging. Reuse is now the dominant method in software development, especially in the business world. Briefly, construction by configuration means assembling and integrating existing systems and components to create new systems, so the software requirements depend not only on the expectations of the customers but also on the capabilities of the existing systems and components.
  • Rapid application delivery. The business environment is changing incredibly rapidly, so new software applications must be designed, implemented and delivered as soon as possible.
  • Requirements change increasingly quickly. Because of the changing business environment, it is inevitable that new requirements will emerge and existing requirements will change within short intervals.
  • Return-on-investment (ROI) considerations. Judged by ROI, reuse may be one of the most efficient approaches available to most companies.

Integrating RE with system development

In order to deal with the challenges described above, the RE process and system implementation should be integrated. As a guideline, Ian Sommerville [1] introduces three important approaches to tackling these problems; of these, I think concurrent RE and RE with commercial off-the-shelf (COTS) acquisition have the most potential.

Concurrent RE

In concurrent RE, development starts from an outline of the software. During development, not only are the RE activities (elicitation, analysis, validation and negotiation) carried out concurrently, but the RE process also runs concurrently with other processes such as system design and implementation. This concurrent approach is used in agile methodologies such as extreme programming (XP), where RE is integrated with the other development activities.

As mentioned in [1], concurrent RE has three main advantages.

  • Process overheads may be lower. Less time is spent on requirements analysis and documentation, since requirements can be captured more precisely and rapidly.
  • Critical requirements can be identified and implemented at an early stage. Communication between software engineers and stakeholders is more efficient because the different RE activities run concurrently.
  • Quick response to requirement changes. Thanks to the iterative cycle of requirement identification and documentation, changes can be accommodated more quickly and at relatively low cost.

However, documentation becomes a problem in XP, since XP development is based on stories and scenarios written on cards. There is no formal documentation, and software engineers usually discard the cards after implementing the system. A possible solution is to extract keywords, behaviours or critical services from the customers’ descriptions and document those for future use in requirement analysis and change management.

RE with COTS acquisition

COTS systems are already available in most areas. If we can build a new system from COTS components and configure them to interoperate with each other, the new system will have lower development costs and can be delivered quickly.

Two aspects influence the RE process with COTS acquisition: the selection of the COTS systems and their interoperability.

In COTS acquisition, it is usually impossible to find one specific COTS system that matches all the requirements. However, if we identify the key requirements carefully, there should be a set of COTS systems that fully or partly match these critical requirements. After that, we can adapt and extend the more detailed requirements based on the capabilities of the candidate COTS systems. This process, called procurement-oriented requirements engineering (PORE) [2], continues until the most appropriate COTS system is found.

Companies that use a variety of software systems are always concerned about the interoperability between new systems and those already deployed. From an RE perspective, interoperability might appear to be achievable simply by requiring open interfaces and standards, but such requirements are not enough to deal with practical problems [3]. Soren Lauesen [4] therefore defined a new type of software requirement, the open-target requirement, to lower the risk of failure when integrating COTS systems.

Conclusion

The traditional RE discipline of ‘requirements first’ is no longer valid for many types of software project, especially large-scale ones. New challenges such as rapid software delivery, low development cost and quick responsiveness to requirement changes in many domains also force us to rethink the role of the RE process in software engineering. Hence the idea of integrating the RE process with system implementation. Concurrent RE, already applied in agile methodologies such as XP, is one promising solution that provides quick responsiveness to requirement changes. The other is RE with COTS acquisition, which tackles these demanding challenges through reuse. Given the changing business environment and the rich supply of COTS systems, COTS acquisition is very attractive to businesses, and although it is currently used mainly in business applications, it has the potential to be extended to other domains such as embedded systems. Given the challenges and characteristics of today’s software engineering, it is arguable that integrated RE processes will become the norm in software development.

References

[1] I. Sommerville, “Integrated requirements engineering: A tutorial,” Software, IEEE, vol. 22, no. 1, pp. 16–23, 2005.

[2] N. A. Maiden and C. Ncube, “Acquiring COTS software selection requirements,” Software, IEEE, vol. 15, no. 2, pp. 46–56, 1998.

[3] B. Boehm and C. Abts, “COTS integration: Plug and pray?” Computer, vol. 32, no. 1, pp. 135–138, 1999.

[4] S. Lauesen, “COTS tenders and integration requirements,” Requirements Engineering, vol. 11, no. 2, pp. 111–122, 2006.

[5] W.-T. Tsai, Z. Jin, P. Wang, and B. Wu, “Requirements engineering in service-oriented system engineering,” ICEBE 2007. IEEE International Conference on e-Business Engineering. IEEE, 2007, pp. 661–668.

On scripting languages and rockstar programmers.

The article “Scripting: Higher Level Programming for the 21st Century” [1] by John K. Ousterhout discusses the role of scripting languages in software development and expresses hope for a bright future for these languages. His hopes have certainly been fulfilled: scripting languages are still going strong today, with major companies opting to use them for the development of large-scale systems (for example YouTube [2], EVE Online [2] and Dropbox [3]). What I am proposing is that it is not only possible, but often faster and cheaper, to create a medium to large-scale project using a high-level scripting language as the core of the system.

So, what are scripting languages?

These are typically high-level (they do not map directly to machine instructions), interpreted (quick to modify without recompiling), dynamically typed (types can be interchanged to easily combine different objects), garbage-collected (memory allocation is taken care of) languages, such as Python, Perl and Bash. These languages glue together collections of useful components that may themselves be written in other languages. They are ideal for manipulating data arriving from a variety of sources and joining it together for further processing.
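
A tiny, made-up example of that glue role, combining a CSV file and a JSON dump into one structure for further processing (the file formats and field names here are purely illustrative):

```python
# Hypothetical glue script: merge pricing data (CSV) with stock data (JSON).
import csv
import json


def load_prices(csv_path: str) -> dict[str, float]:
    with open(csv_path, newline="") as f:
        return {row["sku"]: float(row["price"]) for row in csv.DictReader(f)}


def load_stock(json_path: str) -> dict[str, int]:
    with open(json_path) as f:
        return {item["sku"]: item["quantity"] for item in json.load(f)}


def merge(prices: dict[str, float], stock: dict[str, int]) -> list[dict]:
    return [{"sku": sku, "price": price, "in_stock": stock.get(sku, 0)}
            for sku, price in prices.items()]
```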

Blasphemy! That can be done faster in C!

An age-old debate, recurring every time something new comes along. Assembly language came to replace machine code; systems programming languages came to replace assembly. Scripting languages are just another layer of abstraction, sacrificing some performance for more concise code that produces the same outcomes. The speed difference is hardly noticeable for most applications [4], and any significant inefficiency usually comes from choosing the wrong data structure or algorithm rather than from the language itself.

This means that developers can write code faster, leaving more time to fix bugs or build extra features. My point is this: at this point in time, software developer time is expensive while computer time is cheap. If you need extra power you can spin up hundreds of compute instances in the cloud at the click of a button for very little cost [5]. Hiring more developers just because the system is overcomplicated is a waste of resources better spent elsewhere.

A true rockstar programmer can code in C just as fast as he can in Python!

A rockstar programmer most likely could. The problem is that most software developers are not rockstars; they are average [6]. With the widespread availability of computers it is quite easy to start programming, and you can pick up the very basics over a weekend. On a scale from [copy-pasted ‘hello world’ from a tutorial website] to [can code up a general-purpose operating system in a week], most programmers are somewhere towards the middle.

Writing software in low-level languages, however, is hard. Your middle-of-the-road programmer may not have extensive experience with pointers or manual memory management, and if you don’t know what you’re doing it is easy to write code that is even less efficient than the equivalent in a scripting language. This is not to say that average software developers can’t learn and are doomed to mediocrity forever; it’s just that the project will take longer to complete.

Python, on the other hand, has a much gentler learning curve, and your average programmer will be able to contribute to your project within a short period of time. It is easy and fun, quick to prototype with, and quick to recover from failure and keep programming.

Why not just hire the best developers then?

If you want to get the cream of the crop, you’d better be prepared to pay enormous amounts of money or provide other good benefits to your employees. Whether you’re a medium-sized company or a startup in the process of creating the next big thing™, you have to ask yourself this: Why would a top software developer come to you instead of one of the giants (Google, Microsoft, Facebook, Amazon)? It all comes down to the attractiveness of your company, really.

So, should we just do everything in scripting languages?

Of course not. There are applications where systems languages reign supreme. Applications where every millisecond and every kilobyte counts and where precise control over memory is required (embedded systems with very small amounts of memory, operating systems programming) can only be written efficiently in systems-level languages.

For scientific simulations, it varies. As mentioned before, scripting languages are usually ‘the glue’ that joins other languages. C-backed libraries such as NumPy and SciPy achieve C-like performance while presenting a convenient Python interface. So scripting languages are not a silver bullet that can be applied to every problem, but they are good enough for most applications.
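
For instance, an inner loop written naively in pure Python can be replaced by a single NumPy call that runs in compiled C, while the surrounding glue stays in Python (the exact speed-up depends on the machine and the problem):

```python
# The heavy numeric work is delegated to NumPy's C implementation.
import numpy as np


def dot_pure_python(a, b) -> float:
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total


a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

fast = np.dot(a, b)                              # vectorised, compiled C
slow = dot_pure_python(a.tolist(), b.tolist())   # interpreted Python loop
assert np.isclose(fast, slow)
```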

Final verdict

In the end, should we use scripting languages to create large projects? As with any question in computer science, the answer is always the same: it depends. If you have a good team that understands the concepts well, the deadline is far enough away that you can afford to train less experienced developers, and the product really does need to run as fast as possible, then consider a systems language for your next large project. Otherwise, a scripting language may well suffice.

If you have anything to add or would like to propose a different perspective – leave a comment down below. 🙂

[Note: I have picked C and Python for most comparisons in this article, but a lot of the points apply to other languages as well]

  1. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=660187
  2. http://www.python.org/about/quotes/
  3. http://blip.tv/pycon-us-videos-2009-2010-2011/pycon-2011-how-dropbox-did-it-and-how-python-helped-4896698
  4. https://www.udemy.com/blog/python-vs-c/
  5. http://www.windowsazure.com/en-us/pricing/details/cloud-services/
  6. http://c2.com/cgi/wiki?LessAbleProgrammer

Functional programming in large-scale project development

If you keep up with blog posts from other programmers out there, you’ll probably be thinking: “Oh no, not another post from some functional-programming fanboy”. And you’re probably right! Many people have been writing about functional programming recently, but this just highlights the growing sense that there is actually something useful to be taken from this age-old paradigm that has never really taken off in industry (check out this xkcd comic [1]).

Instead of talking about why everyone should be using functional programming because it’s so awesome, I’ve split this post into sections highlighting parts of large-project development where functional programming may be of use, and possibly some cases where it’s BETTER THAN EVERYTHING. I’m also not going to discuss specific cases of bank A using functional language B, comparisons between functional languages, the strengths of lazy evaluation, or different kinds of typing. I’ll stay at a high level (a manager’s level, so to speak) and focus on factors influencing large-scale development.

A small aside: I do favour the pure functional language Haskell in this post. Firstly, it was one of my first true exposures to programming, and of course anyone who went to the University of Edinburgh as an undergraduate will forever remember Haskell as the language Philip Wadler was so incredibly enthusiastic about, complete with the memorable “stupid computer” voice used to explain how the processor steps through a computation. Secondly, it is a purely functional language, meaning you are never exposed to the temptation of doing things a different way, as you would be in a multi-paradigm language. You have to do things in a functional manner, with no chance of slipping back into an OOP style just because “it’s simpler to understand”.

Code size:

A lot of programming is about finding suitable data structures to hold the information the program needs to process. In an old interview [8], Bill Gates remarks that “The most important part of writing a program is designing the data structures”. Functional programming takes a different approach: use only a few highly optimised data structures, such as maps, sets and lists, and plug customisable higher-order functions into them as you see fit. Because these structures are standard, their operations are well defined and widely known. This not only reduces code size (you are not writing bespoke methods for manipulating your own data structures), but also makes the code easier to read: you instantly recognise what a piece of code does, because everywhere you look you see the same structures. It removes the need to understand a data structure in its entirety before diving into the program logic, which can only be a good thing.
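
The idea is easiest to see in Haskell’s standard containers, but even a Python sketch shows the shape of it: generic structures plus small higher-order functions, rather than bespoke data types with their own methods (the data below is invented):

```python
# Generic structures (list, dict, set) combined with higher-order functions.
from functools import reduce

orders = [("alice", 30.0), ("bob", 12.5), ("alice", 7.5)]


def add_order(totals: dict, order: tuple) -> dict:
    """Combining function handed to a fold; the structure itself stays generic."""
    name, amount = order
    return {**totals, name: totals.get(name, 0.0) + amount}


totals = reduce(add_order, orders, {})                            # {'alice': 37.5, 'bob': 12.5}
big_spenders = {name for name, t in totals.items() if t > 20.0}   # a plain set
```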

There’s also this notion of Haskell reducing your code size significantly (as much as 90%). Whether this is an exaggeration by a factor of two or an order of magnitude, smaller code size can equate to less to read and therefore less to understand. Of course, the complexity of the code is related to the time taken to understand it, and I do agree with the fact that to us mere mortals understanding things in a procedural manner (this happens, something changes, now test for equality) is simpler than trying to understand things in a mathematical way. But that’s another topic. One thing that’s generally accepted as true is that Haskell can reduce your code size dramatically.

Parallelizing:

The use of functional languages to effectively make programs scale to multiple cores is probably the main reason why Haskell and other functional languages have garnered so much attention in recent years. Functional languages work on immutable data structures, and with the shift from increasing a single core’s clock speed to multi-core processors (thanks to the power wall), a language that can automatically expose the code to parallelization is a huge bonus. Functions work on data structures that cannot alter their state after creation, which removes the need for processors to coordinate on changes to shared state. As mentioned in this blog post [16], Clojure is a great example where your code is parallelized without the programmer specifying anything or worrying about locks or other “nasties” that arise through parallel programming. Libraries have rewritten the map function so that it is automatically parallelized; every time a programmer uses map, they benefit from parallelization.
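The Haskell equivalent is a minimal sketch away (assuming the standard parallel package; the expensive function here is an invented stand-in for real work): switching from map to parMap is the only change needed to spread the evaluation across cores.

import Control.Parallel.Strategies (parMap, rdeepseq)

-- Some pure, CPU-heavy work; deliberately naive so there is something to parallelize.
expensive :: Int -> Int
expensive n = length [() | k <- [2 .. n], n `mod` k == 0]

-- Sequential version: plain map.
sumSequential :: [Int] -> Int
sumSequential xs = sum (map expensive xs)

-- Parallel version: identical shape, but parMap evaluates the results in
-- parallel. No locks and no shared mutable state to protect.
sumParallel :: [Int] -> Int
sumParallel xs = sum (parMap rdeepseq expensive xs)

main :: IO ()
main = print (sumParallel [100000 .. 100200])

Compiled with -threaded and run with +RTS -N, the parallel version uses all available cores; the purity of the mapped function is what makes this safe.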

Code reuse & legacy code:

John Carmack, ex-technical director at id Software, makes a great case [9] with a real-world example where a functional style of programming has made maintaining code significantly easier. Some parts of a build system relied heavily on state changes, with callbacks everywhere that altered the system’s state, and this caused pain on an almost weekly basis, whereas another piece of code, written in a purely functional way, did not. This stateless paradigm may be harder to design for, but when writing maintainable and future-proof code, the simple idea of input being transformed into output regardless of system state, with nothing outside the transformation being touched, reduces the number of places where errors and bugs can appear.
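As a purely illustrative sketch (my own, not Carmack’s code), a build step in this style is just a pure function from input to output, so it can be understood and tested without caring what the rest of the system is doing:

-- A build step as a pure transformation: same input, same output,
-- no callbacks and no hidden system state to keep in sync.
data Source = Source { srcPath :: FilePath, contents :: String }
data Object = Object { objPath :: FilePath, objBytes :: String }

compileStep :: Source -> Object
compileStep src = Object (srcPath src ++ ".o") (reverse (contents src))  -- placeholder "compilation"

-- A whole build is then just a map over the inputs.
build :: [Source] -> [Object]
build = map compileStep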

For languages like Java or C#, the level of reuse is at the framework level. As mentioned before, this requires a deep understanding of the data structures and of how the API was designed if you really want to know what is going on underneath. Functional languages offer reuse at a more granular level, taking the fundamental data structures and customizing how you use them through higher-order functions. If you’re using something that was implemented in a functional manner, you can usually make a safe guess about which standard data structures are being used underneath.

Structuring code at an architecture level:

So the first question that most people ask when they think of functional programming is: “how am I going to structure my code at a high level?” In reality, regardless of what language or paradigm you are using, the high level involves the same things: choosing classes or modules, deciding who calls what, splitting a big task into manageable chunks that can be represented with pseudo-code, and so on. Only when you get to the low level of actually writing the classes or modules do you get a real chance of thinking in a different manner (OOP vs FP). Architectural patterns like MVC, client-server and multi-tier architecture exist in the same sense with functional languages, as these patterns are less concerned with how individual snippets of code work and more with how the components of a system should be structured. Of course, people are familiar with doing it in an OOP way, but tradition should never be used to reason your way out of change (otherwise we’d still be burning witches).

Anyway, Haskell uses modules in which you can choose which functions to export, much like private and public methods in classes (Java, C#, C++) or header files (C, C++). Plus, OOP principles for clean code, like the single responsibility of a method, are easily achievable by structuring modules and functions so that they only focus on one thing, rather like high cohesion in individual classes.
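A hypothetical module (the names are invented for illustration) makes the parallel with private and public members concrete: only the names listed in the export list are visible to other modules.

-- Only the Report type, 'parse' and 'render' are visible to other modules;
-- the constructor, 'tokenize' and 'normalise' stay internal, like private members.
module Report (Report, parse, render) where

newtype Report = Report [String]

parse :: String -> Report
parse = Report . tokenize

render :: Report -> String
render (Report items) = unlines (map normalise items)

-- Internal helpers, not exported.
tokenize :: String -> [String]
tokenize = lines

normalise :: String -> String
normalise = dropWhile (== ' ')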

OOP languages shifting towards functional paradigms:

A lot of OOP languages have begun to incorporate functional programming concepts. Both Java 8 and C++11 have lambda expressions, and C# 3.0 introduced the LINQ component. This is a sign that, even in the OOP paradigm, there are enough cases where functional programming can be used as an aid to simplify code and make it more readable (especially in the case of LINQ). You can find a large number of examples here [18]. My favourites by far are the ones that reduce large for loops into single-line statements, which is a great example of reducing code size.

OOP patterns in functional languages:

Countless OOP patterns exist to help solve particular problems by providing a conceptual framework on which to design solutions. But some of these patterns become redundant in functional programming. The Command pattern [15] is a perfect example. It provides an interface with a single method: Execute. This looks suspiciously like a mathematical function, yet the OOP paradigm has to encapsulate it in a class. With functional programming, a programmer gets this pattern for free without even realizing it. Similar cases can be made for patterns like Observer [14] and Strategy [13].
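A hedged sketch of why the pattern dissolves (the command names are invented): in Haskell a “command” is just a first-class value, so queuing and invoking commands needs no interface or class at all.

-- A "command" is just an action; a macro or undo queue is just a list of them.
type Command = IO ()

saveDocument :: Command
saveDocument = putStrLn "Saving document..."

printDocument :: Command
printDocument = putStrLn "Printing document..."

-- The "invoker": run every queued command in order.
runAll :: [Command] -> IO ()
runAll = sequence_

main :: IO ()
main = runAll [saveDocument, printDocument, saveDocument]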

In C#, it used to be the norm to create delegates (function pointers) for certain types of events, but nowadays lambda expressions and built-in delegate types such as Func do the same thing without the programmer having to declare a separate, named method somewhere else in the code.

Additionally, here’s an interesting piece of information: Peter Norvig [17] found that 16 of the 23 patterns in the famous Design Patterns book are “invisible or simpler” when using Lisp. So are design patterns a sign that a programming language is failing to help you solve a problem, rather than an effective way of solving it?

Summary:

Okay, admittedly this post has been at a very high level. A lot of specifics about actual functional programming languages and their benefits have been left out to concentrate on a different message. Programming large projects is a time-consuming process, with many developers and managers interacting to ensure that the system is architected in the correct way, that it delivers all the functionality in the specification, and that it is maintainable for the future. Making a case to your boss or shareholders that you will abandon the familiar, working OOP paradigm in favour of something else has to make sense to everyone, whether they are technical or not. For tech-aware managers, points like parallelization, smaller code size and development time, system architecture, and proven patterns are what will keep them listening and eager for change. So to anyone reading this expecting a technical comparison, I apologize profusely and point you in the direction of these outstanding writings ([12], [8]).

References:

  1. http://xkcd.com/1312/
  2. http://programmers.stackexchange.com/questions/122437/how-to-organize-functional-programs
  3. http://stackoverflow.com/questions/2835801/why-hasnt-functional-programming-taken-over-yet
  4. http://www.techrepublic.com/blog/software-engineer/when-to-use-functional-programming-languages-and-techniques/
  5. http://www.ibm.com/developerworks/library/j-ft20/
  6. http://lorgonblog.wordpress.com/2008/09/22/how-does-functional-programming-affect-the-structure-of-your-code/
  7. http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf
  8. http://programmersatwork.wordpress.com/bill-gates-1986/
  9. http://functionaltalks.org/2013/08/26/john-carmack-thoughts-on-haskell/
  10. http://www.javaworld.com/article/2078610/java-concurrency/functional-programming–a-step-backward.html
  11. http://msdn.microsoft.com/en-us/library/dd293608.aspx
  12. http://www.defmacro.org/ramblings/fp.html
  13. http://en.wikipedia.org/wiki/Strategy_pattern
  14. http://en.wikipedia.org/wiki/Observer_pattern
  15. http://en.wikipedia.org/wiki/Command_pattern
  16. http://www.ibm.com/developerworks/library/j-ft10/
  17. http://www.norvig.com/design-patterns/
  18. http://igoro.com/archive/7-tricks-to-simplify-your-programs-with-linq/


BDD: TDD done right

Test Driven Development (TDD) is a widespread software development process that has proven successful in practice and is being adopted by more and more companies around the world. However, there are some dangers in applying TDD blindly. This blog post explores some common misconceptions about TDD and shows how they can be avoided using Behaviour Driven Development (BDD).

What is TDD?

Test Driven Development can be summarized as a procedure which follows these simple steps:


[Image: the TDD cycle, taken from http://code.tutsplus.com/tutorials/beginning-test-driven-development-in-python--net-30137]

  • Write an automated test case that fails for the new feature you want to implement
  • Run all tests and see that the new one fails
  • Write the code for the new feature
  • Run the tests, ensuring the new feature works correctly and that no other features were broken
  • Refactor – meet all coding standards and conventions that your team adopts and ensure the new functionality is in the right logical place in the project. Run the tests again to ensure nothing was broken during refactoring.
  • Repeat process with a new feature
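As a minimal sketch of one pass through this cycle (using Haskell’s HUnit purely for illustration; the slugify function is invented), you write the failing test first and only then the smallest implementation that makes it pass:

import Test.HUnit

-- Step 1: write the tests for the new feature first. They fail (red)
-- until 'slugify' below is actually implemented.
tests :: Test
tests = TestList
  [ "spaces become dashes" ~: slugify "hello world" ~?= "hello-world"
  , "already clean input"  ~: slugify "haskell"     ~?= "haskell"
  ]

-- Step 3: the simplest implementation that makes the tests pass (green).
slugify :: String -> String
slugify = map (\c -> if c == ' ' then '-' else c)

-- Steps 2, 4 and 5: run the whole suite before implementing, after
-- implementing, and again after refactoring.
main :: IO ()
main = runTestTT tests >>= print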

This very short development cycle allows the developer to concentrate on just one specific problem and the completion of the cycle signifies that the new feature works correctly. Furthermore, the automated testing facility ensures that if changes impact this particular feature in an incorrect manner, then the test suite will immediately notify the developer of the problem. One more advantage of TDD is that developers are asked to think in more detail about the eventual use of the feature, which lets them get a clearer picture of how it is supposed to act and where it might fail.
TDD has been reported to decrease the number of defects and to limit code complexity. This is due to the fact that features are implemented just in time, which decreases the chances of overbuilding the system by implementing unnecessary features.

Sounds good. So what’s wrong?

Nothing! At least not in the technical sense. However, there is much which can be done to improve the human element of the system.
One of the main problems with the methodology is the use of the term ‘test’. It automatically puts the developer in a validation mindset – and this is not the purpose of TDD. Instead, Test Driven Development should be a process that guides the design without overcomplicating things. This is where Behaviour Driven Development comes in.
In an attempt to escape the validation mindset, BDD redefines the vocabulary to make developers view the process as a tool for specification. Instead of ‘tests’, BDD teams concentrate on behaviour, and instead of ‘assertions’, they write expectations with names like ShouldEqualFive() or ShouldRaiseNullException(), which makes them easier to understand.
This simplification of the language makes the tests (it’s hard to get away from the word completely) not only a specification for the project, but also a kind of documentation and a validation tool that anyone (not only developers) can understand. This creates a ‘ubiquitous’ language for everyone in the development process – developers, stakeholders, managers and domain experts with specialist knowledge. The ease of communication that follows is very beneficial, as it helps ensure that the team is building the right software – since the stakeholders understand the specification, they can simply tell the developers when it is wrong. Even better, it is often possible to get a sense of which part of the project is most important next, reducing the time spent on features that are less important and likely to change.

Syntax

To describe the different behaviours, BDD uses a story template and a scenario template. The story template has the following syntax:
As a [X]
I want [Y]
so that [Z]
where X is the role requesting the feature Y which brings the benefit Z to X. Each story also has a title, so that it can be referred to by name.
Now these stories are associated with scenarios which go as follows:
Given some initial context (the givens),
When an event occurs,
then ensure some outcomes.
And I don’t even need to disambiguate what this syntax means. This simplicity allows everyone associated with the project to join in the design and specification phase and the scenarios that end up being written later become the tests that the developers use to create the product.
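To show how such a scenario might turn into an executable specification, here is a hedged sketch using Haskell’s hspec library, whose describe/it/shouldBe vocabulary mirrors the behavioural wording above (the account type and functions are invented for the example):

import Test.Hspec

-- Invented domain code for the example.
newtype Account = Account { balance :: Int } deriving (Eq, Show)

withdraw :: Int -> Account -> Either String Account
withdraw amount (Account b)
  | amount <= b = Right (Account (b - amount))
  | otherwise   = Left "insufficient funds"

-- Scenario: Given an account holding 100,
--           When the holder withdraws 30,
--           Then the balance should be 70.
main :: IO ()
main = hspec $
  describe "withdrawing from an account" $ do
    it "should reduce the balance by the amount withdrawn" $
      withdraw 30 (Account 100) `shouldBe` Right (Account 70)
    it "should refuse a withdrawal larger than the balance" $
      withdraw 500 (Account 100) `shouldBe` Left "insufficient funds"

The “should …” phrasing reads almost exactly like the scenario it came from, which is what lets non-developers follow along.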

BDD viewed as an Expert System?

I am totally new to Behaviour Driven Development (hence the question mark in the heading), but all of these contexts, events and outcomes bring me back to an AI course I took some time ago. More specifically, we (it was a two-person project, so I’ll stick with the “we”) were asked to implement a fitness-instructor expert system. An expert system is a computer system that emulates the decision-making ability of a human expert. Now, we knew close to nothing about the world of fitness instruction, and yet we were tasked with creating a product that, given some input, would come up with an appropriate exercise schedule. The key to the project was to go around real-life fitness experts, ask them about their viewpoints and, based on their answers, create a set of logical rules that described their knowledge. For example, if a person wanted to lose weight, they needed to do more cardio, but if they had back problems they were advised to avoid certain exercises. This set of rules highly resembled the scenarios described above.
Now, we knew what we needed to build, but we were lacking the building blocks. The next part of the project was to create an ontology. According to Wikipedia, an ontology formally represents knowledge as a set of concepts within a domain, using a shared vocabulary to denote the types, properties and interrelationships of those concepts. Sound familiar? Let me give you an extract from the Wikipedia article on OOP: An object contains encapsulated data and procedures grouped together to represent an entity. The ‘object interface’, how the object can be interacted with, is also defined. An object-oriented program is described by the interaction of these objects. As you can see, these definitions are really similar. The last part required us to just write up the rules in a formal system, which would interpret them and give us the output.
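A tiny, hedged sketch of how this might look if written down directly (the domain types are of course invented): the ontology becomes a set of data types and the expert’s rules become plain functions over them, much like scenarios over a shared vocabulary.

-- "Ontology": the shared vocabulary as plain data types.
data Goal      = LoseWeight | BuildMuscle         deriving (Eq, Show)
data Condition = BackProblems | NoCondition       deriving (Eq, Show)
data Exercise  = Cardio | HeavyLifting | Swimming deriving (Eq, Show)

data Person = Person { goal :: Goal, condition :: Condition }

-- "Rules": given a person's goal and condition, recommend exercises,
-- mirroring the given/when/then structure of a BDD scenario.
recommend :: Person -> [Exercise]
recommend p =
  let base = case goal p of
        LoseWeight  -> [Cardio, Swimming]
        BuildMuscle -> [HeavyLifting, Swimming]
  in case condition p of
       BackProblems -> filter (/= HeavyLifting) base
       NoCondition  -> base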
This is why I believe that the development of an expert system can be compared to BDD. An expert system is crafted to act as if it were a real human expert in the field, and since BDD mimics the mindset of the stakeholders, the final product becomes an expert in the field of the stakeholders’ requirements. The project is therefore more likely to behave according to their desires.

Conclusion

Behaviour Driven Development is, at its core, the same as Test Driven Development. However, it differs in the way people think about the process. BDD lets people work at a more general level and allows input from all involved parties. This makes it an outside-in specification and helps ensure that the final product meets most of the stakeholders’ requirements, thus increasing the quality of the project.

References and Further Reading

Introducing BDD – the original post by Dan North
Comparative Study of Test Driven Development with Traditional Techniques
Interview with Dan North on Behavior-Driven Development
From Test-Driven Development to Behavior-Driven Development

Too Slow! Government IT project lacks Agility

Engineering large-scale software successfully is an extremely difficult task; as such, the majority of projects result in failure [1]. Recent times have seen attempts to offset this obvious imbalance through the emergence of Agile software development. The popularity of Agile design methodologies is ever increasing, with more and more companies developing software using agile approaches such as Scrum and Crystal every day [2]. Despite this, the success rate of large-scale software engineering remains relatively low, and big project failures still crop up as news stories every so often [3]. One of the most notorious examples of a very large-scale agile development failure has been the UK government’s Universal Credit [4].

Universal Credit, that sounds boring!!

It is. Universal Credit is the UK government’s attempt to simplify the welfare system by replacing six of the key means-tested benefits (Jobseeker’s Allowance, Child Tax Credit etc.) with one payment [8]. The reform will require the support of a large-scale IT system. It is hard to tell exactly what will be required from this system – ironically, some areas of gov.uk were not accessible when I wrote this – but I believe it will include some kind of web support for calculating eligibility and payment amounts. In any case, the exact requirements of the software are not important for this article, which is concerned with the development process used to build it.

Why single out Universal Credit?

There have been other high-profile failures of large-scale government engineering projects, so why focus on Universal Credit? The theme of failure in government IT projects is all too familiar; they have a history of running over time and over budget. However, the government envisioned a different fate for Universal Credit through the use of agile development [5]. Unfortunately, the development of Universal Credit has induced feelings of déjà vu by sharing two key characteristics with some of its predecessors: huge cost and late deployment [4].

Is Agile to blame?

The failure of Universal Credit has been very public, and the reasons for it have been the subject of widespread debate. Agile development has been at the root of much of this debate; in fact, some people have claimed that Agile is at the root of the failure and should never have been used for this project [7], while others suggest that Agile is not to blame, as it was the victim of misuse and misunderstanding by the people involved with Universal Credit [6].

Both of these viewpoints certainly have merit; however, I feel the first is really just a manifestation of the second. It is clear from reading numerous articles about agile development and the failure of Universal Credit that from the start there was confusion about the concept of agile development. For example, despite the intention to use Agile development, the project was still constrained by elements of big design up front. This major contradiction of the principles of Agile development demonstrates an alarming lack of understanding. It appears as though, in the context of this project, the word “agile” was thrown around for the sole purpose of encouraging vendors to deliver the system as swiftly as possible. In other words, Agile should not be blamed for the failure of the project because the project was never Agile in the first place. The reality is that the government’s attempt at Agile development resulted in a kind of mutated waterfall model with accelerated implementation and testing phases.

Don’t trust the feds!!!

It could be said that no appropriate development methodology exists for these large-scale government projects. Certainly the continued failure of projects such as Universal Credit and the even more notorious NPfIT could lead one to this conclusion. The levels of bureaucracy and public scrutiny involved in the development of such systems separate them from large-scale private-sector projects. Perhaps the politics involved impacts the development process to such an extent that we cannot apply the usual mainstream software engineering practices to these systems. If this is the case, then there may be a need for someone to develop guidelines or a methodology for successfully delivering large-scale public-sector IT projects. However, I feel that the development of such guidelines is unlikely, as it would involve further governmental input, and their track record is not encouraging.

It is also possible that the failure of large-scale government IT programmes is caused by simple over-ambition. Both Universal Credit and NPfIT attempted to deliver systems that required the software to unify different approaches to problems on an enormous scale. In the case of Universal Credit, there was an attempt to replace the one-on-one sessions people had with counsellors to evaluate their welfare needs with a software system. The problem with this kind of computing is that it tries to impose too much structure on something that is inherently chaotic. In some cases people interacting with people is still very important, and in these situations we cannot expect software to successfully replace human interaction.

Political agendas could be at the root of the ambitious nature of government IT projects. Popularity with the general public is paramount to any government, regardless of party. One way in which governments can become and remain popular is to be seen making significant and positive changes. This can lead to the spawning of over-ambitious, large IT projects where a collection of smaller, more subtle changes would perhaps be more beneficial. In any case, it is clear that the current processes for developing government IT projects need to be reassessed if disasters such as Universal Credit are to be avoided in the future.

References
[1]http://www.zdnet.com/blog/projectfailures/study-68-percent-of-it-projects-fail/1175
[2]http://www.infoworld.com/d/application-development/whats-wrong-agile-development-culture-people-top-the-list-213480
[3]http://www.cio.co.uk/insight/strategy/why-nhs-national-programme-for-it-didnt-work/
[4]http://www.computerweekly.com/news/2240185166/Universal-Credit-will-cost-taxpayers-128bn
[5]http://www.computerweekly.com/news/2240104931/DWP-adds-agile-development-into-IT-contracts-for-2bn-Universal-Credit-system
[6]http://blog.unicom.co.uk/the-universal-credit-project-a-failure-of-waterfall-project-management-or-a-failure-of-agile-methods/
[7]http://www.computerweekly.com/news/2240187478/Why-agile-development-failed-for-Universal-Credit
[8]https://www.gov.uk/universal-credit/overview