Behaviour Driven Development: Bridging the gap between stakeholders and designers

Introduction

When a large system is being developed, no matter the domain it operates in, many things can go wrong, and different applications will be more sensitive to mistakes in different parts of the system. But the most common reason behind mistakes is a failure to communicate. On a smaller scale, this can occur within the developer team or between developers and testers. But the larger the project, the more people are involved and the more aspects of successful delivery they need to address. The tree swing metaphor is a very popular and elegant (and perhaps a bit exaggerated) way to demonstrate this. It might be impossible to ever achieve harmonious communication across all the teams and stakeholders involved, but there is a way to somewhat bridge the gap between the two main actors in a system’s development: the client and the engineering team.

The classic tree swing metaphor

The tester’s problem: what do I test?

In an ideal world, the process of delivering requirements seems fairly simple and straightforward. Using a generalized Agile framework as an example to simplify the discussion, the process is supposed to follow these steps. The client dedicates a team to creating user stories that should cover the product’s requirements and look a little something like this: “As an X, given that Y, I want to be able to do Z”. The program manager of the development team receives the user stories and derives more specific requirements (use cases) from each of them, such as “Enable user authentication” or “Implement functionality Z”. The program manager then gets together with the developers and testers at the beginning of the sprint and breaks the use cases down into very specific tasks like “Implement client-side authorization cookie” or “Write fuzz tests for user authentication”. Each task is assigned to a developer or tester as appropriate, and then everyone gets down to work. During the sprint the devs and testers mark their tasks as done, and at the beginning of the next sprint the process is repeated based on the work completed so far.

It all sounds nice and easy, right? Well, no. One of the first problems the engineering team will face is implementing the client’s user stories. Even though large corporate clients will usually have a team dedicated to defining the user stories properly, it is extremely common for some aspects to be left out.

The problem is that clients usually think in plain English, just like most users will when using the service. Engineers and testers, on the other hand, think in code (hopefully). Using a streaming service as a more specific example, what happens when the client writes the user story “as a service subscriber I want to click on the poster for movie X and have it launch in the same window”? When the engineering team gets down to creating tasks, the following questions can be raised when it comes to testing the story:

  • What if the user has another window open playing movie Y? Should they be able to stream multiple movies at once?
  • What if the user’s subscription does not allow them to watch the movie? Should they be simply shown a “we’re sorry” message? Or should they be guided to a page that will allow them to extend their subscription?
  • If the user clicks “open in new tab” should the browser just launch the video player or the same page with the player embedded to allow deep linking?
  • If the user had started watching the movie earlier and closed the browser midway through, should the movie start over or resume from where it was left off?

Questions like these might of course be covered by other user stories. But very often they’re not. Very often unanswered questions will block development, and the program manager will need to go back to the client and wait for an answer. Besides the lost time, this can cause frustration on both sides and harm the relationship with the client. The problem can also go the other way, when the engineering team attempts to explain its testing scenarios and the client does not understand them or does not see their value.

Wouldn’t it be grand if communication of this type could be simplified?

Enter BDD: a middle-ground framework

During my year as a Software Development Engineer in Test on a large development team, we worked on a v1.0 streaming service that was partly tailored to the first client’s requirements. Problems like the one described above were more than common. This was the first time the team was working with an agile methodology and releasing a system that would follow the brave new world of continuous delivery instead of the conventional develop -> ship -> support timeline, so when trouble like this appeared, it really set the schedule back. I would often receive a testing task, write the automation code and then publish it for review. A developer would then tell me I should also test case X, which I had left out. I would publish the second version of my review, and a different developer would come along and comment that this was not a requirement and might change soon, so I should remove the test. That would lead to a lengthy discussion that started in the review’s comment thread, continued offline in the office until it reached a deadlock, and finally had to be escalated to the PM. The rest is history. Until one day, a senior tester proposed adding BDD to our testing framework.

BDD is really just a fancier TDD framework, built to accommodate both the engineering and the client side. The difference is that the automated test methods are wrapped in a simple natural-language parser that can read test scenarios written in plain English. In the streaming service example, the client would be asked to write the story in the following format:

Given I am logged in to the service
When I click on a movie
Then the movie will open within the window

The engineering team then builds a framework in which the code looks for the Given – When – Then keywords, which trigger methods that set up the testing environment as described in Given, run the steps described in When, and make the assertions given in Then.
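In off-the-shelf implementations of this idea, such as Cucumber and SpecFlow (both listed in the sources below), scenarios like the one above are typically stored in plain-text .feature files. A minimal sketch, with the feature name and description invented for illustration:

Feature: Movie playback
  As a service subscriber
  I want to click on a movie poster
  So that the movie launches in the same window

Scenario: Launch a movie from its poster
  Given I am logged in to the service
  When I click on a movie
  Then the movie will open within the window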

This description of course doesn’t solve anything, since it has exactly the same level of ambiguity as the original story. But when the engineering team spots a constraint that should be added to the testing scenarios, they can extend the language definition and hand it back to the client. A more specific and correct version of the test scenario would be:

Given I am logged in to the service
And I have access to "all content"
And I am "not" currently streaming another movie
And I had started watching that movie earlier
When I "left-click" on "movie X"
Then the movie will open "within the window"
And the movie will play "from where I had left it off"

The And clauses simply run more setup methods before the test is run and eventually make more assertions once the automation has finished. The code behind them will look something like this:

// SpecFlow-style step definitions (see the sources below). ClientPage is a
// project-specific page object assumed to wrap the streaming client's UI.
using TechTalk.SpecFlow;

[Binding]
public class StreamingClientSteps
{
    private readonly ClientPage clientPage = new ClientPage();

    [Given(@"I am logged in to the service")]
    public void GivenLoggedInUser()
    {
        // ... log a test account in ...
    }

    [Given(@"I have access to ""(.*)""")]
    public void GivenAvailableContent(string content)
    {
        // ... set the account's subscription level ...
    }

    [Given(@"I am ""(.*)"" currently streaming another movie")]
    public void GivenCurrentlyStreaming(string streaming)
    {
        // ... start (or don't start) a second stream ...
    }

    [Given(@"I had started watching that movie earlier")]
    public void GivenPartiallyWatchedMovie()
    {
        // ... record some playback progress for the movie ...
    }

    [When(@"I ""(.*)"" on ""(.*)""")]
    public void WhenIClickOnMovie(string clickType, string movie)
    {
        clientPage.ClickMovie(clickType, movie);
    }

    [Then(@"the movie will open ""(.*)""")]
    public void AssertMovieOpened(string requestedWindow)
    {
        // ... assert against the player's window or tab ...
    }

    [Then(@"the movie will play ""(.*)""")]
    public void AssertMoviePlayedFrom(string time)
    {
        // ... assert against the playback position ...
    }
}

This is just a simplified example, and the step definitions might need to be more specific, but it shows how sentences can be reused to test different conditions.
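For instance, the bindings above could also cover the subscription question raised earlier, without a single new line of C#; the quoted values here are invented for illustration:

Given I am logged in to the service
And I have access to "trailers only"
And I am "not" currently streaming another movie
When I "left-click" on "movie X"
Then the movie will open "in a subscription upgrade page"

The step definitions simply receive different strings, so the same handful of methods can drive an arbitrary number of acceptance tests.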

Conclusion: Why it works

“So we took on a massive overhead to build this framework in order to run test methods that we would have run anyway, and the only difference is that they’re written in plain English. Why the hell did we do that?” a skeptical tester might ask. Introducing the framework is indeed a lengthy task: it needs to become a user story of its own and will take time from testers or developers until it is implemented. In the long term though, here are some of the benefits it brings.

  • It reduces the need for testers to take initiative. A tester simply implements a method each time a new condition, action or assertion is introduced, without wondering what the test should actually do. When an untested case appears, they simply suggest an implementation and continue with it if it is accepted.
  • It helps the client identify potential problems. Once they start receiving untested conditions back from the team, clients grow accustomed to writing acceptance tests that cover every possible aspect of a situation, along with the assertions that need to be made.
  • It pushes for better code standards. The conversion from plain English forces testers to write simple methods that do only one thing; that’s one of the basic criteria of “good code”, and one that testers often neglect since their code does not ship with the product.
  • The need for new code is gradually reduced. New conditions will of course keep arising, but once the main functionality has test coverage, hundreds of tests can be produced quickly by simply changing values in the Given – When – Then clauses (see the sketch after this list).

And last but not least:

  • It solves the communication problem. Not completely: there will always be some friction between engineers and stakeholders, and it can originate at many stages of the development cycle. But when it comes to acceptance testing, using a common language removes the need for translation in either direction and makes clear to both sides what needs to be done.
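To make the point about changing values concrete, Gherkin-style tools also support scenario outlines, which expand a table of values into one test per row, so covering a new combination costs a table line rather than new code. A minimal sketch, with invented rows:

Scenario Outline: Launching a movie
  Given I am logged in to the service
  And I have access to "<content>"
  And I am "<streaming>" currently streaming another movie
  When I "<click>" on "<movie>"
  Then the movie will open "<window>"

  Examples:
    | content     | streaming | click       | movie   | window            |
    | all content | not       | left-click  | movie X | within the window |
    | all content | not       | right-click | movie X | in a new tab      |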

Sources:

  • http://dannorth.net/introducing-bdd/
  • http://cukes.info/
  • https://github.com/cucumber/gherkin/wiki
  • http://www.specflow.org/
