The article https://blog.inf.ed.ac.uk/sapm/2014/02/11/do-you-have-testers/ argues that the role of the dedicated tester is outdated in modern software development, mainly because of its agile, continuous-delivery nature. Clemens Wolff, the article's author, begins by describing an "instant remedy" method for evaluating a software development team, "The Joel Test", as well as other articles by Joel Spolsky that explain why testing is valuable. He then argues against the need for testers, first by describing a personal experience that led him to this belief, and then by pointing out several developments in the evolution of software that, in his view, have rendered dedicated testing unnecessary.
Though Clemens's article makes a few good arguments about how the nature of testing has radically changed, I remain unconvinced that testing as a separate practice is dead and unnecessary. In this article I will try to refute that claim by examining some of the points Clemens makes and explaining why they do not necessarily lead to his conclusion. I will also briefly describe my own experience in the software industry as a tester, to demonstrate that testing can take many different forms and should not be dismissed wholesale.
The writer's personal experience seems to be a main factor in forming his opinion, and he gives a fairly good explanation of his team's methodology and how it was effective without dedicated testers. I will by no means claim that the Amazon Development Centre ships bad code, but I can't help asking: what code does it ship? No context is given about the product the team is working on, its intended users, or anything else for that matter, besides the team's methodologies for ensuring software quality. Whatever their domain, even if these methodologies were indeed "successful" by some acceptance metric, I have no reason to believe they would be successful in a different team.
Furthermore, some of the points made, in my opinion, actually support the Spolsky statements the writer is trying to refute. For example, one of the steps his team took to assure software quality was "Unit and integration tests written for any new feature". If the developers write these tests themselves, then Spolsky's point that "A team without dedicated testers wastes money" definitely holds within that team. Another of Spolsky's points is that programmers make bad testers, since they look at the code from the author's point of view, and trying to think of ways to break one's own code is difficult.
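Spolsky's observation about developer bias is easy to illustrate with a toy example of my own (the function and the inputs are hypothetical, not from either article). A developer tends to test the inputs she had in mind while writing the code; a tester hunts for the ones she didn't:

```python
def parse_price(text):
    """Convert a user-supplied price string like "12.50" into integer cents."""
    return int(round(float(text) * 100))

# The developer's test: exercises exactly the path the code was written for.
assert parse_price("12.50") == 1250

# A tester's tests: probe the inputs the author did not have in mind.
assert parse_price("-1.00") == -100   # a negative price is accepted silently - a bug?
for bad in ["$12.50", "twelve", ""]:  # these all crash with an unhandled ValueError
    try:
        parse_price(bad)
        raise AssertionError(f"expected ValueError for {bad!r}")
    except ValueError:
        pass
```

The developer's test passes and the feature "works", yet the code accepts nonsense and crashes on realistic input; finding that is precisely the different mindset Spolsky describes.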
QA vs Testing
A main weakness of the article is that it blends the jobs of QA and testing into one. Plenty of articles have been written arguing about the differences between the two, but they all agree on one thing: they are different. Quality assurance encapsulates all the processes involved in evaluating not just the product, but also the development and/or maintenance processes used by the team. These can include evaluations like "The Joel Test", documentation, team communication tools and, of course, testing. If the writer is arguing that testing should be viewed not as part of QA but as part of the development process, then I wholeheartedly agree. But that is not made clear in the article.
Testing is not just using a user interface until something doesn't work and then reporting it. Testing is, at its core, software development. The difference is that the software developed by testers serves a very different purpose from the software developed by the developers. The test code is itself a whole different project (usually a separate codebase as well) used to exercise parts of the product code. A tester is required to approach a problem in a very different way, which is why it is better to have testers as a separate unit: the developers don't have to worry about it. This makes the developer's job easier, and ensures that the testing will be superior to that done by a combined developer/tester.
Testing in an agile environment
Another point the author makes is that the new agile world has reduced the need for testers because of its continuous-delivery nature. It's true that web-based services make A/B user testing much easier than the old "software on a disc" paradigm did, thanks to the ease of releasing different versions to small slices of the user base. But user-acceptance tests are only a small part of testing. A service's user will usually neither try to breach a new feature's security nor anticipate that a certain input value might surface an edge case. A user will simply expect the service to work, and perhaps give feedback on how it could work better.
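Releasing a change to a small slice of the user base, as the author describes, is typically done by hashing a stable user identifier into a bucket. A minimal sketch of how such a rollout check might look (the names and the 5% threshold are my own illustration, not anything from the article):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically decide whether a user sees a feature's new version.

    Hashing the user id together with the feature name gives each feature an
    independent but stable split: the same user always gets the same variant.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000      # bucket in [0, 9999]
    return bucket < percent * 100             # e.g. percent=5.0 -> buckets 0-499

# Roughly 5% of a 10,000-user sample should land in a 5% rollout.
enabled = sum(in_rollout(f"user-{i}", "new-checkout", 5.0) for i in range(10000))
```

Note what this machinery measures: whether users in the 5% slice behave differently from the rest. It says nothing about security holes or edge-case inputs, which is exactly why it cannot replace deliberate testing.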
Furthermore, to use a service's user base for insightful user testing, the service first needs to establish a user base. Until that happens, there is no way of knowing whether the 1.0 version of a web service will work adequately without a testing team.
My personal experience
I spent a year working for Microsoft Mediaroom as a Software Development Engineer in Test. Just as with the Amazon Development Centre, most people would expect the team to function in a high-quality software development environment. During my internship, I worked on the delivery of a 1.0 web service that would, for the first time, bring Mediaroom's capabilities to the cloud. We followed an agile methodology with small scrum teams (10-12 people), and the ratio of testers to developers was usually one to one. Here are a few reasons why testing was necessary in our team and why it did not hinder the agile development process.
- Testers and developers worked closely. Though it was clear who was a tester and who was a developer, the whole team participated in all decision making. Testers reviewed code written by developers and vice versa. If a tester found a bug, it was not necessarily a process of filing it and waiting for a fix; often a solution could be found with a simple face-to-face chat. This ensured that each side had good knowledge of what the other was doing.
- The client's requirements. The product (or service) was B2B. Selling a service to another business rather than to end customers usually means stricter acceptance constraints. It might be acceptable for a single end user to find a bug that the developer can fix directly, but the last thing our team wanted was to force a client to admit that a fault in their service was due not to their own processes but to the service they had acquired from another company.
- Testing did not block development. Writing tests for a new feature was considered a separate task from developing the feature. If the developer finished the product code near the end of a sprint cycle, the testing task would simply be carried over to the next sprint, and the developer could start working on a different feature while the tester took care of the previous one.
- The product's domain. The service was a VOD CMS that needed to handle the whole pipeline of uploading content to the cloud, encoding it in multiple formats, and distributing it to many different software clients (mobile, tablet, web, etc.). Because of this end-to-end nature, the product required a wide variety of automated testing: load testing, API testing, component testing, user-interface testing (for both the content manager and the end user), and so on. Creating automated testbeds for all these processes is highly complicated and needs a design of its own. Developers should not have to worry about how the testing will be designed, only about how their code will pass the tests.
- No user base. Since this was a v1.0 product, there were no users available for user testing. Some clients would give feedback on product features, but it was impossible to know the overall reaction until the product was finally released. Furthermore, since Microsoft would not ultimately be handling the end user's experience, end-user testing could never be fully effective from our team's point of view.
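Several of the test categories listed above (API, component, load) share the same automated shape: drive the system through its public interface and assert on the response, individually for a functional test and in aggregate for a load test. A tiny illustrative harness against a stand-in upload function; this is entirely hypothetical, not Mediaroom code:

```python
import time

def upload_asset(title: str) -> dict:
    """Stand-in for a VOD CMS upload endpoint; the real thing would be an HTTP API."""
    if not title:
        return {"status": 400, "error": "title required"}
    return {"status": 200, "id": f"asset-{abs(hash(title)) % 100000}"}

# API test: exercise the public contract, not the internals.
assert upload_asset("trailer.mp4")["status"] == 200
assert upload_asset("")["status"] == 400

# Load test: many requests, asserting on an aggregate service-level target.
start = time.perf_counter()
results = [upload_asset(f"clip-{i}.mp4") for i in range(1000)]
elapsed = time.perf_counter() - start
assert all(r["status"] == 200 for r in results)
assert elapsed < 5.0  # a deliberately generous budget; real load tests track latency percentiles
```

Designing and maintaining harnesses like this across an end-to-end pipeline is a project in itself, which is the point: it is work a dedicated tester can own without slowing feature development.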
"Testing is dead" is a statement that should not be thrown around lightly. "Testing has changed" is of course true, just as every other aspect of software development has changed. New agile methodologies have changed the nature of testing, but if anything they have made it more interesting and more complex. How a service should be tested depends on many factors: the user group, the team's capability and, of course, the product itself. There are certainly services which, by their nature, can be made available with a few bugs that pose no security or other serious risks. But there will always be cases where software needs to be heavily tested before it is made public, and the best people to do that are professionals who have dedicated their time solely to improving testing technologies and methods.
- "Do you have testers?" – https://blog.inf.ed.ac.uk/sapm/2014/02/11/do-you-have-testers/
- Joel Spolsky’s articles:
“The Joel Test” – http://www.joelonsoftware.com/articles/fog0000000043.html
“Why Testers?” – http://www.joelonsoftware.com/items/2010/01/26.html
“Top Five (Wrong) Reasons You Don’t Have Testers” – http://www.joelonsoftware.com/articles/fog0000000067.html
- On QA vs Testing