Introduction
This post is a response to s1368322’s post “Why Testing Is Not Search Of Bugs”. The post in question treats testers as a separate entity from developers, and this response is written on that basis.
In their post, the author makes the case that the main purpose of testing is to miss as few bugs as possible. Whilst I can agree with this general point, I would adjust it slightly: the main purpose of testing is to miss no bugs, and beyond that, testing should also ensure that the developed software meets its requirements and provide a level of assurance to the client or user.
However, I believe that one of the premises the post is built on, that the testing approach differs depending on the size of the project, is wrong.
Small-scale vs. Large-scale – is there really a difference?
The author makes the case that the testing procedures and objectives differ between ‘large-scale’ and ‘small-scale’ projects. This is an argument I cannot agree with. For any development with a formal testing phase, the same procedures should take place with the same ultimate objectives. In both cases, the testing phase and its test cases should be designed to cover all available functionality within the system. The ultimate aim of any testing phase is to deliver a development that is as defect-free and reliable as possible.
Small-scale
The author makes the assertion, for small-scale projects, that the “vast majority of testers are convinced that testing” is a “search of bugs”. Again, I would argue that this is not necessarily the case. It may indeed be true of developers carrying out unit testing. However, I believe that if dedicated testers are used at the system/acceptance testing phase, then most of these testers (or prospective users) are aware that their main role is to ensure that the released software meets its requirements and works as expected.
The author goes on to argue that testers like to find defects as it is a “visual representation of work done by them”. I would agree that there is a sense of satisfaction, on the part of the tester, in finding a defect, particularly an important one. However, if a development were to pass through testing without a single bug being found, most testers would not see this as a failure on their part or an indicator that they had not been working effectively. As long as no defects passed through the testing phase unidentified, I believe that the majority of testers would see this as a successful test phase. It is possible for test phases to be considered outstanding successes with very few defects identified. I would argue that the success of a testing phase is more accurately measured in terms of code and functionality coverage and the proportion of the planned test cases completed within the phase.
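To illustrate how that kind of success might be measured, here is a minimal sketch, assuming a simple model of test cases and requirements of my own invention (the names and structures are not the author’s); it summarises a test phase by completion and coverage rather than by the number of defects found.

```python
from dataclasses import dataclass

@dataclass
class ExecutedCase:
    """One test case as recorded at the end of the phase (illustrative structure)."""
    case_id: str
    requirement: str   # requirement or piece of functionality the case covers
    executed: bool
    passed: bool

def phase_summary(cases: list[ExecutedCase], requirements: set[str]) -> dict:
    """Summarise a test phase by completion and coverage rather than defect count."""
    executed = [c for c in cases if c.executed]
    covered = {c.requirement for c in executed}
    return {
        "completion": len(executed) / len(cases) if cases else 0.0,
        "requirement_coverage": len(covered & requirements) / len(requirements),
        "defects_found": sum(1 for c in executed if not c.passed),
    }

# A phase that executed every planned case and covered every requirement scores
# well here even if it found no defects at all.
cases = [
    ExecutedCase("TC-01", "REQ-1", executed=True, passed=True),
    ExecutedCase("TC-02", "REQ-2", executed=True, passed=True),
    ExecutedCase("TC-03", "REQ-3", executed=True, passed=True),
]
print(phase_summary(cases, {"REQ-1", "REQ-2", "REQ-3"}))
```

On that view, a zero-defect phase with full coverage is a good outcome, not a sign of idle testers.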
The author then goes on to suggest that the ‘least stable’ parts of a development would be tested first. Again, unfortunately, I disagree. When a development is passed to the testing phase, there should be no parts of the system considered less stable than others, particularly on smaller projects. It may be true that certain areas of the test cases are prioritised, but these would tend to be prioritised by importance of functionality or complexity of code rather than by which areas are less stable (i.e., risk-based testing).
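As a rough sketch of that kind of prioritisation, using a simple scoring scheme of my own (importance multiplied by complexity, neither taken from the original post), ordering planned test cases might look something like this:

```python
from dataclasses import dataclass

@dataclass
class PlannedCase:
    """A planned test case scored for prioritisation (scales are illustrative)."""
    name: str
    importance: int   # business importance of the functionality, 1 (low) to 5 (high)
    complexity: int   # complexity of the underlying code, 1 (low) to 5 (high)

def prioritise(cases: list[PlannedCase]) -> list[PlannedCase]:
    # Order by a simple risk score: importance multiplied by complexity.
    return sorted(cases, key=lambda c: c.importance * c.complexity, reverse=True)

backlog = [
    PlannedCase("export report", importance=2, complexity=2),
    PlannedCase("tax calculation", importance=5, complexity=4),
    PlannedCase("user login", importance=5, complexity=2),
]
for case in prioritise(backlog):
    print(case.name)   # tax calculation, user login, export report
```

Nothing in this ordering depends on any part of the code being ‘less stable’; it is driven by importance and complexity alone.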
Large-scale
When the author turns to testing in large-scale projects, I find more to agree with. In fact, some of the assertions echo the arguments made above for small-scale developments, reinforcing the belief that ‘testing is testing’ irrespective of how large or small a development is.
However, I disagree with the statement “Anyway, there can be a problem with existing functionality, and in most cases, this functionality is not tested properly”. Whilst it would be foolish to suggest that the existing software being integrated into will always be tested fully, in most projects that I have worked on or been aware of, there was, when the need arose, a dedicated period of regression testing to ensure that no defects had been introduced into the pre-existing functionality.
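One common way to make such a regression period repeatable is to tag the tests that cover pre-existing functionality and run them as a dedicated suite. The pytest-style sketch below is illustrative only; the marker name and the application functions are assumptions of mine, not anything from the original post.

```python
import pytest

# Hypothetical application code standing in for pre-existing functionality.
def calculate_invoice_total(line_amounts):
    return sum(line_amounts)

# Hypothetical new functionality added by the current development.
def apply_discount(total, rate):
    return round(total * (1 - rate), 2)

# Pre-existing behaviour is tagged so a dedicated regression run is easy,
# e.g. `pytest -m regression` (register the marker in pytest.ini to avoid warnings).
@pytest.mark.regression
def test_existing_invoice_total_unchanged():
    assert calculate_invoice_total([10.0, 20.0]) == 30.0

# The new functionality is covered by its own tests, run as usual.
def test_new_discount_feature():
    assert apply_discount(100.0, 0.1) == 90.0
```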
The author also states that most of the testing will be completed on parameters that can be expected in normal use. This is a reasonable statement. However, the claim is made that testing non-standard scenarios before all basic features are completed can be inefficient and waste time. This can be true, but only within the testing of an individual component. As an example, if a financial software package were being tested, whilst non-standard inputs for a tax calculation may not be tested first, it would make sense to test them whilst testing that tax calculation rather than moving on to another component and coming back to it once everything else had been completed.
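To make the example concrete, a hypothetical tax calculation might be exercised with its standard and non-standard inputs in the same suite, rather than deferring the edge cases until every other component has been covered. The function and values below are invented purely for illustration.

```python
import pytest

# A hypothetical tax calculation standing in for the financial example above.
def calculate_tax(amount: float, rate: float) -> float:
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(amount * rate, 2)

# Standard inputs: the values expected in normal use.
@pytest.mark.parametrize("amount,rate,expected", [
    (100.00, 0.20, 20.00),
    (250.00, 0.05, 12.50),
])
def test_tax_standard_inputs(amount, rate, expected):
    assert calculate_tax(amount, rate) == expected

# Non-standard inputs: tested alongside the standard ones for the same component,
# rather than deferred until everything else has been completed.
@pytest.mark.parametrize("amount,rate", [(-1.00, 0.20), (100.00, 1.50)])
def test_tax_rejects_invalid_inputs(amount, rate):
    with pytest.raises(ValueError):
        calculate_tax(amount, rate)
```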
Changing the approach?
In the final section, on how to change the approach, there is much to agree with. An effective test phase depends on well-specified and well-defined test cases which cover all possible aspects of the functionality and reflect an understanding of how the software will be used after release. Documenting test cases is also important for communication within the team, with the developers and management, and as reassurance for the client. It is also important to reflect on the test phase and learn from any mistakes or omissions.
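As a small illustration of what a well-specified, documented test case might capture, here is a sketch using a plain data structure; the fields and identifiers are assumptions on my part rather than anything prescribed by the original post.

```python
from dataclasses import dataclass

@dataclass
class TestCaseSpec:
    """A documented test case that can be shared with developers, management and the client."""
    case_id: str
    requirement: str
    preconditions: list[str]
    steps: list[str]
    expected_result: str

def to_report(spec: TestCaseSpec) -> str:
    """Render the specification as plain text for inclusion in test documentation."""
    lines = [f"{spec.case_id} (covers {spec.requirement})"]
    lines += [f"  Precondition: {p}" for p in spec.preconditions]
    lines += [f"  Step {i}: {s}" for i, s in enumerate(spec.steps, start=1)]
    lines.append(f"  Expected: {spec.expected_result}")
    return "\n".join(lines)

spec = TestCaseSpec(
    case_id="TC-042",
    requirement="REQ-7: standard-rate tax is applied to taxable items",
    preconditions=["A taxable item priced at 100.00 exists in the basket"],
    steps=["Open the basket", "Proceed to checkout"],
    expected_result="Tax of 20.00 is shown on the order summary",
)
print(to_report(spec))
```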
Conclusion
As discussed, my greatest disagreement with this post is its assertion that there is a fundamental difference between testing large-scale and small-scale projects, and in particular its characterisation of the testing objectives and procedures for small-scale projects. The author is right about the ways in which the testing approach needs to change, but I would argue that the proposed practices are already in use in most competent test teams.