Exploratory testing (ET): an approach to software testing in which test design and test execution are carried out simultaneously, with the tester learning about the system as testing proceeds.
Grounded theory: a research method in which new theory is discovered through systematic analysis of data.
Verification & Validation:
Validation means checking whether the product meets the customer's needs; verification means checking whether the product complies with its specification.
Test oracle: a mechanism used to distinguish correct from incorrect results during the process of software testing.
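As an illustrative sketch (not from the paper under review), a test oracle can be as simple as a function that decides whether an observed output is correct; the `sort_oracle` name and the sorting example are hypothetical:

```python
def sort_oracle(test_input, observed_output):
    """Test oracle for a sorting routine: the observed output is
    correct exactly when it equals the input sorted by a trusted
    reference computation (Python's built-in sorted)."""
    return observed_output == sorted(test_input)

print(sort_oracle([3, 1, 2], [1, 2, 3]))  # True: correct result
print(sort_oracle([3, 1, 2], [1, 3, 2]))  # False: a failure is revealed
```

In the setting studied by the paper, the testers' personal knowledge played this oracle role instead of an executable reference.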
Paper: Juha Itkonen, Mika V. Mäntylä, and Casper Lassenius, "The Role of the Tester's Knowledge in Exploratory Software Testing," IEEE Transactions on Software Engineering, 11 Sept. 2012, IEEE Computer Society.
PREVIOUS WORK:
Previous work shows that exploratory testing (ET) is widely used in the software industry, and there is growing evidence that industry testers see value in it. This increasing interest among practitioners raises several research questions in this domain:
How does exploratory testing work, and why is it used?
What types of knowledge are used in exploratory testing?
How do testers apply this knowledge for testing purposes?
What kinds of failures does exploratory testing detect?
A study was conducted in an industrial setting in which 12 testing sessions were video-recorded. The participating testers were asked to think aloud while performing functional testing, and the researcher occasionally asked them to clarify their testing process. Each session was followed by a 30-minute interview to discuss the results.
Grounded theory was applied to determine what the testers were thinking and what types of knowledge they were using.
The paper discusses how the testers detected failures using their personal knowledge, without written test case descriptions.
The knowledge used in the process was classified as domain knowledge, system knowledge, and general software engineering knowledge. The testers used their knowledge both as a test oracle to verify the correctness of results and as a guide in selecting objects for test design.
A large number of failures, termed windfall failures, were found outside the focus area of testing through exploratory investigation.
The paper concludes that the approach used by exploratory testers clearly differs from the test-case-based paradigm.
These experiments yielded a number of results, as follows:
* The testers spotted errors based on their personal experience and knowledge, without writing test case descriptions.
* Personal knowledge was characterised as a combination of system knowledge, domain knowledge, and general software engineering knowledge.
* The testers applied this knowledge directly in their testing, both to guide test design and to judge the correctness of results.
* Most of the failures found in the test process were found incidentally, i.e. outside the feature area currently being tested.
* Failures were classified by the inputs or conditions that interact with them.
* Failures related to domain knowledge were straightforward to provoke.
* Failures related to system and general software engineering knowledge were difficult to provoke.
POINTS OF AGREEMENT:
I agree that this research took a considerable amount of time, was carried out in collaboration with industry, and involved only experienced practitioners, so I find the results credible.
The research draws on a substantial number of sessions (12) before reaching conclusions, which also supports the results produced.
About 20% of the failures found were windfall failures, i.e. found incidentally through the testers' knowledge. I agree that exploratory testing helps identify failures in the code and provides a new approach to finding defects.
Around 45% of the inputs or conditions used in the process were found to provoke failures. I accept this figure because a considerable amount of time and effort went into producing the results.
POINTS OF DISAGREEMENT:
* Although the experiment found that exploratory testers were effective at finding defects or failures through their knowledge, the term 'knowledge used in testing' needs to be defined more clearly, which poses a serious threat to the testing domain. The important open questions are: can only experts perform this type of testing?
Is effective testing possible only when the tester has prior work experience?
The process of acquiring this knowledge, in particular how a novice could attain it, is not clearly described.
* The term 'knowledge', and the process of attaining it, are not clearly defined by the researchers, so I still wonder whether this approach to testing would be technically feasible to implement.
* Moreover, exploratory testing cannot be used in industry as a replacement for existing testing approaches, as it would be very costly and requires experts with many years of experience.
* The experimental observations show that nearly 20% of defects or failures were found incidentally, which is taken as proof that exploratory testing is useful. My argument is that no testing approach other than exploratory testing was used in the experiment, so with expert testers it is unsurprising that 20% of the defects were found this way. Only a comparative study of exploratory testing against another testing approach could establish a clearly superior method.
It should be noted that although exploratory testers have more knowledge, serious questions remain over whether exploratory testing is sufficient to function independently as a testing method, owing to its costly requirements for experts, its time frame, and so on.
The research was conducted with experts under controlled conditions and produced the intended results, but it fails to answer how the knowledge of exploratory testers should be defined and how that knowledge can be gained.
Further research could compare exploratory testing with prevalent testing methods to identify its relative effectiveness.