This is a response to the article "(Re)Estimation: A brave man's task" (Panos Stratis, January 27, 2014), written by Eshwar Ariyanathan (S1340691).

Below are my points of agreement and disagreement with the article.
Points of Agreement:
* I agree that a project manager has to make decisions that abide by company policy and the directions of upper management.
* I agree that adding new programmers late in the software lifecycle increases complexity and overall time.
* I agree that project managers should be realistic in estimating the timeframe for the work to be done and in meeting deadlines.
* I agree that the project manager plays a crucial role in resource allocation, scheduling, and completion of the project.
Points of Disagreement:
* I disagree with the claim that involving more programmers in a task makes the total time increase exponentially and that they never find common ground. In my opinion, when more programmers work on a task, the work gets split and it is easier to finish on time. New ideas can emerge from any programmer on the team, which might help finish tasks more quickly. For example, consider an application with 100 functions and 10 programmers: each programmer has to implement only 10 functions, so the work is divided and proceeds quickly.
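The trade-off being debated here can be made concrete with a toy model (my own illustration, with made-up numbers, not from either article): coding time shrinks as work is split, but communication cost grows with the number of pairwise channels between programmers.

```python
# Toy model (illustrative only; all constants are made up).
# Total time = coding time (work split evenly among programmers)
# + communication overhead, which grows with the n*(n-1)/2
# pairwise channels among n programmers.

def total_time(functions, programmers, overhead_per_pair=0.1):
    coding = functions / programmers                # e.g. 100 functions, 10 people
    channels = programmers * (programmers - 1) / 2  # pairwise communication paths
    return coding + overhead_per_pair * channels

for n in (1, 5, 10, 25, 50):
    print(n, "programmers:", round(total_time(100, n), 1), "days")
```

With these invented constants, splitting the work dominates for small teams (which supports the point above), while beyond roughly 10 programmers the communication overhead outweighs the benefit (which is the effect the original article appeals to).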
* If programmers are working towards a common goal, then any argument raised will be for the betterment of the project rather than a source of ambiguity. Arguments raised during project discussions should therefore be viewed as a positive strategy, not as a time-wasting process.

The Role of the Tester's Knowledge in Exploratory Software Testing, reviewed by Eshwar Ariyanathan (s1340691)

Key Terms:

Exploratory testing (ET): an approach to software testing in which test design and test execution are carried out simultaneously, with the tester learning as testing proceeds.

Grounded Theory: a research method in which new theory is discovered through systematic analysis of data.

Verification & Validation:

Validation means checking whether the product meets the customer's needs; verification means checking whether the product complies with its specification.

Test Oracle: a mechanism used to distinguish correct from incorrect results during software testing.
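As a minimal sketch of the idea (my own example, not from the paper), an oracle can be any trusted, independent way to judge an output; here a reference implementation is used to check a hand-rolled sort:

```python
# Minimal sketch of a test oracle (illustrative example, not from the paper).
# The oracle is an independent, trusted way to decide whether an output is
# correct -- here, Python's built-in sorted() checks a hand-rolled sort.

def my_sort(items):
    # implementation under test (simple insertion sort)
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

def oracle_says_correct(inputs, output):
    # the oracle: compare the output against a trusted reference
    return output == sorted(inputs)

data = [3, 1, 2]
print(oracle_says_correct(data, my_sort(data)))  # True if my_sort is correct
```

The paper's point is that exploratory testers play this oracle role themselves, using domain and system knowledge instead of a written reference.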

Paper: Juha Itkonen, Mika V. Mantyla, and Casper Lassenius, "The Role of the Tester's Knowledge in Exploratory Software Testing," IEEE Transactions on Software Engineering, 11 Sept. 2012, IEEE Computer Society.

PREVIOUS WORK:

Previous work shows that exploratory testing (ET) is widely used in the software industry, and there is growing evidence that industry testers see value in it. This increasing interest among testers paves the way for research questions in this domain.

RESEARCH QUESTIONS:

How does exploratory testing work in practice, and why is it used?

What types of knowledge are being used in exploratory testing?

How do the testers apply the knowledge for testing purposes?

What types of failures are detected by exploratory testing?

RESEARCH WORK:

A study was conducted in an industrial setting in which 12 testing sessions were video-recorded. The participating testers were asked to think aloud while performing functional testing, and the researcher occasionally asked them for clarification about the testing process. After each testing session there was a 30-minute interview to discuss the results.

Grounded theory was applied to find out what the testers thought and what types of knowledge were being utilised.

The paper discusses how the testers found failures using their personal knowledge, without writing test case descriptions.

The knowledge used in the process was classified as domain knowledge, system knowledge, and general software engineering knowledge. It was found that the testers used their knowledge as a test oracle to verify the correctness of results, and as a guide in selecting objects for test design.

A large number of failures, called windfall failures, were found outside the focus area of testing through exploratory investigation.

The paper concludes that the approach used by exploratory testers clearly differs from the test-case-based paradigm.

RESULTS:

A number of results emerged from this set of experiments:

* The testers spotted errors in the code based on their personal experience and knowledge, without writing test case descriptions.

* Personal knowledge was characterised as the combination of system knowledge, domain knowledge, and general software engineering knowledge.

* The experiments showed that the testers actively applied this knowledge during testing.

* The failures found in the test process were mostly found incidentally (i.e., most failures were found outside the focus area being tested).

* Failures were classified by the inputs or conditions that interact with each failure.

* Failures related to domain knowledge were straightforward to provoke.

* Failures related to system knowledge and software engineering knowledge were difficult to provoke.

POINTS OF AGREEMENT:

* I agree that this research work took a considerable amount of time, was carried out in industrial collaboration, and employed only experts; so I agree with the results.

* The research used a good number of sessions (12) before drawing conclusions, so I agree with the results produced.

* 20% of the failures found were windfall failures, i.e., failures found incidentally through the testers' knowledge. I agree that exploratory testing helps identify failures in the code and provides a new approach to finding defects.

* 45% of the inputs or conditions used in the process were found to create failures. I accept this figure because considerable time and effort went into producing it.

POINTS OF DISAGREEMENT:

* Although the experiment found that exploratory testers were effective at finding defects and failures by means of their knowledge, a serious concern remains: the term 'knowledge used in testing' needs to be defined more clearly. Important open questions are: Is it possible for only experts to do this type of testing? Is effective testing possible only if the tester has previous work experience? The process of acquiring this knowledge, and how a novice could attain it, is not clearly described.

* Since the term 'knowledge' and the process of attaining it are not clearly defined by the researchers, I still wonder whether this approach to testing would be technically feasible to implement.

* Moreover, exploratory testing cannot be used in industry as a replacement for existing testing approaches, as it would be very costly and requires experts with many years of experience.

* From the experimental observations, nearly 20% of defects or failures were found incidentally, which is taken as proof that exploratory testing is useful. My argument is that no testing approach other than exploratory testing was used in the experiment, so naturally, with expert testers, 20% of defects were found this way. If exploratory testing were compared against another testing approach, we could have a clear-cut answer as to which method is superior.

* It should be noted that although exploratory testers have more knowledge, whether exploratory testing is sufficient to function independently as a method of testing, given its costly requirements of experts, time, and so on, remains a serious question.

CONCLUSION:

The research was conducted with experts and in a controlled manner to produce the necessary results, but it failed to answer the questions of defining the knowledge of exploratory testers and of how that knowledge can be gained.

Further research could involve a comparative study of exploratory testing against prevalent testing methods to identify its effectiveness.


Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development, reviewed by Eshwar Ariyanathan (s1340691)

Introduction to key terms :

1) Test-Driven Development (TDD): an agile software development methodology in which test cases are written before the actual code. This enables the developer to refactor the code and find bugs at an early stage of the development process.
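The test-first cycle can be illustrated with a small sketch (my own example in Python; the reviewed experiment used Java and JUnit, and the keyword rule below is invented for illustration):

```python
# Sketch of the TDD cycle (illustrative; the reviewed experiment used Java/JUnit).
# Step 1: write the test first -- running it fails while is_spam() is missing.
def test_spam_filter():
    assert is_spam("FREE MONEY inside!!!")
    assert not is_spam("Minutes from Monday's meeting")

# Step 2: write just enough code to make the test pass
# (the keyword list is a made-up example, not from the paper).
def is_spam(subject):
    keywords = ("free money", "winner")
    return any(k in subject.lower() for k in keywords)

# Step 3: run the test; with it green, the code can now be refactored safely.
test_spam_filter()
print("test passed")
```

The point of the discipline is step 3: once the test passes, it stays in place as a safety net for every later refactoring.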

2) Code Inspection (CI): a software inspection process in which experts analyse the source code to look for bugs or errors; the developers then rework the source code to correct it.

Authors:

J.W. Wilkerson (Sam & Irene Black Sch. of Bus., Pennsylvania State Univ., Erie, PA, USA), J.F. Nunamaker Jr., and R. Mercer, "Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development," IEEE Transactions on Software Engineering, vol. 38, no. 3, pp. 547-560, May-June 2012, doi:10.1109/TSE.2011.46.

PREVIOUS WORK:

1) Previous research shows that code inspection has been studied in many scenarios and industrial applications and has been shown to give superior results among testing practices.

2) Researchers working on agile methodologies claim that Test-Driven Development is a better testing approach.

RESEARCH QUESTION:

To find out whether Test-Driven Development (TDD) or Code Inspection (CI) is the better approach to testing.

SUMMARY OF THE RESEARCH DONE:

The experiment was based on a programming assignment, a spam filter in Java, completed by 40 undergraduate students. The students were divided into four groups: a TDD group, a CI group, a TDD+CI group, and a control group that used neither TDD nor CI.

The final analysis took into account only 29 students; the others left before the assignment was over.

Code inspections were performed by the same group of students using an online collaborative tool. The students were given training in coding and testing with JUnit before the experiment was conducted.

The total defects found by the TDD and CI groups were compared and analysed to produce the experimental results.

RESULTS PRODUCED FROM EXPERIMENTS:

* Code inspection was better at reducing defects than test-driven development.

* Code inspection combined with TDD produced better results, but not to a statistically significant degree.

* Code inspection was slower than the TDD approach at finding defects.

* TDD has to be clearly defined, as the procedure admits many day-to-day variations.

DISAGREEING ARGUMENTS:

I have several points of disagreement regarding the experimental results and the method employed for the experiment.

Firstly, the experiment concludes that code inspection is better than TDD in terms of defect reduction. I do not agree with this point, because the conclusion is based on a single programming assignment of less than 600 lines of code, and no sufficient justification is given as to why code inspection is better.

Secondly, the experiment was done with university students of varying levels of ability. My claim is that we cannot accept a result from people who are neither experts in the field nor have done sufficient work to prove the validity of the experiment.

Thirdly, the time taken for the experiment was one week, which is very short by research standards. When the duration of an experiment is this short, jumping to conclusions is not acceptable.

My next point is that, regarding test-driven development, the students who performed the experiment were beginners. They were only given a tutorial and some lectures on JUnit before the experiment was performed; clearly, the students involved were not expert JUnit testers.

My next argument concerns the result that code inspection detected 23% of defects compared to 11% for TDD. The authors do not mention what types of defects were found by the TDD group or the code inspection group.

The other important point to note is that the students involved in the experiment were of varying levels of ability in programming and testing methods. The deviation in the results might therefore have occurred because the students doing code inspection happened to be better testers than those doing test-driven development.

So the results of a small experiment done by non-experts, in very little time, without background and without a clear set of parameters, cannot be considered valid.

But this experiment could be considered a spark, prompting industry to take up further research in this field to identify effective testing approaches.

CONCLUSIONS:

My conclusion is that research on this topic should proceed with testing experts in both TDD and code inspection, and should be done in a controlled manner over a sufficient amount of time.

Future research should also take into account the different parameters involved in testing, and vary the size and complexity of the code, before concluding which method of testing is superior.

As far as this research paper is concerned, I agree with neither the approach nor the results obtained.
