One of the most commonly asked questions in software testing is: “Is that enough testing, or should we do more?” Whether you are writing unit tests for your own programs or finding bugs in closed-source third-party software, knowing which code you have and have not covered is a valuable piece of information. If you are new to the software industry, first familiarize yourself with the types of testing.
How much testing is enough?
It is possible to do enough testing, but deciding how much is enough is difficult. Simply doing what was planned is not sufficient, since it leaves open the question of how much should have been planned.
What counts as enough testing can only be confirmed by evaluating the results of testing. If many faults are found with a set of planned tests, more tests will be required to guarantee that the required level of software quality is achieved. On the other hand, if very few faults are found with the planned set of tests, no further tests may be required. Saying that enough testing has been done when the customers or end-users are happy comes a bit late, even though it is a good measure of the success of testing. It is also a poor stopping criterion if you have very demanding end-users who are never happy! Why not stop when you have demonstrated that the system works? Because it is not possible to prove that a system works without exhaustive testing.
When is it Enough Testing?
When can we say that this much software testing is sufficient? Can testing ever be finished?
To answer these questions, we have to analyze testing activities from start to end. Note that we will describe these activities in layman's terms, not in a fancy technical way.
Let's say you are starting testing on a new project.
- The testing team gets requirements.
- Testing team starts planning and designing.
- Initial Test documents are ready and reviewed.
Testing Step 1-
- After receiving the developed product, the testing team starts test execution.
- During this stage, they execute different scenarios in order to break the software and discover numerous defects. The defects get fixed by developers and are returned to the test team for retest.
- The testing team retests the defects and executes regression tests.
- When most of the high severity defects are resolved and the software looks stable, development team releases the next version.
Testing Step 2-
- The testing team starts the next step of testing and similar activities are executed as step 1.
- During this second step of testing, more defects are found.
- The defects get fixed by developers and returned to the test team for retest.
- The testing team retests the defects and performs regression testing.
In principle, this cycle could continue until all defects in the software are found and the software becomes bug-free. From the above steps, then, it seems that testing can continue until every defect is found.
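The iterative test-fix-retest cycle described above can be sketched as a toy simulation in Python. The starting defect count and find rate are made-up numbers for illustration only; in a real project the number of latent defects is, of course, unknowable:

```python
def run_test_cycle(latent_defects, find_rate=0.6):
    """One test pass: the team finds a fraction of the remaining latent
    defects, developers fix them, and the rest stay hidden."""
    found = int(latent_defects * find_rate)
    return latent_defects - found, found

# Hypothetical starting point: 100 latent defects.
remaining = 100
cycle = 0
while remaining > 0 and cycle < 10:
    cycle += 1
    remaining, found = run_test_cycle(remaining)
    print(f"Cycle {cycle}: found {found} defects, {remaining} still latent")
```

Notice what happens in the late cycles: once only one or two defects remain, `int(1 * 0.6)` rounds to zero, so each pass finds nothing new even though a defect is still latent. The simulation stalls without ever reaching zero, which mirrors the point made next: you can never be sure the last defect has been found.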
But here the question is: is it possible to find every single defect in the software? Let's try to find out.
Stopping when all defects are found: Is it possible?
Most software is complex and has a huge testing scope. Finding every defect might seem achievable given unlimited time, but even after finding many defects, no one can guarantee that the software is now defect-free. There is no point at which we can confidently say that testing is finished and no more errors remain.
Moreover, the purpose of testing is not to find each and every defect in the software. The intent of software testing is to show that the software works as intended, by trying to break it and by finding deviations between its actual and expected behavior. Software can contain an unbounded number of defects, so it is not practical to test until all of them are found, because we can never know which defect is the last one. In short, we cannot rely on finding all the defects as the criterion for concluding testing.
So which factors should be considered when concluding testing activities? The decision to stop testing mostly depends on time, budget, and the extent of testing required.
One can stop the testing when:
- 100% Requirements coverage is achieved.
- Defined defect count is reached.
- All defects or Blockers are fixed.
- All High Priority defects are identified and fixed.
- Defect rate falls below a defined acceptable rate.
- Very few Medium Priority defects are open.
- Very few Low Priority defects are open, and those that remain do not affect software usage.
- All High Priority defects are re-tested and closed and corresponding Regression scenarios are successfully executed.
- Test coverage of at least 95% is achieved.
- Test case pass rate is at least 95%. This can be calculated as (Total No. of TCs Passed / Total No. of TCs) * 100.
- All critical Test cases are passed.
- Up to 5% of test cases may fail, provided the failed test cases are of low priority.
- Complete Functional Coverage is achieved.
- All major functional / business flows are executed successfully with various inputs and are working fine.
- Project Deadline or Test Finish deadline is reached.
- All test documents are prepared, reviewed and published.
- Complete Testing Budget is exhausted.
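Several of the criteria above are quantitative, so they can be evaluated mechanically. The sketch below turns a few of them into a simple exit-criteria check, including the pass-rate formula from the list. The metric names and the 95% thresholds are hypothetical examples drawn from the list, not an industry standard:

```python
def pass_rate(passed, total):
    """Test case pass rate as a percentage: (passed / total) * 100."""
    return (passed / total) * 100 if total else 0.0

def can_stop_testing(metrics):
    """Hypothetical exit-criteria check: all thresholds must hold at once."""
    return (
        metrics["requirements_coverage"] >= 100   # 100% requirements coverage
        and metrics["open_blockers"] == 0         # no open blockers
        and metrics["open_high_priority"] == 0    # high-priority defects closed
        and pass_rate(metrics["tcs_passed"], metrics["tcs_total"]) >= 95
    )

# Example snapshot of a project's test metrics (illustrative numbers).
snapshot = {
    "requirements_coverage": 100,
    "open_blockers": 0,
    "open_high_priority": 0,
    "tcs_passed": 97,
    "tcs_total": 100,
}
print(can_stop_testing(snapshot))  # True: every threshold is met
```

A single open blocker or a pass rate below 95% would flip the result to `False`, which reflects how these criteria are normally applied: they are joint conditions, not a menu to pick from.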
From the above discussion, you can conclude that finishing testing depends on the requirements achieved, the defects found, test coverage, deadlines, and the budget of the project.
Are you looking to develop software that will represent your business? Experts at Solace are there to help you develop and manage software development with high-quality testing. They believe in the effectiveness of high-quality testing for building great software. Contact us for software development that will help you achieve the success you deserve.