Whenever a bug is fixed, the tester runs confirmation tests to verify that the bug was actually resolved. This guide covers the basics of confirmation testing and answers related questions.
In this article, we’ll cover the basics of confirmation testing: what it is, when and how to run confirmation tests, what to do after you’ve run these tests, and its salient features.
We’ll also cover the challenges, advantages, and disadvantages of confirmation testing. Our intent is that the reader, after absorbing the information here, can make an informed decision about whether to run these tests and in what capacity.
Confirmation testing is a software testing technique in which the software under test is run through a set of previously executed tests to make sure the results are consistent and accurate. The intent is to ferret out any remaining bugs and check that all previously found bugs have truly been eliminated from the software components.
Basically, all tests run earlier are run once again after the bugs they exposed have been fixed by the devs. This testing is also called re-testing because it is literally the same test run twice: once before the bug fix and once after.
Generally, when testers find a bug, they report it to the dev team that wrote the code. After looking into the issue, the devs fix it and push another version of the feature. Once QAs receive this scrubbed version of the software, they run tests to check that the new code is, indeed, bug-free.
Note: While running these tests, QAs should follow the defect report they created earlier to inform developers of the bugs found in the software at that stage. QAs must run the same tests and check whether any of the previous (or even new) functional anomalies show up.
As you now know, confirmation testing is a vital checkpoint in software development. It ensures that issues found during earlier testing are fixed before the software goes live.
Here’s how it works:
Let’s say a compatibility test shows that the software-under-test does not render well on the new iPhone. The bug is reported to the devs, and they eventually send back the newer version of the software/feature after fixing the bug.
Of course, you believe the devs. But you also run the SAME compatibility test once again to check that the bug has actually been eliminated.
In this case, the compatibility test being run twice is a confirmation test.
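The iPhone scenario above can be sketched as follows. Everything here is a hypothetical stand-in: `renders_correctly` abbreviates a real cross-device rendering check, and the viewport widths and build properties are invented for illustration.

```python
# Hypothetical sketch of a compatibility test re-run as a confirmation test.

SUPPORTED_VIEWPORTS = {"new iPhone": 393, "tablet": 768, "desktop": 1280}

def renders_correctly(build, viewport_px):
    # Stand-in check: the buggy build breaks on narrow screens.
    return viewport_px >= build["min_supported_px"]

buggy_build = {"min_supported_px": 400}   # fails on the 393 px iPhone
fixed_build = {"min_supported_px": 320}   # dev fix widens device support

def compatibility_test(build):
    """Run the same checks against every supported viewport."""
    return {name: renders_correctly(build, px)
            for name, px in SUPPORTED_VIEWPORTS.items()}

# First run: the bug is found and reported.
assert compatibility_test(buggy_build)["new iPhone"] is False

# Confirmation run: the SAME test against the fixed build now passes.
assert compatibility_test(fixed_build)["new iPhone"] is True
```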
Confirmation testing doesn’t involve specific techniques because it essentially re-runs previous tests. However, there are key aspects to consider during the process:
Planning and Preparation:
Execution:
Analysis and Reporting:
Confirmation testing stands out from other testing types for several key reasons:
Testsigma is easily integrated with confirmation testing workflows, simplifying its use. Read about the step-by-step process and view related screenshots.
Pinpoint the test cases that exposed bugs in your previous Testsigma runs.
Ensure you have the same or similar data that triggered the bugs initially.
Instead of creating a new test plan specifically for confirmation, simply re-run the existing test plan that contains the identified test cases. Testsigma allows re-running plans or specific test suites within a plan.
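The selection step described above, pinpointing the previously failing tests and scoping the re-run to them, can be sketched generically. The test names and the results dictionary below are hypothetical; a real runner would supply this data itself.

```python
# Sketch: selecting only the previously failing cases for a confirmation run.

previous_results = {
    "test_login": "passed",
    "test_checkout_discount": "failed",  # the bug we reported
    "test_profile_render": "failed",
}

def confirmation_scope(results):
    """Re-run exactly the tests that exposed bugs last time."""
    return sorted(name for name, status in results.items()
                  if status == "failed")

assert confirmation_scope(previous_results) == [
    "test_checkout_discount", "test_profile_render"]
```

Many runners automate this selection for you; pytest, for example, caches failures and can re-run only them via its `--last-failed` flag.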
Here is a screenshot of the same plan being rerun.
Pay close attention to the results of the previously failing tests. Look for any signs of the bugs persisting. Testsigma’s reports will highlight failures.
If a previously failing test still fails, the bug might not be fixed. Report it with details for further investigation.
You can see a successful Test Run in the screenshot below.
If new bugs emerge during confirmation testing, report them using the same process.
Unlike most other software tests, confirmation testing doesn’t have any specific techniques. You literally just run the same tests twice. As soon as a bug has been resolved, put the software module through the same tests that led to the discovery of the bug in the first place.
If no bugs (old or new) emerge and all confirmation tests pass, you’re done. If not, testers must re-examine the emerging bugs, reproduce them, and provide deeper, more detailed reports to the devs. Reappearing bugs can indicate deeper flaws in the underlying system.
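The pass/fail decision after a confirmation run boils down to a simple triage rule, sketched below with hypothetical test names and statuses:

```python
# Sketch: deciding what to do after a confirmation run.

def triage(confirmation_results):
    """Either reopen persisting bugs or promote the build."""
    reopened = [name for name, status in confirmation_results.items()
                if status == "failed"]
    if reopened:
        return ("reopen", reopened)  # bug persists: file a detailed report
    return ("promote", [])           # all clear: move the build forward

assert triage({"test_checkout_discount": "passed"}) == ("promote", [])
assert triage({"test_checkout_discount": "failed"}) == (
    "reopen", ["test_checkout_discount"])
```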
Note: If you need to execute these tests multiple times in the future, they become good candidates for test automation.
Once confirmation tests have confirmed that no live bugs exist in the application, the software can be moved further along the development pipeline. You simply push it to the next stage of testing/deployment.
The whole point of this testing is to ensure the accuracy of bug elimination, thus making the software more reliable and worthy of customers’ positive attention.
However, often, confirmation tests are immediately followed by regression tests. Since one or more bug fixes have been implemented on the software, the regression tests check that these changes haven’t negatively impacted any of the software functions that were working perfectly well before debugging took place.
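The confirmation-then-regression ordering described above can be sketched as follows. The suites, test names, and runner are illustrative stand-ins; the point is simply that the broader regression run happens only after the confirmation run is clean.

```python
# Sketch of the confirmation -> regression ordering (illustrative names).

def run_suite(suite):
    """Run every test in the suite; return the names of any failures."""
    return [name for name, test in suite.items() if not test()]

# The re-run of the tests that originally exposed the bug.
confirmation_suite = {"test_checkout_discount": lambda: True}

# The broader safety net: features that worked before the fix.
regression_suite = {
    "test_login": lambda: True,
    "test_search": lambda: True,
}

# Only spend time on the wider regression run once confirmation is clean.
assert run_suite(confirmation_suite) == []
assert run_suite(regression_suite) == []
```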
Confirmation tests are a necessary (if somewhat inconvenient) fail-safe that testing cycles need to push out truly bug-free products. The challenge lies in designing and scheduling these tests without stretching the timelines to unacceptable levels. However, the efficacy of these tests is beyond question, and they absolutely deserve a place of pride in your test suites.
Is confirmation testing and regression testing the same?
No. Confirmation testing checks that previously identified bugs have actually been eliminated after debugging, while regression tests check whether code changes have negatively affected the rest of the software system.
What is the difference between confirmation testing and retesting?
There is none; they are the same thing. Retesting is simply another name for confirmation testing, since the same tests are run again after a bug fix.