
False-Positive & False-Negative in automation

A False-Negative is a case where your test passes and reports no problem, but in actuality an issue is present in the system and the functionality is not working as it should. The reverse is a False-Positive: the test execution results show an error even though everything is working as intended. Fundamentally, both have always posed a challenge for web test automation, but I think it is fair to say that a False-Negative hurts more than a False-Positive, as the former creates a false sense of security (a lie) and will usually end up costing us much more. I agree that a False-Positive also consumes a lot of our time and effort. On average, 70% of automated test case failures are False-Positives, due to which testers spend roughly a third of their time analyzing, correcting and reporting results that actually should not need any attention at all. In fact, with CI/CD implementation, running automated tests every night or after every commit ...
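
To make the distinction concrete, here is a minimal sketch in plain Python/pytest (the function and test names are hypothetical, not from any real project): the first test passes even though the code is broken, a False-Negative; the second fails even though the behaviour is correct, a False-Positive.

```python
import time


def apply_discount(price: float) -> float:
    """Hypothetical code under test: meant to apply a 10% discount,
    but a bug means the price is returned unchanged."""
    return price  # BUG: discount never applied


def test_discount_false_negative():
    """False-Negative: the assertion is too weak, so this test passes
    (reports 'all good') even though apply_discount() is broken."""
    result = apply_discount(100.0)
    assert isinstance(result, float)  # only checks the type, not the value


def test_page_load_false_positive():
    """False-Positive: the test fails because of a hard-coded timing
    assumption, not because the feature under test is broken."""
    start = time.time()
    time.sleep(0.2)  # stands in for a page that renders correctly, just slowly
    elapsed = time.time() - start
    assert elapsed < 0.1  # brittle threshold -> spurious failure
```

Running these with pytest would show the first test green and the second red, which is exactly backwards from the truth: the green one hides a real bug, while the red one will cost someone time triaging a non-issue.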