
Posts

Mutation Testing

What is Mutation Testing? Mutation Testing is an approach that evaluates the quality of existing software tests. The whole idea is to modify a small part of the code covered by tests (the mutated version acts as a seeded fault) and check whether the existing test suite will catch the error and reject this mutated code. If it doesn't, it means the tests are weak, do not match your code's complexity, and thus leave many aspects untested. The changes introduced or injected into the program code are generally referred to as 'mutants'.

Let's take an example now: say we have a function that takes the monthly total income of a family as input and then decides whether they are eligible for a gas subsidy or not. If the income is equal to or less than ₹10,000, give them the subsidy. It will be something like:

- Input the monthly total income
- If monthly total income <= ₹10,000
- Gas subsidy = Yes
- End if (otherwise Gas subsidy = No)

For testing, our test data inputs will be boundary values like 9999, 10000 and 10001.
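A minimal Java sketch of how a mutant can survive a weak suite (the class, the main-method checks and the specific operator mutation here are illustrative assumptions, not taken from the post):

public class GasSubsidy {

    // Original rule: subsidy if monthly total income <= ₹10,000.
    public static boolean isEligible(int monthlyTotalIncome) {
        return monthlyTotalIncome <= 10000;
    }

    // A typical mutant flips the relational operator: <= becomes <.
    public static boolean isEligibleMutant(int monthlyTotalIncome) {
        return monthlyTotalIncome < 10000;
    }

    public static void main(String[] args) {
        // Weak suite: 9999 behaves the same in both versions, so the mutant survives.
        System.out.println(isEligible(9999) == isEligibleMutant(9999));   // true

        // Boundary test: 10000 distinguishes them, so this input kills the mutant.
        System.out.println(isEligible(10000) == isEligibleMutant(10000)); // false
    }
}

In practice, mutation testing tools such as PIT (for Java) generate and run such mutants automatically and report how many your suite kills.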

Why we should not say "automate testing"

You can't automate testing; you can only automate the checks (tests)... Still baffled? Checks can be automated for sure, but testing can NOT. When we interact with any application, we use our human intelligence to understand the functionality of that application, and based on that intelligence we judge whether the application's behavior (use case) is right or wrong. Can that judgment be automated? So far, nope. Can we automate checking whether the application behaves as expected when we perform a certain action? Yes.

How to set the browser's zoom level via JavascriptExecutor in Selenium WebDriver (Java)

Create generic methods like:

public void zoomIn() {
    zoomValue += zoomIncrement;
    zoom(zoomValue);
}

public void zoomOut() {
    zoomValue -= zoomIncrement;
    zoom(zoomValue);
}

private static void zoom(int level) {
    JavascriptExecutor js = (JavascriptExecutor) driver;
    js.executeScript("document.body.style.zoom='" + level + "%'");
}

And then call zoomIn() and zoomOut() wherever you want. Complete sample code (the final @Test method is a minimal illustrative completion of the excerpt's cut-off; the URL is a placeholder):

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;
import io.github.bonigarcia.wdm.WebDriverManager;

public class zoomTest {

    public static WebDriver driver;
    private int zoomValue = 100;
    private int zoomIncrement = 20;

    public void zoomIn() {
        zoomValue += zoomIncrement;
        zoom(zoomValue);
    }

    public void zoomOut() {
        zoomValue -= zoomIncrement;
        zoom(zoomValue);
    }

    private static void zoom(int level) {
        JavascriptExecutor js = (JavascriptExecutor) driver;
        js.executeScript("document.body.style.zoom='" + level + "%'");
    }

    @Test
    public void zoomDemo() {
        WebDriverManager.chromedriver().setup();
        driver = new ChromeDriver();
        driver.get("https://www.google.com"); // placeholder page
        zoomIn();  // 120%
        zoomIn();  // 140%
        zoomOut(); // back to 120%
        driver.quit();
    }
}

Code Smells and Refactoring

Knowingly or unknowingly, we all introduce code smells into our test automation code, and thus I feel that after every 3-4 sprints there should be a dedicated sprint for refactoring our test automation code. It's a very important part of any software development, so we should constantly review our code for bad design and try to chuck out any kind of code smell.

Code Smells: Code smells and anti-patterns are usually not bugs, and they do not currently prevent the program from functioning. In fact, they indicate poor design and implementation in software that may increase the risk of failures in the future, and thus they are technical debt.

Refactoring: "Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure" (Martin Fowler).

Here are the common causes of code smells:

1) Comments. If you feel like writing comments for all the classes and methods, first try to make the code itself self-explanatory, as in the sketch below.
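A hedged Java sketch of the 'Comments' smell (the SubsidyService and User types are hypothetical, invented for illustration): the comment in the first method compensates for an opaque condition; after extracting an intention-revealing method, the comment becomes redundant.

public class SubsidyService {

    // Smell: the comment does the explaining that the code should do.
    public boolean applySmelly(User user) {
        // check whether the user is eligible for a gas subsidy
        return user.income() <= 10000 && user.resident();
    }

    // Refactored: the method name now says what the comment used to say.
    public boolean applyRefactored(User user) {
        return isEligibleForGasSubsidy(user);
    }

    private boolean isEligibleForGasSubsidy(User user) {
        return user.income() <= 10000 && user.resident();
    }

    // Hypothetical value type used only for this sketch (Java 16+ record).
    record User(int income, boolean resident) {}
}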

False-Positive & False-Negative in automation

A false negative is a case where your test result comes out green (a pass), but in actuality an issue is present in the system and the functionality is not working as it should. Vice versa, a false positive is a case where the test execution results show an error even though everything is working as intended. Fundamentally, both of these have always posed a challenge for automated web testing, but it is fair to say that a false negative hurts more than a false positive, as the former creates a false sense of security (a lie) and will usually end up costing us much more. That said, false positives too consume a lot of our time and effort. By one estimate, around 70% of automated test case failures are false positives, due to which testers spend about a third of their time analyzing, correcting and reporting results that actually should not need any attention at all. In fact, with a CI/CD implementation that runs automated tests every night or after every commit, that wasted effort is multiplied with every run.
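A frequent cause of false positives in UI automation is timing: the check runs before the page settles. A hedged Selenium 4 sketch (the "status" locator, the timeout and the class name are placeholders):

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class FalsePositiveExample {

    // Brittle: if the element renders a moment late, this throws and the test
    // goes red even though the application is fine: a false positive.
    static boolean brittleCheck(WebDriver driver) {
        return driver.findElement(By.id("status")).isDisplayed();
    }

    // Sturdier: an explicit wait lets the page settle before we assert,
    // so a red result is far more likely to be a real defect.
    static boolean stableCheck(WebDriver driver) {
        WebElement status = new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("status")));
        return status.isDisplayed();
    }
}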

Which browsers should we test on while doing Cross-browser testing (manual/automation)?

The web browsers we should support are the ones our clients and customers are actually using, and we can easily get that information. To start with, get the analytics for the most-used browsers on your application and use the top browser as the first candidate for your manual as well as automation efforts. If there are no analytics available for the current application, look for the analytics of a similar application, and if you are still clueless, use the worldwide statistics*: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers

Chrome: 48.7%
Safari: 22.0%
Firefox: 4.9%
Samsung Internet: 2.7%
UC: 0.3%
Opera: 1.1%
Edge: 1.9%
IE: 3.9%

*As of November 2019.

We should then select a subset of our tests that exercises the functions most likely to break across browsers and run only those tests on all the suggested browsers (a sketch follows below). That keeps the total execution time down, so we get our results faster and can act quickly. For eCommerce sites, we need to be extra cautious.
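One lightweight way to run that subset across several browsers is TestNG's suite parameters; a minimal sketch, assuming Selenium 4 and WebDriverManager (as in the zoom post above) plus a testng.xml that defines one <test> per browser with a "browser" parameter. The class name, test and URL are illustrative:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;
import io.github.bonigarcia.wdm.WebDriverManager;

public class CrossBrowserSmokeTest {

    private WebDriver driver;

    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) {
        // The suite file decides which browsers this class runs on.
        if ("firefox".equalsIgnoreCase(browser)) {
            WebDriverManager.firefoxdriver().setup();
            driver = new FirefoxDriver();
        } else {
            WebDriverManager.chromedriver().setup();
            driver = new ChromeDriver();
        }
    }

    @Test
    public void checkoutPageLoads() {
        driver.get("https://example.com/checkout"); // placeholder URL
        // browser-sensitive assertions (layout, date pickers, payments) go here
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}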