Posts

To perform data-driven testing, which is better: UI or API?

For sure we can do data-driven testing using automated UI tests (Selenium, Cypress, WebdriverIO, etc.), but just because the option is there doesn't mean we should use the UI for every case. Take an example where we visit a website to get a price quote on a policy, say a medical policy. What do we do? We run our test by going through the same page(s): select values from a few dropdowns, tick a few checkboxes, and enter values in a few text fields to get one final output, i.e. the "price quote" for the medical policy. Here the only variation was the "Test Data" that finally derives a certain output. Don't you think the efficient way to test this business logic would be API testing, which is more maintainable and more powerful? As I mentioned in my previous post too, our UI (end to end) tests should be meant to confirm that users can use our application in the way it's intended, interact with it without hitting any issues, and always work when performing any E2E transaction.
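To make the idea concrete, here is a minimal sketch of what the API-level, data-driven version could look like with TestNG and REST Assured. The host, endpoint, payload fields and expected premiums are hypothetical and only illustrate feeding varied test data into one business rule, not any real service.

    import static io.restassured.RestAssured.given;
    import static org.testng.Assert.assertEquals;

    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class MedicalQuoteApiTest {

        // Each row is one data-driven case: age, smoker, sumInsured, expectedPremium (sample values)
        @DataProvider(name = "quoteData")
        public Object[][] quoteData() {
            return new Object[][] {
                {30, false, 500000,  5400},
                {45, true,  500000,  9800},
                {30, false, 1000000, 8700}
            };
        }

        @Test(dataProvider = "quoteData")
        public void premiumIsCalculatedCorrectly(int age, boolean smoker,
                                                 int sumInsured, int expectedPremium) {
            int actualPremium =
                given()
                    .baseUri("https://example.org")            // hypothetical host
                    .contentType("application/json")
                    .body(String.format(
                        "{\"age\":%d,\"smoker\":%b,\"sumInsured\":%d}",
                        age, smoker, sumInsured))
                .when()
                    .post("/api/medical-policy/quote")         // hypothetical endpoint
                .then()
                    .statusCode(200)
                    .extract().jsonPath().getInt("premium");   // hypothetical response field

            assertEquals(actualPremium, expectedPremium);
        }
    }

The same three variations driven through the UI would mean three full page walks; here they are three fast requests against the same business rule.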

Mock GeoLocation using Selenium 4

If there is a use case where you need to override the browser's geolocation before you hit the website, the Selenium 4 Alpha version supports that now through the Chrome DevTools Protocol (CDP), using the setGeolocationOverride method of the Emulation domain. To set it back to the default, use the clearGeolocationOverride method. #mockGeoLocation #testAutomation #selenium4Alpha

    import com.google.common.collect.ImmutableMap;
    import com.google.common.collect.ImmutableMap.Builder;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.annotations.Test;

    @Test
    public void testGeo() throws InterruptedException {
        // Open the map at the browser's default location (Delhi coordinates)
        driver.get("https://www.google.com/maps?q=28.704060,77.102493");

        // Build the CDP payload for the override (New York coordinates)
        Builder<String, Object> mapLatLan = new ImmutableMap.Builder<String, Object>();
        mapLatLan.put("latitude", 40.712776);
        mapLatLan.put("longitude", -74.005974);
        mapLatLan.put("accuracy", 100);

        ((ChromeDriver) driver).executeCdpCommand(
                "Emulation.setGeolocationOverride", mapLatLan.build());

        // Reload the map; the browser now reports the overridden location
        driver.get("https://www.google.com/maps?q=40.712776,-74.005974");
    }
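And to put things back the way they were, the clearGeolocationOverride call mentioned above can go through the same executeCdpCommand bridge. A small sketch, assuming the same driver and Guava ImmutableMap import as in the snippet above:

    // Remove the override so the browser falls back to its real location
    ((ChromeDriver) driver).executeCdpCommand(
            "Emulation.clearGeolocationOverride", ImmutableMap.of());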

Adaptive Wait in TestProject

Waiting for certain condition(s), e.g. clickable, visible, etc., to be met before performing an action is required in almost every UI automation test case. Most of us spend far too much time juggling different dynamic (async) waits, a never-ending exercise, especially when you have a common framework for your m-site as well as your desktop browsers. When I saw that TestProject announced something called the Adaptive Wait capability, I tried it and it works wonders. So, if you are already using TestProject, give it a shot. I must say that Adaptive Wait = smart wait here, especially when we struggle due to different network speeds/connections and device resource bottlenecks. Check their official documentation here: https://docs.testproject.io/tips-and-tricks/explicit-wait-and-adaptive-wait and https://blog.testproject.io/2020/05/04/testproject-adaptive-wait-capability/ #TestProject #automation #AdaptiveWait
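For contrast, this is roughly what the hand-rolled explicit wait looks like in plain Selenium 4, the kind of per-condition tuning that Adaptive Wait is meant to take off your plate. The URL, locator and timeout here are placeholders, not taken from any real project.

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class ExplicitWaitSketch {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.org/quote");   // placeholder URL

                // Poll for up to 10 seconds until the button is clickable
                WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
                WebElement submit = wait.until(
                        ExpectedConditions.elementToBeClickable(By.id("submit"))); // placeholder locator
                submit.click();
            } finally {
                driver.quit();
            }
        }
    }

The pain point is that the 10-second figure has to be re-tuned per page, per network and per device, which is exactly what an adaptive/smart wait tries to handle for you.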

Roles and Responsibilities of QA in Scrum

Created this deck last year: "Roles and Responsibilities of QA in Scrum". Please share your thoughts if you see something terribly wrong, and call it out. If you have anything to add, I'd love to read and discuss it. #qa #agile #agilemindset #agileteams

Defining the scope of our End to End tests

We should all try to ensure that our End to End tests do NOT considerably repeat the test effort of our Unit tests and API tests. Ideally, our end to end tests should be meant to confirm that users can use our app in the way it's intended, interact with it without hitting any issues, and always work when performing any E2E transaction(s). Our Unit and API tests, on the other hand, should test and cover the business logic. Many a time, we go overboard chasing 100% coverage, create an automated mess when it comes to freezing the scope of our End to End tests, and might reach a point:
- where our test source code becomes the same size as, or in fact larger than, the application codebase
- and/or the automation execution run time takes about as long as writing the test case did, etc.
There's no silver bullet here; it's all about trying, failing and finally succeeding. One such approach can be: divide your test cases broadly into two major groups

Working Set feature in Eclipse and other IDEs

The "Working Set" is a very old feature in Eclipse or other IDEs but there are many folks who don't use it and prefer to hide non-working project(s) by either closing the project(s) itself or deleting those from workspace especially when they have to demo something to someone. It is a super useful feature that lets you group your related projects to ease search and organize views within the IDE. Read here about what it is and how to use it: http://www.avajava.com/tutorials/lessons/what-is-a-working-set-and-how-do-i-use-it.html

JSON to POJO

In case you have a JSON document or JSON Schema (simple or complex) that you want to map to a POJO without the hassle of writing the complete POJO class by hand, you can use the jsonschema2pojo UI or library. It is an awesome library that generates Java classes from your input JSON, and it supports many annotation styles such as Jackson, Gson, etc. Using the UI: http://www.jsonschema2pojo.org or https://github.com/csanuragjain/extra/tree/master/convertJson2Pojo (Git repo by Anurag Jain). P.S. There are many other ways to achieve this and this is just one of them.
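To give a feel for the output, below is roughly the kind of class jsonschema2pojo produces for a tiny JSON document when you pick the Jackson 2.x annotation style. The field names and the small main method are my own example, not actual library output, and simply show how you would then deserialize into the generated POJO with Jackson's ObjectMapper.

    // Input JSON (example): {"firstName": "Asha", "age": 29}
    import com.fasterxml.jackson.annotation.JsonProperty;
    import com.fasterxml.jackson.annotation.JsonPropertyOrder;
    import com.fasterxml.jackson.databind.ObjectMapper;

    @JsonPropertyOrder({"firstName", "age"})
    public class Person {

        @JsonProperty("firstName")
        private String firstName;

        @JsonProperty("age")
        private Integer age;

        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }

        public Integer getAge() { return age; }
        public void setAge(Integer age) { this.age = age; }

        // Deserializing the example JSON into the generated POJO
        public static void main(String[] args) throws Exception {
            Person p = new ObjectMapper()
                    .readValue("{\"firstName\":\"Asha\",\"age\":29}", Person.class);
            System.out.println(p.getFirstName() + " / " + p.getAge());
        }
    }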