Automation & Exploratory Testing — A symbiotic relationship
A symbiotic relationship is defined as a “mutually beneficial relationship between different people or groups”. I’d like to talk about how Automation and Exploratory Testing have this mutually beneficial relationship and how it can help you become a better tester.
Looking back at my blog post here about Exploratory Testing and the Zalando Tech website, I thought it would make a nice segue (did you think it was spelt segway? Me too, but apparently not) into talking about how you can use automation at a high level, and discussing what we could and shouldn’t potentially automate as a result of that exploratory session.
Why wouldn’t you automate everything?
First things first: a lot of people talk about automation being a silver bullet, about wanting to automate everything, or ask the classic “How much of your testing is automated?”
Well, everything is a big thing. It would probably be easier to look at what you can automate, as that will fall into a far smaller category. However, you can’t or wouldn’t want to automate everything because:
- You can’t automate people’s reactions and emotions — People use software. Because of this, every single person using your software will interact with it slightly differently and feel different things while using it, which ultimately contributes to whether they have a positive or not-so-positive experience.
- You can’t automate things that you don’t think of — I can only automate things that I can think of at a given moment; I can’t automate something I haven’t thought of. I’ll come back to this later…
- Some things will only happen once — We can only automate under a specific set of parameters and variables at a given point in time, such as time (think time of day), state (think about the state of the software: is it a fresh deploy, etc.), load (how many people are using the software) and many, many other variables. There is effectively an infinite number of combinations of these variables, and your automation only checks under one of them at any one point in time. I love what my colleague and friend Stuart Crocker mentioned to me the other day:
We should use ATDD to prove that something CAN happen but not that it ALWAYS does for every permutation.
Once you know the application CAN do something, deploy your Testers, Product Peeps, Designer or fellow Developers to explore the application to find out when and where it CANNOT do something!
- Automation takes time — There needs to be a viable return on the investment you make in your automation. If it’s not viable, then chances are you shouldn’t do it. What do I mean by a viable return? I mean: will people pay attention if it fails? Is it giving you valuable information? Do people care about what it is checking? Automation takes time to create, to maintain and to investigate when it fails. There needs to be value in the automation for people to spend that time on it.
- Some things require human input — Your automation might check that a page loads; what it likely isn’t checking is that it loaded in a reasonable time, that the buttons appear in the right place (although this can be achieved using Applitools and other Visual Regression tools), or that the flow is pleasing for the user (there’s a small sketch of this distinction after this list).
- You may only want to check something once — In which case it probably doesn’t lend itself well to automation: you will probably spend more time automating it than checking it. This ties in to the earlier point around a viable return on investment.
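To make the “human input” point a little more concrete, here is a minimal sketch of the difference between checking that a page loads and checking that it loads in a reasonable time. It uses Playwright Test purely as an illustration, and the URL and the timing threshold are made up; even with the timing assertion, nothing here can tell you whether the page feels right to a person.

```ts
import { test, expect } from '@playwright/test';

// Minimal sketch only: the URL and the 3-second threshold are hypothetical.
test('home page loads, and loads reasonably quickly', async ({ page }) => {
  const start = Date.now();

  // On its own, this only proves that the page loads and a heading renders...
  await page.goto('https://www.example-shop.test/');
  await expect(page.getByRole('heading', { level: 1 })).toBeVisible();

  // ...adding a timing assertion gives a crude "reasonable time" check, but no
  // assertion here can tell us whether the flow feels pleasant to a user.
  const elapsedMs = Date.now() - start;
  expect(elapsedMs).toBeLessThan(3000);
});
```

Visual Regression tools such as Applitools can take the layout side of this further, but the “how does it feel” part still needs a human.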
There are a whole host of other reasons why you can’t automate everything; these are just some of the top ones, in my opinion of course.
Exploratory Testing and Automation?
Some people may believe that if we’re testing something, we should automate it. Hopefully by now, it’s clear that isn’t necessarily the case. If we are checking something, then it lends itself well to automation. But if we’re truly testing, and if you look at my exploratory testing article, then hopefully you’ll see a whole host of things that we probably wouldn’t have automated, at least not off the bat. For example:
- Testing around the “Got It” message (when trying to add more than the allocated stock to my bag), and how it subsequently removes the entire quantity of that item from my bag, when in actuality it should probably just remove the one item.
- Clicking the “Undo” button and expecting it to add both of the items (in the above scenario) back into my bag, when in actual fact it only adds one. You might check with one item, but two might never have been identified as something we need to check.
- Opening up an incognito window and seeing what happens to the stock functionality.
Besides the above, there are also things we couldn’t feasibly have automated, such as:
- How I felt having to log in and create an account to use the “Wish-List” functionality, and how it took away from a seamless experience.
- How I felt not having size/colour shown for items in my wish list, meaning I had no visibility of things that are low in stock.
On the flip side, if I were to look at a website like ASOS, I could write automation around adding items to my Saved List, but I would miss how much I appreciated being able to add an entire outfit to my Wishlist on the Zalando website, so it works both ways.
These are all feelings, and as things stand, automation cannot detect feelings; it cannot check how a human feels.
Why wouldn’t I have automated the above (first three) scenarios?
Well, firstly, I’m not sure I would have thought of those specific scenarios: they were all around having multiple items in my bag, and were quite specific. I imagine that when the original features were tested, they were checked to show that they CAN work, not that they work every time under the kind of scenarios and variables I used when testing.
Would I automate the above after exploratory testing?
The real answer is: it depends. It depends on the factors I mentioned above. Would I get a good return on automating this? Would people really care if it broke again? Arguably it’s been “live” for a long time, so maybe not.
Personally, I would say that if we have a framework in place that supports checking this feature, and maybe already have some checks in place, I would add the above scenario to that suite, provided it didn’t take too much time. This is also where test data setup comes into play: do I need to spend a lot of time creating data for the scenario? How testable is this feature? Do I need to do this through the UI at all? A rough sketch of what such a check might look like follows below.
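Purely as an illustration, such a check might look something like this (again using Playwright Test; the URLs, selectors, button labels and test data below are hypothetical stand-ins rather than Zalando’s real markup, and setting up a product with known low stock is assumed to be handled elsewhere):

```ts
import { test, expect } from '@playwright/test';

// Hypothetical sketch: URLs, selectors and copy are stand-ins, and creating a
// product with exactly two items in stock is assumed to be handled elsewhere.
test('hitting the stock limit does not empty the existing bag quantity', async ({ page }) => {
  // Assume this product has exactly 2 items in stock.
  await page.goto('https://www.example-shop.test/product/low-stock-item');

  // Add the item to the bag twice, reaching the stock limit.
  const addToBag = page.getByRole('button', { name: 'Add to bag' });
  await addToBag.click();
  await addToBag.click();

  // A third attempt should trigger the "Got It"-style message...
  await addToBag.click();
  await expect(page.getByText('Got It')).toBeVisible();

  // ...but the two items already in the bag should still be there, rather than
  // the entire quantity being removed, as observed in the exploratory session.
  await page.goto('https://www.example-shop.test/bag');
  await expect(page.getByTestId('bag-item-quantity')).toHaveText('2');
});
```

Whether a check like that earns its place still comes back to the return-on-investment question above: how easy the data setup is, and whether it needs to run through the UI at all.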
Having automation in place supports exploratory testing. It will never (in my opinion) replace exploratory testing, and it will never mean I don’t have to do exploratory testing (depending on the context). What it does mean is that I know what is covered by automation, so I can use heuristics to focus my attention on the areas that I feel need it: perhaps areas that I feel are likely to break, or areas that aren’t covered by the automation (think about the above scenarios and why we wouldn’t necessarily have had automation in place around them in the first place).
Similarly, exploratory testing can reduce the amount of automation you need, so you can spend your time writing solid automation in the places you feel are valuable and can benefit from it. If, upon completing exploratory testing, I think we may have missed something, or that something would make a good candidate for automation, then of course we can add it to (hopefully) an existing automation framework.
So to wrap up, if you take anything from this blog post, it’s this:
Exploratory Testing isn’t about verifying; it’s not about checking that something works. It’s about learning more about the software, seeing it as a whole, and seeing under what conditions it can and can’t work. It’s about using your unique position as a human to interact with the software, listen to how it makes you feel, feed that back, and make sure it’s captured.
In a future post, I’ll write up how to create an automation framework that would offer value in checking some of the scenarios that we can identify from the exploratory testing session.