On Monday morning, a new ticket comes through to Jamie. They have to fix a bug that's locking some users out of their accounts.
Jamie looks into the bug and manages to replicate it. “Aha!”, Jamie says under their breath - the bug has been spotted.
Jamie changes some code, checks the user journey again. Voilà! The impacted user accounts can now get back in. Jamie commits the code, and it’s accepted into the production branch and instantly in use by thousands of users.
But wait - now customer service’s phones are lighting up, and hundreds more people can’t log in. What happened? It turns out Jamie treated a symptom of the bug - not the cause - and it has now morphed into something far more damaging, because Jamie didn’t regression test thoroughly enough.
However, Jamie did check the broken journey, so they shouldn’t get all the blame, right? After all, the code was accepted into the codebase without issue, suggesting that others weren’t doing their due diligence either.
In all honesty though, who wants to go back through every test they’ve already run to make sure that editing one function hasn’t broken something elsewhere? It’s tedious, and it could be a huge waste of time spent hunting for a problem that doesn’t exist.
These things don’t have to be manual. In fact, most of the time they shouldn’t be manual. There is another way.
With automated testing, we can cut a lot of the doubt, time and effort from testing. We can write test scripts that describe exactly what we want to happen with no ambiguity, and have the results almost instantly.
What is testing?
Testing is a way in which we can ensure our expectations of the software are being met, and that the application is working as intended. It can help us find potential issues with the application, or identify problems before implementing anything.
It's fairly well agreed that all testing is sampling - that is, you take a few samples of the application's code, design your tests to gain some form of coverage, and then test those samples.
If testing produces failures, it’s reasonable to assume that unsampled areas have issues too, so you need to continue sampling and testing. However, once you meet an agreed threshold (for example, no bugs found in 90% of test cases across all samples), you can feel confident enough that things are working correctly to release the product.
Manual vs. automated testing
Manual testing is important, and there is definitely a time and place where something will need to be tested manually. For example, it would probably be overkill to start implementing automated tests if your app is small, or you have made a marketing site with only a handful of pages.
However, if you have a dedicated tester on your team, the likelihood is that your application or site is way too big to manually test everything.
The issue with manual testing is that once you have completed a whole round of testing, the code may need to be updated, and you'll now have a product with very different behaviour - untested behaviour.
So you have three choices:
I tested it before and it looks like the problem is fixed, let's just roll it out.
I'll run every test case manually again, and hope I don't have to make any changes and repeat this whole process again.
I'll make these tests automated, that way we can change whatever we want, and I can rerun the tests in a matter of seconds.
Out of all three, the third option looks pretty tempting, right?
Automated testing cuts much of the doubt, time and effort out of testing: our test scripts describe exactly what should happen, with no ambiguity, and the results come back almost instantly, telling us whether the app or web design is working smoothly.
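To make that concrete, here is a minimal sketch of what such a test script can look like, written as plain Python assertions. The `calculate_total` function and its 10% discount rule are invented purely for illustration, standing in for any function in your product:

```python
def calculate_total(prices, discount=0.0):
    """Sum a basket of prices and apply a percentage discount.

    A stand-in for any function in your product; the discount rule
    is made up for this example.
    """
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)


def test_calculate_total():
    # Each assertion states exactly what we expect, with no ambiguity.
    assert calculate_total([10.0, 5.0]) == 15.0
    assert calculate_total([10.0, 5.0], discount=0.10) == 13.5
    assert calculate_total([]) == 0.0


test_calculate_total()
print("all tests passed")
```

In practice you’d run this through a test runner such as pytest or Jasmine rather than calling it by hand, but the principle is the same: each assertion is an unambiguous expectation, and running the script tells you within seconds whether they all still hold.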
Choose the right automated testing framework
There are many automated testing languages and frameworks out there, like Cucumber, Jasmine and EarlGrey. No single framework is a silver bullet, so you should choose one according to the platform you’re developing for, and any specific frameworks you are already building in.
Automated design testing is becoming fairly straightforward nowadays, with many front-end frameworks shipping with built-in tools to help.
The beauty of this approach to testing is that as your tests continue to mount up, you can become more and more sure that your application is working correctly. In addition, you don’t have to worry about the application regressing when you add new features or edit functionality, as you will be alerted to the issue within a matter of seconds.
When things are constantly changing, it’s nice to have the added confidence that your code is still working as expected.
You may wonder why we’d worry about the application regressing when you’re writing additional code. Well, bugs come from somewhere. When you’re adding new code to the repository there’s a fair chance that you might also - unintentionally - be introducing a fault.
Avoiding bugs in automated testing
You may be thinking ‘hold on, if our code can be buggy, then surely our automated test scripts can have bugs too?’. You’d be right.
It’s one of those strange dances we do with technology. To mitigate the chance of our code being buggy, we write more code.
Of course, you have to understand how your functions work, and how the automated testing library will interact with your code; otherwise you may just end up frustrating yourself with false positives or negatives.
However, the act of writing automated tests can surface issues with the code or the planned implementation before you even execute a test script.
For example, you might be testing the UI of a dropdown list. In order to write the test, you need to look at the user stories or the requirements of the app. While looking through the list items, you may see one is listed twice, or there is an item in the requirements which isn’t present in the UI. Immediately you’ve spotted an issue before development has begun.
You may think this is uncommon - but I’d argue this is one of the biggest benefits of testing. You are forced to learn the ins and outs of your product, the rules, and how to break them. From this, your understanding of the product and what it needs to do is crystal clear. Things start to click.
In products where reliability and the ability to robustly handle the most erroneous input and manipulation are the focus, it is of the utmost importance to have confidence that the product will behave as expected. You don’t want any surprises when you’re flying a plane, or riding in a car on autopilot.
What to test with automated testing
We’ve talked a lot about why you should test, but you may now be wondering what you should test.
Generally, tests are designed following a particular technique. Here are some examples to get you going.
Boundary Value Analysis (BVA)
A common technique is called Boundary Value Analysis, or BVA for short. As the name suggests, it helps you analyze values on the boundaries of your product’s logic.
Let’s say your sign-up form allows people to pick a username between 5 and 20 characters long. A valid set of boundary value analysis tests for the lower limit (5) would be:
Testing below the boundary (4, for example: @user)
Testing on the boundary (5, for example: @usern)
Testing above the boundary (6, for example: @userna)
As you’ve probably guessed, the tests for the upper limit would be:
Testing below the boundary (19, for example: @mastereverycreation)
Testing on the boundary (20, for example: @responsivewebdesigns)
Testing above the boundary (21, for example: @outstandingfluidsites)
We could run these tests manually every time we want to make sure the sign-up process is working, or we could write a test script which generates a username for each limit and then tests those values against the product.
We could even be cheekier, and have the script generate the usernames for us using random characters - allowing us to change the minimum and maximum limits and have the tests update automatically. No need to rewrite any test scripts.
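A sketch of that idea is below. The `is_valid_username` function is a hypothetical stand-in for the product’s real validation logic; the boundary lengths are derived from the limits, so changing `MIN_LEN` or `MAX_LEN` regenerates the whole test set:

```python
import random
import string

MIN_LEN, MAX_LEN = 5, 20  # the sign-up form's limits from the example above


def is_valid_username(name):
    """Hypothetical stand-in for the product's real validation logic."""
    return MIN_LEN <= len(name) <= MAX_LEN


def random_username(length):
    """Generate a random username of exactly `length` characters."""
    return "".join(random.choices(string.ascii_lowercase, k=length))


def boundary_lengths(minimum, maximum):
    """Below, on and above each boundary, as (length, expected_valid) pairs."""
    return [
        (minimum - 1, False), (minimum, True), (minimum + 1, True),
        (maximum - 1, True), (maximum, True), (maximum + 1, False),
    ]


for length, expected in boundary_lengths(MIN_LEN, MAX_LEN):
    name = random_username(length)
    assert is_valid_username(name) == expected, f"failed at length {length}"

print("all boundary tests passed")
```

Swap `is_valid_username` for a call into your actual sign-up code and the same loop exercises every boundary case for whatever limits you choose.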
Branch testing
Another popular technique is branch testing. First, find the areas of your app that make logical decisions based on user input. This could be anything - for example, a landing page where the user makes a number of decisions:
What price plan would you like to select?
How would you like to pay?
Branch testing involves ensuring each decision goes both ways at least once (so every decision has been evaluated to both true and false).
For example, one test script could run through the site and select ‘Monthly’ and ‘Google Pay’ as the options, while another test could choose ‘Yearly’ and ‘Apple Pay’. This would mean all options would have been chosen once, and we’ve achieved 100% branch coverage.
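A minimal sketch of those two runs in Python - the `sign_up` function, the prices and the return values are all invented for illustration:

```python
# Hypothetical sign-up flow: two decisions, each with two options.
def sign_up(plan, payment_method):
    """Stand-in for the landing-page flow; returns an order summary."""
    if plan == "Monthly":
        price = 10   # invented price
    else:  # "Yearly"
        price = 100  # invented price
    if payment_method == "Google Pay":
        provider = "google"
    else:  # "Apple Pay"
        provider = "apple"
    return {"price": price, "provider": provider}


# Two runs are enough to take every branch both ways at least once.
assert sign_up("Monthly", "Google Pay") == {"price": 10, "provider": "google"}
assert sign_up("Yearly", "Apple Pay") == {"price": 100, "provider": "apple"}
print("100% branch coverage with two tests")
```

Notice that both `if` statements have been evaluated to true in one run and false in the other, which is all branch coverage asks for.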
However, you may have noticed that ‘Monthly’ and ‘Apple Pay’, and ‘Yearly’ and ‘Google Pay’ were not chosen together as a pair. The method is not exhaustive in its pairings. This is one downside to branch testing - but all techniques have their pros and cons.