Combining manual and automated testing

Jul 30, 2020 / by Nikhil Sharma

Is your deployment frequency choked by your testing speed? Is regression testing driving you to boreout? Do you think test automation is a tough nut to crack? Have you wondered how best to leverage robots and humans in software testing? Are you benefitting from the speed of robots and the ingenuity of humans? Are you utilizing your manual testers’ capabilities well? If you are a tester, a developer, a test manager or a product manager, you probably deal with these questions every day.

I have written before on software testing in general and test automation in particular. In this article, I will focus on finding synergy between robots and humans in software testing: how they can best benefit from each other, and how that synergy can give you greater confidence in your code.

At large, our life experiences are intertwined with the software we interact with: how you wake up, make your breakfast, commute to work, do your actual work, pursue your hobbies, receive and send information, make friends, find partners, do your groceries, and, if you track your sleep quality, even how you sleep. The success of a business is tied to software too: either the business is software itself, or it relies heavily on software for its day-to-day operations.

Effectively, our experiences, products and businesses are encoded in complex software, carefully tested and delivered by people like you and me. Our main goals are to ensure that the software works unfailingly every single time it is used, and that it provides an ever-improving experience to its end users. Invariably, quality software is carefully tested at many levels and many times over.

Test automation vs. manual testing: repetition vs. ingenuity

Undoubtedly, there have been significant advances on the artificial intelligence front, and we have witnessed their impact in our daily lives. Most certainly, the test automation industry has jumped on this bandwagon too. However, the use of artificial intelligence within software testing is so far limited to object recognition, computer vision, self-healing capabilities and the like. In other words, software testing has certainly benefited from advances in artificial intelligence and other technologies; nevertheless, it is not yet at a place where one could boldly discard the need for manual testing in its entirety.

It is a no-brainer that the strongest and most successful candidate for test automation is still the set of tedious and repetitive tasks in software testing. Automating such tasks not only accelerates your testing; it also frees up brainpower for the more important creative testing efforts.

On the other hand, the creative challenges in software testing are still best solved by our ingenious brains. Testing at the user-facing end is another stream that is mostly done by humans: the end users are the ultimate judges of your software’s quality, and no test automation so far has replaced the feeling side of testing. We shall look at this division of tasks in the following paragraphs.

Robots’ cup of tea

Theoretically, given enough time and money, basically anything can be automated. However, whether it is worth the investment is the determining question. With the previous paragraphs in mind, let’s look at the tasks best suited for automation in software testing:

Test environment provisioning and management: This is perhaps one of the less talked about aspects of test automation. With the advent of infrastructure as code, the most scalable test automation systems leverage the automatic creation, dynamic scaling and management of test environments. For the sake of completeness, a test environment is an environment where your test cases run; it could be a physical machine, a virtual machine, a Docker container and so on, whether available locally or in the cloud. Robots are pretty apt at spinning up fresh environments on demand, scaling them dynamically, allocating them and wiping them clean when no longer needed. A typical example is test environment automation for cross-browser, cross-operating-system and cross-device testing.
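As a toy illustration of that acquire–reuse–wipe lifecycle (a real system would call Docker, Kubernetes or Terraform rather than keep an in-memory list; all names here are hypothetical), a minimal environment pool might look like:

```python
from dataclasses import dataclass


@dataclass
class TestEnvironment:
    name: str
    in_use: bool = False


class EnvironmentPool:
    """Toy pool that provisions environments on demand and recycles freed ones."""

    def __init__(self):
        self._envs = []

    def acquire(self, name_prefix="env"):
        # Reuse a free environment if one exists...
        for env in self._envs:
            if not env.in_use:
                env.in_use = True
                return env
        # ...otherwise "provision" a new one (in reality: docker run / terraform apply).
        env = TestEnvironment(name=f"{name_prefix}-{len(self._envs)}", in_use=True)
        self._envs.append(env)
        return env

    def release(self, env):
        env.in_use = False  # in reality: wipe state or destroy the container

    def size(self):
        return len(self._envs)
```

Releasing an environment and acquiring again reuses it instead of provisioning a third one, which is exactly the dynamic-scaling behaviour described above.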

CI/CD pipelines: The above topic is in a way a subset of this one. Everything from a code contribution through static analysis, integration, smoke testing and integration testing to deployment is taken care of by CI/CD pipelines. Your software swims through these pipelines anywhere from several times a day to several times a month. Most of this is initiated by event triggers and proceeds through a predetermined flow. Robots love such tasks.
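That predetermined, fail-fast flow can be sketched as an ordered list of stages; the stage names and checks below are hypothetical stand-ins for real linters, test runners and deploy scripts:

```python
def run_pipeline(stages, change):
    """Run stages in order; stop at the first failure, as a CI pipeline does."""
    results = []
    for name, stage in stages:
        ok = stage(change)
        results.append((name, ok))
        if not ok:
            break  # fail fast: later stages never run
    return results


# Hypothetical stages keyed on a dict describing the incoming change.
STAGES = [
    ("static-analysis", lambda c: "FIXME" not in c["diff"]),
    ("unit-tests",      lambda c: c.get("tests_pass", True)),
    ("smoke-tests",     lambda c: c.get("smoke_pass", True)),
    ("deploy",          lambda c: True),
]
```

A clean change walks through all four stages, while a change whose diff still contains "FIXME" is stopped at static analysis before any test minutes are spent.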

Regression tests: A major chunk of software testing effort is spent on regression testing, the core idea of which is to ensure that new changes to your software have not broken existing functionality. As you can imagine, every new feature becomes existing functionality at some point, so regression testing is an ever-growing effort for an evolving software product. Not to mention that the frequency of regression testing and your software release cycles are directly related: the more you develop and the faster you develop, the steeper your regression testing efforts will grow. A perfect delight for a robot.
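At its simplest, a regression suite pins current behaviour against values captured from a known-good release; the apply_discount function and its golden values below are hypothetical:

```python
def apply_discount(price, percent):
    """System under test: apply a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)


# Golden values captured from a known-good release; any drift is a regression.
GOLDEN = {
    (100.0, 10): 90.0,
    (19.99, 0): 19.99,
    (50.0, 25): 37.5,
}


def run_regression_suite():
    """Return the inputs whose current output no longer matches the golden value."""
    return [args for args, expected in GOLDEN.items()
            if apply_discount(*args) != expected]
```

An empty failure list means no existing functionality has drifted; and as the article notes, every new feature eventually adds its own entries to the golden set.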

Smoke tests: More often than not, companies maintain a subset of tests derived from their complete regression test asset. This subset performs a first sanity check on your software before it moves any further through your CI/CD pipelines. Like regression tests, smoke tests perfectly qualify as a task for a robot.
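One common way to carve that subset out of the full regression asset is tagging; a minimal sketch, with hypothetical test names:

```python
# Hypothetical regression asset: each test name maps to its set of tags.
REGRESSION_SUITE = {
    "test_login":       {"smoke"},
    "test_checkout":    {"smoke"},
    "test_reporting":   set(),
    "test_bulk_export": set(),
}


def select_by_tag(suite, tag):
    """Return the subset of tests carrying the given tag, e.g. the smoke set."""
    return sorted(name for name, tags in suite.items() if tag in tags)
```

Selecting by the "smoke" tag yields the small sanity subset, while the pipeline can still run the whole suite for full regression later.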

Critical end-to-end tests: The success of a business often depends on critical end-to-end processes. This is perhaps most relevant in ERP systems, but it is by no means limited to them. These critical processes more often than not run through a landscape of several applications, so it is vital that this ecosystem of applications works uninterruptedly. Automating such tests will take you a long way. They are not the easiest bunch, but they are certainly the most critical one, and you want to know about any interruption in the ecosystem well before it puts your business on hold. You could definitely benefit from assigning a few robots to such monitoring tasks.
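Such monitoring boils down to probing each link in the critical chain and alerting on the first broken one; the service names and probes below are stubs standing in for real health checks or synthetic transactions:

```python
def check_ecosystem(services):
    """Walk the critical end-to-end chain; report the first broken link, if any."""
    for name, probe in services:
        if not probe():  # in reality: an HTTP health check or synthetic transaction
            return f"ALERT: {name} is down"
    return "OK"


# Stub probes standing in for the applications in the landscape.
CHAIN = [
    ("web-shop", lambda: True),
    ("payments", lambda: True),
    ("erp",      lambda: True),
]
```

Run on a schedule, a robot executing this loop surfaces the failing application by name before the interruption reaches the business.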

Data-driven tests: These are scenarios where the same test cases are populated with different sets of data to verify a software’s functionality. As an example, imagine a web store with exactly the same buying process irrespective of the geographic location it is accessed from, while the product specifications and pricing vary by location. If there are 10 test cases and 10 different locations, you need to test 100 (10 × 10) combinations. A sweet task for a robot: generating such test cases and running them automatically.
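That fan-out is simply a Cartesian product; in practice a framework feature such as pytest’s parametrize does this for you, but the mechanics can be sketched with hypothetical per-region data:

```python
from itertools import product

# Hypothetical per-region pricing for the web-store example.
PRICES = {"US": 9.99, "DE": 10.99, "IN": 7.99}


def checkout_total(location, quantity):
    """System under test: the same buying logic everywhere, data varies by region."""
    return round(PRICES[location] * quantity, 2)


def run_data_driven(locations, quantities):
    """Run one test body across every (location, quantity) combination."""
    failures = []
    for loc, qty in product(locations, quantities):
        expected = round(PRICES[loc] * qty, 2)  # oracle; here derived from the same table
        if checkout_total(loc, qty) != expected:
            failures.append((loc, qty))
    return failures
```

Three locations and two quantities already yield six combinations from a single test body; ten of each gives the article’s 100, all generated and executed without human effort.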

Sweet spot for humans

Primarily, there are two categories of tasks in software testing that humans should do. I shall refer to them as creative tasks and not-robot-worthy tasks. Creative tasks call for thinking, exploration, experience and intuition, whereas not-robot-worthy tasks are the ones with an insignificant or even negative return on investment when automated. Let’s look at not-robot-worthy tasks first and then move on to creative tasks.

Not-ready or unstable features: Building test automation is a cost-intensive effort. Features that are either not ready or unstable seldom have repetitive and consistent test cases. The return from automating their tests is therefore insignificant, and you are better off testing them manually.

Automation-repelling features: Certain features in an application are intentionally designed to resist automation, CAPTCHAs being the classic example. On other occasions, certain areas of an application under test are not designed to yield deterministic results. Consequently, automation by itself cannot yield conclusive results without massive human intervention. I would argue that, for now, such testing should be assigned to humans.

Pareto’s 80/20 principle: Loosely applied, I would claim that the 20% of your tests that would eat up 80% of your testing resources are better left non-automated.

Exploratory testing: Exploratory testing, as the name suggests, is testing by exploration. It is the more creative part of testing, as opposed to regression testing. Please note that exploratory testing is not testing at random: it calls for the tester’s cognitive engagement with the application under test. Exploratory testers are often quite experienced, and they interact with the application through the ever-evolving system models in their heads. In my experience, a significant number of bugs are discovered through exploratory testing, which makes it a practice of utmost importance.

Usability testing: This may be called the ultimate test of a product, since end users are the ultimate judges of its quality. The core idea is to measure the end product’s readiness to serve its intended purpose. A lot of this work involves the feeling and intuition part of our brain, which so far lies largely beyond robots’ reach.

A word of caution: this list is by no means comprehensive. The key idea is to give you a fair understanding of how to make the two competences work in harmony. A wrong division of tasks between robots and humans can be catastrophic. On one hand, you are looking at significant investment in automation with low to negative returns; on the other, you will misplace the competence of your manual testers, perhaps driving them to boreout and low job satisfaction. Eventually, this will considerably affect your product quality.

Human effort should be redirected towards enabling robots to churn through test automation, and in turn, robots should empower humans to focus on the more imaginative aspects of testing. The near-ideal testing scenario is one where robots are augmented with human ingenuity.


Topics: Software Testing, Test Automation, Software Development, Manual Testing
