While test automation is the key to increased quality and productivity, test design is often the most overlooked aspect of it. Test design is a separate, distinct task performed before tests are executed. It is a crucial part of any testing process and directly determines your test automation's bug-detection capability.
In this blog, we will discuss two powerful automated test design methods and how they can empower your test automation to catch errors that it previously left untouched.
Test automation, even today, focuses primarily on automating test management and test execution, while test design remains largely a manual activity. Broadly, test design is concerned with deciding
- what to test in the first place,
- how to stimulate the application and with what test data values, and finally
- how the application should react and respond to the stimuli provided.
Combinatorial testing is based on the premise that many errors in software arise only from the interaction of two or more parameters. This is known as the interaction principle: most software failures are induced by single-factor faults or by the combined effect of two factors, with progressively fewer failures induced by interactions among three or more factors. It is a practical hypothesis suggesting that if a fault manifests with a specific setting of input variables, it is most likely caused by only a small subset of those variable values. This implies that software faults can be discovered by relatively simple and small tests.
OS = Linux, Windows, iOS, Android
Browser = chrome, Firefox, Edge
Cookies = enabled, disabled
With pairwise testing we aim to generate tests in which every pair of variables is exercised in interaction. All the interacting pairs are identified automatically, for example (Linux, Chrome), (Linux, Firefox), …, (Windows, Chrome), …, (Chrome, enabled) and (Chrome, disabled). We then create a small subset of test cases that covers all of the interacting pairs. It has been shown empirically that choosing input data values this way significantly increases the likelihood of finding software faults while keeping the number of test cases relatively small. A lot can be achieved with pairwise testing alone.
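To make the idea concrete, here is a minimal sketch of a greedy pairwise generator in Python. The parameter names and values are illustrative only (the "cookies" parameter is an assumption for the sake of a three-parameter example), and the greedy strategy is one common approach, not the algorithm of any particular tool.

```python
from itertools import combinations, product

# Illustrative parameter model (names and values are assumptions).
parameters = {
    "os": ["Linux", "Windows", "iOS", "Android"],
    "browser": ["Chrome", "Firefox", "Edge"],
    "cookies": ["enabled", "disabled"],
}

def pairwise_tests(params):
    """Greedily pick full combinations until every value pair
    of every two parameters appears in at least one test."""
    names = list(params)
    # All (parameter, value) pairs that must be covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    tests = []
    while uncovered:
        # Pick the full combination covering the most uncovered pairs.
        best = max(
            product(*params.values()),
            key=lambda combo: sum(
                ((names[i], combo[i]), (names[j], combo[j])) in uncovered
                for i, j in combinations(range(len(names)), 2)
            ),
        )
        tests.append(dict(zip(names, best)))
        uncovered -= {
            ((names[i], best[i]), (names[j], best[j]))
            for i, j in combinations(range(len(names)), 2)
        }
    return tests

suite = pairwise_tests(parameters)
# A small subset of the 4 * 3 * 2 = 24 exhaustive combinations
# still covers every interacting pair.
print(len(suite))
```

Exhaustive testing of these parameters would need 24 cases; the greedy cover typically needs around half that while still touching every pair.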
Test Data Generation
We are often faced with problems where we need to test the application with valid and invalid email addresses, IBAN bank account numbers, social security numbers, string-encoded IP addresses, and so on. Experimental evidence and practical experience show that it is extremely difficult to create sufficient, well-chosen test data that comprehensively covers the software logic of any non-trivial system. This becomes a major part of test design and takes significant effort, experience and skill to excel at manually.
To alleviate this fundamental problem in test design, Qentinel Pace deploys a systematic analysis approach for automatically identifying and generating test cases that cover the "corner cases" of various types of data. The algorithm does not generate test data at random; rather, it can be considered a "boundary value analyser" generalised to arbitrary data patterns, capable of identifying corner cases that are crucial to analyse and verify thoroughly during the quality assurance process.
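Qentinel Pace's actual algorithm is not shown here, but the underlying idea of boundary value analysis can be sketched for a simple constraint. The following toy example (all names and constraints are assumptions for illustration) derives corner-case strings from a length restriction, producing both valid and invalid data:

```python
# Toy boundary value analysis for an integer range and a derived
# string-length constraint. Illustrative only, not any tool's algorithm.

def boundary_values(low, high):
    """Classic corner cases for an integer range [low, high]."""
    return {
        "valid": [low, low + 1, (low + high) // 2, high - 1, high],
        "invalid": [low - 1, high + 1],
    }

def length_boundary_strings(min_len, max_len, fill="a"):
    """Corner-case strings for a length constraint, valid and invalid."""
    lengths = boundary_values(min_len, max_len)
    return {
        kind: [fill * n for n in ns if n >= 0]
        for kind, ns in lengths.items()
    }

# Hypothetical field that accepts 1 to 64 characters:
cases = length_boundary_strings(1, 64)
# Invalid corner cases are the empty string and a 65-character string;
# valid ones sit at and just inside the boundaries.
print(cases["invalid"])
```

The same generalisation applies to richer patterns such as email addresses or IBANs: identify the boundaries of each structural rule, then emit data just inside and just outside each boundary.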
Combinatorial Testing and Test Data Generation
As you may have guessed by now, you don't need to restrict yourself to generating tests with either combinatorial testing or test data generation alone. The two can be used in conjunction, empowering your test script to explore a whole new landscape of data combinations to test your application against.
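A minimal sketch of this combination: values produced by a data generator simply become one more dimension in the combinatorial model. All names and candidate values below are illustrative assumptions; the product is kept exhaustive for clarity, though in practice a pairwise generator would prune it the same way it prunes hand-picked values.

```python
from itertools import product

# Generated corner-case data (assumed output of a boundary analyser).
email_candidates = ["a@b.co", "", "no-at-sign", "x" * 64 + "@example.com"]

# Hand-picked environment parameters (illustrative).
locales = ["en_US", "fi_FI"]
browsers = ["Chrome", "Firefox"]

# The generated data values join the combinatorial model as a dimension.
tests = [
    {"email": e, "locale": loc, "browser": b}
    for e, loc, b in product(email_candidates, locales, browsers)
]
print(len(tests))  # 4 * 2 * 2 = 16 combinations
```

Each generated corner case is thus exercised across environments, instead of being tried only once in a single fixed configuration.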
Test Generation in Qentinel Pace
Test generation happens right before test execution. Qentinel Pace looks at your test cases and initiates test generation if it is required.
Once test generation ends, you are notified about the generated test cases, which are then scheduled for execution. You can review and analyse the generated test cases and gain further insights into them. Should you realise that you have generated too many cases by mistake and do not want to execute them all, you can abort the execution. Moreover, should you realise that a set of generated test cases is of special significance, you can include it in your regression test set for good.