Software grows more ubiquitous in our daily lives with each passing year. We depend on it for ever more of our everyday interactions, from connecting with friends to handling our banking.
These interactions with software are also growing more complex, and increasingly incorporate elements of machine learning or AI. This makes a system's behaviour harder to predict and therefore harder to test and validate. Many black swan situations have already arisen, from optimization algorithms exhibiting racial bias to self-driving cars behaving erratically. This article looks to the future of QA and asks how we can optimize the QA process to make it easier to guarantee software that behaves the way we want.
All this software is still developed by humans, and is thus prone to bugs and failures. The current trend in software companies is to integrate testing and development into one continuous process, where bugs and defects can be caught efficiently, because companies want to maximize development speed and deliver value to customers and end users faster.
One of the main ways to do this is to shorten the feedback cycle of development. This has led to the birth of continuous integration, DevOps and other movements that chunk work into smaller segments, which are then deployed into client environments to gather feedback on each chunk. In this process, testing is still the number one bottleneck in software development. All the other tools have advanced to such a level that creating and deploying software is easier than ever, and getting it into client use in a scalable way is easier still. QA has not kept up and is still constrained by resources.
Test managers in different companies want to prioritize their testing resources in the most efficient way, maximizing the impact of the testing done for each build, so testing tools have to evolve to support a faster deployment cycle. Developers, on the other hand, want faster feedback on their work, down to the minute they write the code, even if complete testing of the system under development might take multiple hours.
The problem exists in all larger software projects, where the corpus of tests grows so large that runtimes become prohibitive. If you are using only cloud resources, the problem can be mitigated by running tests in massively parallel fashion, but 100% validation still takes too long. With physical test setups you want to optimize test machine usage to strike the best balance between instant feedback and confidence in the validation.
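To make the trade-off concrete, here is a minimal sketch of one way to spend a fixed test-machine budget: rank tests by historical failure rate per minute of runtime and greedily fill the available time. The test names, durations and failure rates are hypothetical, and a real scheduler would be far more sophisticated; this only illustrates the balance between fast feedback and validation confidence.

```python
def pick_within_budget(tests, budget_minutes):
    """Greedily select tests that maximize historical failure rate
    per minute of runtime, without exceeding the time budget."""
    ranked = sorted(tests, key=lambda t: t["fail_rate"] / t["minutes"], reverse=True)
    chosen, used = [], 0.0
    for t in ranked:
        if used + t["minutes"] <= budget_minutes:
            chosen.append(t["name"])
            used += t["minutes"]
    return chosen

# Hypothetical test suite with per-test runtime and historical failure rate.
tests = [
    {"name": "smoke", "minutes": 2, "fail_rate": 0.30},
    {"name": "full_regression", "minutes": 120, "fail_rate": 0.50},
    {"name": "ui_flow", "minutes": 15, "fail_rate": 0.20},
    {"name": "api_contract", "minutes": 5, "fail_rate": 0.25},
]

print(pick_within_budget(tests, 30))
# The long regression run does not fit a 30-minute budget, so the
# cheaper, historically informative tests run first.
```

With a 30-minute budget the greedy pass picks the smoke, API contract and UI flow tests and defers the two-hour regression suite to a nightly run, which is exactly the instant-feedback-versus-full-confidence compromise described above.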
Yet solving this problem is feasible, because we already have a growing number of tools at our disposal for improving our testing setups. Statistical analysis of test run history, for example, lets us improve our test runs if the system under test is instrumented well enough. This historical log data, for example Robot Framework test logs linked to git changes, gives us a good basis for determining a minimum set of tests that still guarantees a statistically sound result.
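A minimal sketch of that idea: from a history of runs that records which files changed and which tests failed, estimate per-file failure rates and select only the tests that have historically failed often enough for the files touched by the current change. The file names, test names and the threshold are hypothetical, and real history would be parsed from test logs and the version control system rather than hard-coded.

```python
from collections import defaultdict

# Hypothetical history: each run records the files changed and the tests that failed.
history = [
    {"changed": {"billing.py"}, "failed": {"test_invoice", "test_refund"}},
    {"changed": {"auth.py"}, "failed": {"test_login"}},
    {"changed": {"billing.py", "auth.py"}, "failed": {"test_invoice"}},
    {"changed": {"auth.py"}, "failed": set()},
]

def failure_stats(history):
    """Count, per file, how often each test failed when that file changed."""
    fail_counts = defaultdict(lambda: defaultdict(int))
    change_counts = defaultdict(int)
    for run in history:
        for f in run["changed"]:
            change_counts[f] += 1
            for t in run["failed"]:
                fail_counts[f][t] += 1
    return fail_counts, change_counts

def select_tests(changed_files, history, threshold=0.3):
    """Pick tests whose historical failure rate for the changed files
    meets the threshold."""
    fail_counts, change_counts = failure_stats(history)
    selected = set()
    for f in changed_files:
        for test, n in fail_counts[f].items():
            if n / change_counts[f] >= threshold:
                selected.add(test)
    return selected

print(select_tests({"billing.py"}, history))
# billing.py changed twice; test_invoice failed both times, test_refund once.
```

The threshold sets the confidence trade-off: lower it and you run more tests per change, raise it and you get faster feedback at a higher risk of missing a regression.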
The whole field of applying machine learning and optimization algorithms to automatically improve software quality is growing fast, and new solutions approach the problem from multiple directions. To make sense of this change, we can split the improvements in validating quality into the distinct steps needed to progress towards ever more automated QA, going from manually testing software functionality to automatically testing the business value of your software.
There are many examples of companies moving in this direction, from Facebook automatically debugging code to validating the impact of changes on business metrics with A/B testing, where two versions of a feature are tested in production with different user groups and the version that gains better traction is eventually deployed to all users. All these tools require a certain amount of testing and development infrastructure, which produces ever more data, which in turn requires ever more automated intelligence, or AI, to handle. Because of this we have outlined a five-step framework that shows the steps towards AI-powered QA.
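The A/B comparison mentioned above can be sketched with a standard two-proportion z-test: given conversion counts for each variant, it tells you whether the difference in traction is likely real or just noise. The visitor and conversion numbers below are invented for illustration, and production experimentation platforms add many refinements (sequential testing, guardrail metrics) beyond this basic check.

```python
import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = ab_test(200, 2000, 260, 2000)  # 10% vs 13% conversion
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value falls well under 0.05, so the QA system could automatically promote variant B; with a smaller sample the same 3-point difference would not be conclusive, which is why the traffic split usually runs until significance is reached.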
Here we break the evolution of test automation towards fully automated quality assurance, with no humans in the loop, into five distinct steps. The steps progress from a testing system that understands the usage of the software only through human-written tests towards one that measures business KPIs and uses them to evaluate the quality of the system. This shift in the data used is one of the key points: to measure the actual quality of the software system being developed, the business value of the system must be expressed in terms the QA system can understand. This can also lead the company to understand its core business values in a much more concrete way.
The main difference between the steps is the data utilized in the QA process. Each step moves the focus of testing from actual software functions towards business functions, raising the abstraction level, and each adds more complexity and intelligence to the QA system itself. At the first levels the scripts automate only very low-level decisions, to run or not to run, but in the later steps more and more of the decision process is automated and left to the system.
Read also the second part of the blog: Five steps from automation to automated QA.