1. AI-augmented test analysis and optimization
Testing exists to validate the software solution being developed, to guarantee an agreed-upon level of quality for the product, and to ensure no major regressions slip into the software.
Machine Learning methods allow us to bring a larger arsenal of statistical analysis tools to bear on all the test and telemetry data our software pipelines produce. We will see a surge of software vendors offering different kinds of dashboards and visualizations that promise a way to analyze the validity of our software and provide metrics for just about anything.
To extract value and actionable insights from this jungle of metrics, the validity of the metrics themselves has to be analyzed on top of the test results. Qentinel has previously developed an approach for finding actionable metrics for a company or project by linking lower-level metrics to higher-abstraction-level KPIs through Value Creation Models. This lets us tie metrics to business goals, and when combined with Machine Learning methods, we can analyze the causalities among the different metrics to prune unneeded and vanity metrics from the dashboards.
To find out more about this topic, head over to the blog post by Juha-Markus, and after that grab the white paper on the subject.
Another approach is to analyze the assumed causalities by calculating correlations among all the different metrics and identifying redundant metrics and test sets. The result is a correlation matrix across all the sets.
This information can be used to prune outdated metrics from your test runs and scorecards. We can also use it to run only the most relevant test cases or suites, and even develop methods for choosing the statistically most important ones for a given scenario. In the example picture, where white is a strong positive correlation and black a strong negative one, we can identify sets of metrics that correlate with each other and run only some of them when resources or time are constrained. We can also spot sets that do not correlate with any other test set, or with the overall test run results, and are therefore candidates for pruning.
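As a rough sketch of this idea (the pass/fail data, suite count, and 0.8 threshold below are invented for illustration, not Qentinel's actual pipeline), a correlation matrix over historical test-suite results can surface redundant suites:

```python
import numpy as np

# Hypothetical pass/fail history: rows = CI runs, columns = test suites.
# 1 = pass, 0 = fail, over 200 runs for 6 suites. Suite B is deliberately
# constructed to mirror suite A about 95% of the time.
rng = np.random.default_rng(42)
base = rng.integers(0, 2, size=(200, 1))
results = np.hstack([
    base,                                   # suite A
    base ^ (rng.random((200, 1)) < 0.05),   # suite B: near-copy of A
    rng.integers(0, 2, size=(200, 4)),      # suites C-F: independent
]).astype(float)

# Suite-by-suite correlation matrix (the "example picture" in matrix form).
corr = np.corrcoef(results, rowvar=False)

# Strongly correlated pairs are candidates for running only one of the two.
redundant = [(i, j) for i in range(corr.shape[0])
             for j in range(i + 1, corr.shape[1])
             if abs(corr[i, j]) > 0.8]
print(redundant)
```

With this synthetic data, only the (A, B) pair exceeds the threshold; the independent suites stay near zero correlation and would instead show up as pruning candidates.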
2. Raising the abstraction level of test cases
Creating tests is still a highly technical challenge: web testing typically requires you to tie test cases to specific DOM elements, and often to do so separately for each platform. This leads to flaky tests that break easily, cannot be run on multiple platforms, and require people with specialized skills to maintain. At Qentinel we have invested in cross-platform test libraries that use Machine Vision and Machine Learning to simplify the targeting of test scripts. We can also use Machine Vision to test older applications that are hard to hook into with modern test tools like Selenium, for example legacy Java systems, and even to manipulate Excel-based applications and workflows. See how Ponsse uses automated tests on multiple platforms:
3. Using Machine Vision in testing
With Machine Vision solutions we can produce simpler, lower-maintenance scripts that target human-readable features of your applications. This reduces maintenance because tests no longer break when, for example, an automated build regenerates your React frontend with different element IDs that a test script had relied on. The human-readable nature of the scripts also allows problem-domain owners, not just coders, to write and maintain the test cases. These cases target elements such as a password field or a button and verify that certain text exists in the application. In fact, we are not limited to looking for text: we can look for any visual element in the application. We can even detect key frames in a video to test streaming times on mobile or other hard-to-automate platforms, as you can see from the example video of Minions clips (thank you Antero Vaissi for the great demo video):
I feel that Machine Vision based testing has serious implications for manual-testing-intensive fields such as mobile applications and VR and AR solutions, which are not typical desktop applications and rely heavily on visual context. RPA-like cross-application use cases, where we want to feed data from Excel and verify it in the UI, are also easy to achieve with this approach.
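To make the visual-targeting idea concrete, here is a minimal, self-contained sketch of the underlying technique, template matching, written in plain NumPy against a synthetic screenshot. Real Machine Vision libraries do this far more robustly; the image sizes and positions below are arbitrary stand-ins.

```python
import numpy as np

# Toy "screenshot": a flat background with a distinctive widget pasted in
# at row 20, column 30. In real use this would be an actual screen capture.
rng = np.random.default_rng(1)
screen = np.full((60, 80), 128, dtype=np.int64)
widget = rng.integers(0, 256, size=(10, 12))
screen[20:30, 30:42] = widget

# Naive template matching: slide the widget image over the screenshot and
# keep the position with the smallest sum of squared differences (SSD).
best_ssd, best_pos = None, None
for r in range(screen.shape[0] - widget.shape[0] + 1):
    for c in range(screen.shape[1] - widget.shape[1] + 1):
        patch = screen[r:r + widget.shape[0], c:c + widget.shape[1]]
        ssd = int(((patch - widget) ** 2).sum())
        if best_ssd is None or ssd < best_ssd:
            best_ssd, best_pos = ssd, (r, c)

# best_pos now points at the element; a test script could click its centre
# instead of depending on a brittle DOM selector.
print(best_pos)
```

The point is that the script is anchored to what the element looks like, not to an ID buried in the DOM, so a rebuilt frontend with fresh IDs does not break it.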
4. Testing AI
Testing AI and hybrid AI solutions will itself change the nature of software testing because of their non-deterministic nature. We cannot test AI solutions with deterministic test scripts that always check for a certain output given a certain input; we must start relying on statistical testing approaches that run the tests multiple times and verify that the outputs fall within an accepted range. Proving that these solutions work 100% of the time becomes difficult, and black swan situations will happen from time to time. As professor Kari Systä commented, maybe we should start thinking the way the hardware industry does and talk about mean time between failures.
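A minimal sketch of such a statistical test (the `model_is_correct` stub and the 85-95% accepted range are invented stand-ins for a real AI system under test and its agreed quality bar):

```python
import random

# Stand-in for a non-deterministic AI system under test: a classifier that
# answers correctly about 90% of the time.
def model_is_correct(rng):
    return rng.random() < 0.9

# Instead of asserting one deterministic output, run many trials and check
# that the observed accuracy falls within the accepted range.
rng = random.Random(123)
trials = 1000
accuracy = sum(model_is_correct(rng) for _ in range(trials)) / trials

assert 0.85 <= accuracy <= 0.95, f"accuracy {accuracy:.3f} outside accepted range"
print(f"observed accuracy: {accuracy:.3f}")
```

Note that such a test can still fail occasionally even when the system is healthy; choosing the trial count and the width of the accepted range is exactly the "what is an acceptable range" problem discussed next.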
Defining the accepted range of results is itself a difficult problem. We have to create a model of accepted answers based on our knowledge of the “right answers”. This can introduce unwanted bias: if the data we use to teach the model does not reflect the true state of the world, the model inherits that bias, and even a quick sentiment analysis can produce biased results. See the blog post: How to make a racist AI without really trying
Creating unbiased models is hard, and evaluating them will be one of the main challenges I see coming to testing in the future. This can be seen as part of the classical Oracle problem in testing, where the system itself does not know the right answer but has to rely on an outside source of truth, an oracle. That role is typically taken by a human tester, and so I do not see AI taking over all testing in the future, but rather augmenting human testers, helping them create large numbers of statistically relevant, easy-to-maintain tests for large, complex software.
I had the chance to speak at Testauspäivä 2019 (Testing Day) about how AI and Machine Learning have the potential to boost software testing significantly, and about how testing AI itself will transform the way we test software.
I was asked to publish my presentation material afterwards and was about to do it immediately after the talk, but given my presentation style, the slides alone wouldn't tell the whole story, and on top of that they were encrypted in Finnish. Sharing just the slides wouldn't do, so here we dive into how AI will change testing.
Are you interested in learning more about our tools, or in discussing some of the points made in this blog post? Send me an email or a LinkedIn message and let's discuss AI!
To try it out right away, grab the free version of Qentinel Pace and start testing.