Computer vision is running “under the hood” in many consumer applications, but it’s also being used to help tech professionals address tricky software problems and open up new avenues for business efficiency.

When consumers use the popular multimedia messaging app Snapchat to layer bunny ears or puppy noses over their photos, they are unwittingly drawing on the power of computer vision to create the playful images.

In another scenario, computer vision helps ensure that speeding tickets are issued to the owner of the right vehicle, for example through license plate recognition in speed cameras. In such cases, the task of computer vision is to sift through a huge amount of image data, locate the relevant objects within it, and make sense of what it finds.

As these examples illustrate, computer vision is about extracting semantic information from images, much as the human brain adds context and meaning to make sense of what it sees.

Peeling back the layers with computer vision

The average user of technology is largely oblivious to what computer vision is accomplishing in the background of everyday apps. On the back end, however, expert users such as Qentinel's software professionals are increasingly using computer vision to gain insight into what is happening under the hood of many different kinds of applications.

Technology professionals can use computer vision to peel back the layers of products or software that cannot be accessed in any other way, such as legacy applications. Where there is no direct access to the underlying software, testing professionals can work through a browser and examine the elements it exposes. This makes computer vision a valuable tool for tackling tough software challenges in which the only option is to work with what is visible on the screen.

Even when the underlying software can be accessed and examined directly, computer vision can still serve as a support mechanism for detecting elements and finding meaning in what is displayed on the screen.

Support for non-professional testers

Experienced testers can support customers by writing test cases for them, but in some cases customers prefer to write their own test scripts. In that scenario, computer vision can help make the process as human-like as possible: tests are written with simple action words and keywords, so they mirror the steps of an actual manual routine.

This ensures that the test case is easier to read and follow. It also means that business users who are not testing professionals can feel confident that they are running tests in the right way. When business users are able to efficiently run software tests, it frees up testing professionals to focus on and solve bigger and more critical technology problems.
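As a rough, illustrative sketch of the idea, the Python snippet below builds two vision-based keywords on top of the widely used pyautogui (screen control) and pytesseract (OCR) libraries. The keyword names, the OCR-based matching and the sample checkout steps are all assumptions made for this example, not a description of any particular product's API.

```python
"""A minimal sketch of vision-assisted, keyword-style test steps.

Assumes the pyautogui and pytesseract packages plus a local Tesseract
install; the keyword names and the checkout flow are invented examples.
"""
import pyautogui
import pytesseract


def _find_text(target: str):
    """Return the on-screen centre of the first OCR match for `target`, or None."""
    screen = pyautogui.screenshot()  # grab the current screen as an image
    words = pytesseract.image_to_data(screen, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(words["text"]):
        if word.strip().lower() == target.lower():
            x = words["left"][i] + words["width"][i] // 2
            y = words["top"][i] + words["height"][i] // 2
            return x, y
    return None


def click_text(target: str) -> None:
    """Keyword: click the visible word `target`, as a manual tester would."""
    pos = _find_text(target)
    if pos is None:
        raise AssertionError(f"'{target}' is not visible on screen")
    pyautogui.click(*pos)


def verify_text(target: str) -> None:
    """Keyword: pass only if `target` is actually rendered on screen."""
    if _find_text(target) is None:
        raise AssertionError(f"Expected to see '{target}' on screen")


# A test case written with these keywords reads almost like a manual test plan.
def test_checkout():
    click_text("Checkout")
    verify_text("Order confirmed")
```

Because each step names something the user can actually see, a business user can follow the script line by line without knowing anything about the application's internals.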

Sometimes, software professionals probing under the hood with low-level application programming interfaces (APIs) may unintentionally verify that something is on the screen when it is not, producing a false verification. Using computer vision in such cases helps determine whether the elements are in fact present, confirming the real user experience and delivering more reliable results.
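As one hedged illustration of this kind of check, the sketch below uses OpenCV template matching to confirm that a reference image of an element really appears in a captured screenshot. The file names and the 0.9 similarity threshold are arbitrary assumptions for the example; the point is simply that the check passes only when the pixels are actually rendered.

```python
"""A rough sketch of image-based verification with OpenCV template matching.

Assumes a saved screenshot (`screen.png`) and a reference crop of the
element we expect to see (`submit_button.png`).
"""
import cv2


def element_visible(screenshot_path: str, element_path: str, threshold: float = 0.9) -> bool:
    """Return True if the reference element is found in the screenshot."""
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    element = cv2.imread(element_path, cv2.IMREAD_GRAYSCALE)
    if screen is None or element is None:
        raise FileNotFoundError("screenshot or reference image could not be read")
    # Slide the element image across the screenshot and score every position.
    scores = cv2.matchTemplate(screen, element, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(scores)
    return best_score >= threshold


if __name__ == "__main__":
    # Unlike an API-level check, this only passes if the element was really drawn.
    assert element_visible("screen.png", "submit_button.png"), "Element not rendered"
```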

In the hands of skilled software professionals, computer vision is therefore a valuable tool for accessing previously impenetrable software, empowering non-professional testers and validating the reliability of applications.

Tim Sampson & Antero Vaissi

Topics:

Artificial Intelligence (AI), computer vision
