If you’ve ever had anything to do with banking, financial services, insurance or even health care, chances are that you’ve unwittingly come within range of what’s known as RPA, or Robotic Process Automation.
In its simplest form, RPA is software that you build to use another software application. It is often encountered in the sectors mentioned above because they involve high volumes of data, complex information systems, and repetitive, rules-based tasks where human effort is not only costly but where human error can be equally damaging.
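To make the idea concrete, here is a minimal sketch of a rules-based RPA task, with all names hypothetical: a bot reads incoming records and keys the well-formed ones into another application, which is stood in for here by a simple `submit_claim()` function.

```python
def submit_claim(claim_id: str, amount: float) -> str:
    """Stand-in for the target application's data-entry form (hypothetical)."""
    return f"accepted {claim_id}: {amount:.2f}"

def run_bot(records):
    """Process records the way a simple rules-based bot would."""
    results = []
    for record in records:
        # Rules-based step: only complete, valid records are entered
        # automatically; anything else is skipped rather than guessed at.
        if record.get("claim_id") and record.get("amount", 0) > 0:
            results.append(submit_claim(record["claim_id"], record["amount"]))
        else:
            results.append(f"skipped {record.get('claim_id', '?')}")
    return results
```

In a real deployment the `submit_claim()` stand-in would be replaced by calls that drive the target system's user interface or API; the rules-based loop around it stays essentially the same.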
The technology behind RPA is essentially the same kind of software automation that Qentinel has been working with for 15 years in automated quality assurance. In our applications, a script runs operations repeatedly until it detects a defect or flaw that needs to be addressed. It would therefore be fair to say that we are one of the most experienced RPA firms around.
RPA offers speed and efficiency, reduces the risk of errors
You might well ask why you would build one application to use another. Apart from speed and a reduced risk of error, another reason is efficiency: a bot can sort through large amounts of data far faster than a person can. And efficiency often results in savings.
Firms may also resort to RPA where modifying existing systems to perform these routine tasks would be costly, difficult or impossible. In such cases, an RPA solution can augment the functionality of legacy systems, which tend to resist even the best efforts to add new features.
RPA technologies may also come into play when firms have to integrate or transfer information or data among different systems over which they have no control.
Managing the risks of Robotic Process Automation
There are risks associated with using RPA, however, and they stem from the fact that robots, while fast, cheap, always on and seldom prone to error, are not very intelligent and lack human judgment. Faced with incomplete or ambiguous information, a bot hits a brick wall.
RPA also involves security concerns. A malfunctioning or hijacked bot is capable of inflicting much more damage than a careless or malicious human and because its work is embedded in software, such damage may go undetected for some time.
There are ways to manage these risks, however. Integrated self-diagnostics help bots to detect errors they make, while exception checking allows the robot to recognize errors and either stop an operation and wait for human intervention, or forward the error report for a human to correct. Obviously, as the bots are software, they also need to be thoroughly tested.
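The exception-checking idea above can be sketched in a few lines of Python. This is an illustrative example, not Qentinel's implementation, and every name in it is hypothetical: when an operation fails, the bot either halts and waits for human intervention or forwards an error report and carries on.

```python
human_queue = []  # error reports forwarded for a human to correct

def process(item):
    """Stand-in for one automated operation; fails on ambiguous input."""
    if item is None:
        raise ValueError("ambiguous record")
    return item.upper()

def run_with_exception_checking(items, stop_on_error=False):
    """Run the bot, recognizing its own errors instead of ploughing on."""
    done = []
    for item in items:
        try:
            done.append(process(item))
        except ValueError as err:
            if stop_on_error:
                # Halt the operation and wait for human intervention.
                return done, f"stopped: {err}"
            # Otherwise forward the error report and continue with the rest.
            human_queue.append(str(err))
    return done, "completed"
```

The `stop_on_error` flag captures the two policies mentioned above: stop-and-wait for tasks where a wrong entry is expensive, or log-and-continue for high-volume work where a human can clear the exception queue later.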
There is already some rudimentary intelligence in RPA. When we start adding even simple forms of artificial intelligence (AI) that allow robots to learn, they will be able to address errors independently instead of asking for human help. It is easy to imagine, though not to create, a bot that can develop human-like consideration and judgment.
The question we at Qentinel will be following closely is, “Will software robots be a lasting or temporary technology? If they are not here to stay, what new technologies will replace them?”
Esko Hannula, CEO of Qentinel Group, is a seasoned executive and business thinker. He has more than 25 years’ experience in forerunner positions creating unique and revolutionary products, solutions and technologies. Read more: eskohannula.com.