A reasonable approach to testing a company’s business-critical information systems, such as an ERP, depends on the visibility into product development. If the system is developed in-house, there is likely good visibility into the development teams’ work, and quality assurance can be involved in the very early phases of product development. Involving QA early is commonly referred to as shift-left. On the other hand, a system used as cloud-based Software-as-a-Service (SaaS) without any visibility into development is restricted from the testing point of view as well: only testing of the production version may be possible, an approach often referred to as shift-right.
Software deliveries also differ from the operations point of view. In externally operated setups, such as cloud services, there may be very little control over the timing (and content) of software updates, whereas with an in-house operations team the installations can be fully controlled. Regardless of the delivery model, it is essential to have visibility into how the system functions in production. This is a good case for continuous automated testing.
Production quality assurance means verifying the functionality of the software in the actual environment where end users are using it. The production environment likely has a different network configuration, accesses production data, and integrates with infrastructure components that are not necessarily available in the testing environment. End users may access the system through various network connections, such as mobile networks, and there may be extra steps, such as authentication mechanisms, in between. Integrated systems should be verified with selected end-user scenarios instead of relying only on low-level network and process monitoring.
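An end-user scenario check of this kind can be very small. The sketch below is a minimal, hypothetical example: the `/login` endpoint, the form field names, and the expectation of an HTTP 200 response are all assumptions standing in for whatever the real system exposes. The timing wrapper is generic and works with any check callable.

```python
import time
import urllib.parse
import urllib.request


def timed_check(action):
    """Run one end-to-end check callable; return (passed, elapsed_seconds).

    Any exception raised by the check counts as a failure, so a broken
    integration shows up the same way as a failed assertion.
    """
    start = time.monotonic()
    try:
        passed = bool(action())
    except Exception:
        passed = False
    return passed, time.monotonic() - start


def login_check(base_url, username, password, timeout=10):
    """One end-to-end scenario: POST credentials to a hypothetical /login
    endpoint of the system under test and expect an HTTP 200 response."""
    data = urllib.parse.urlencode({"user": username, "pass": password}).encode()
    with urllib.request.urlopen(base_url + "/login", data=data, timeout=timeout) as resp:
        return resp.status == 200
```

Running `timed_check(lambda: login_check(...))` on a schedule against the production URL yields both a pass/fail signal and a response-time sample for trend analysis.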
It is important to notice that quality assurance in the production environment needs to be continuous. There should be live visibility into the state of the system so that possible problems are noticed quickly and can be fixed with minimal disturbance to the actual business of the company. The following example illustrates application loading time as a heat map: the loading times tend to increase and peak in the afternoon. Even if the actual test cases still pass, this kind of view gives an indication that the system load is getting higher.
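The data behind such a heat map is simply the response-time samples collected by the scheduled test runs, grouped by time of day. A minimal sketch of that aggregation, with made-up sample data, could look like this:

```python
from collections import defaultdict
from statistics import mean


def hourly_load_profile(samples):
    """Group (hour_of_day, load_time_seconds) samples into per-hour averages.

    samples: an iterable of (hour, load_time) pairs, e.g. one pair per
    scheduled production test run. Returns {hour: mean load time}, which
    is one row of a heat map.
    """
    buckets = defaultdict(list)
    for hour, load_time in samples:
        buckets[hour].append(load_time)
    return {hour: mean(times) for hour, times in sorted(buckets.items())}


# Hypothetical samples: loading times creep up towards the afternoon.
samples = [(9, 1.1), (9, 1.2), (13, 1.8), (13, 2.1), (15, 2.9)]
profile = hourly_load_profile(samples)
```

Plotting one such profile per day (e.g. hours on one axis, days on the other, average load time as color) produces the heat-map view described above.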
In another example case, an operations team was monitoring the network-related functionality of the production servers with ICMP echo (ping), checking TCP socket opening times, and even verifying that HTTP requests were responded to correctly. Everything looked good at the low level, yet the system denied access to selected users. After some manual testing and debugging, the issue turned out to be misconfigured access rights for a group of users, a problem not visible in the testing environment, which used a different authentication mechanism. By running simple automated end-to-end tests in the production environment, the issue would have been noticed quickly.
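For context, the kind of low-level check the team was relying on is straightforward to automate. The sketch below measures TCP connection setup time to an assumed host and port; it says nothing about whether a particular user group can actually log in, which is exactly the gap the example illustrates.

```python
import socket
import time


def tcp_connect_time(host, port, timeout=5):
    """Measure TCP connection setup time in seconds.

    Returns the elapsed time on success, or None if the connection
    (or name resolution) fails. This only proves the port is reachable,
    not that any end-user scenario works.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None
```

A monitoring setup built only on checks like this one would have reported the servers in the example as healthy while the misconfigured user group was locked out.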
By combining automated low-level tests with simple use case scenarios, production environment testing can be improved. Even a very simple use case, such as just logging in to the system, may involve multiple integrated components that might otherwise be left out of production environment quality assurance. Continuous monitoring also makes it possible to follow up on and analyze the long-term development of the system’s response times, and even allows making AI-based predictions of the system state once enough data is available. If control over operations and visibility into product development are low, shift right.
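Long-term follow-up does not have to start with machine learning: even a least-squares slope over the collected response times reveals whether they are drifting upwards while individual test cases still pass. A minimal sketch, using made-up measurements:

```python
from statistics import mean


def trend_slope(values):
    """Least-squares slope of a series of response-time measurements,
    taken at evenly spaced intervals. A positive slope suggests response
    times are drifting upwards even while tests still pass."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den


# Hypothetical daily response times (seconds): a slow upward creep.
slope = trend_slope([1.0, 1.1, 1.3, 1.6, 2.0])
```

Once enough history accumulates, the same data can feed more advanced models; the slope is just the simplest early-warning signal.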
Do you want to know how Qentinel Pace robotic software testing can support your shift-right testing?