Happy-path test cases by developers lead to a damaged brand
In my first internship as a tester at an internal startup, I sat within 10 m of the development and business teams. This was a brilliant opportunity for me to learn how differently they think and operate. One of the most exciting observations was witnessing a business requirement being translated into code; very seldom, if ever, was the code an exact implementation of the business requirement.
Anyhow, there were no systematic processes around testing. You can probably imagine the pressure of delivering a new feature in a startup; it is immense. There were at least five independent software components that were expected to work in harmony for the product to function as expected, leaving the custom integrations aside. All five component developers had unit tests in place, which they ran continuously. The final functional testing consisted of a bunch of happy-path test cases executed by the developers, and the rest was left for the end customers to explore. Someone might call this a shift-right approach. Trust me or not, this backfired all the time:
- The product's image was taking a hit.
- A massive number of bugs in production left developers spending a lot of time firefighting instead of developing new features.
- You can assign suitable dollar figures to the two phenomena above.
I took over as a manual tester, and oh boy, I was rocking. I was responsible for the exploratory testing of new features and for the ever-expanding regression set. We started to see the benefits with the very next release. It is a cliché, but developers and testers most certainly have different brains, or let's say their incentives sit at opposite extremes: one finds joy in creating something new, whereas the other finds joy in breaking it. Therefore, developers don't make good testers, and the reverse could very well be true as well.
Manual regression testing was not a joy
We were finding and fixing bugs before production. Consequently, there was less firefighting, which meant the developers could concentrate on developing new features, and the customer experience improved. It did not take more than a couple of months for us to realize that I had become the biggest bottleneck between development and deployment. It took me around 2.5 weeks to run through the entire testing cycle, and the cycle had to be repeated after bug fixes. In my defense, I was testing alone: there were 250 end-to-end test cases to be tested on 4 different browsers and 25 end-to-end test cases to be tested on the physical product's UI. On top of that, once the excitement of exploring new features faded away, I was not enjoying the regression testing part at all. I started to assume things, became blind to regression bugs, and my overall productivity decreased. That said, I remained just as excited about testing new functionality.
Starting on automation through scepticism and fear
The obvious next step was to automate. We went through the test automation scepticism phase, and I personally went through the fear of becoming redundant. Then came the real questions: where to start, who was going to do it, which framework to use and how to measure our success. The web application became an obvious choice, since it had the largest set of regression tests. My internship turned into a full-time job, and test automation became my responsibility.
The choice of framework was an interesting one. I had some coding experience and I understood logic (surprise), but I would not call myself a hardcore developer. Another crucial factor was the readability of test cases: they had to be self-explanatory so that the business owner and test manager could review them should the need arise. We also had in mind early on that the framework should support extending its functionality as our needs grew. Lastly, we were not ready to commit to an expensive commercial tool, which would have required months of negotiation, the involvement of a systems integrator and so on. After a week of research, I opted for Robot Framework, a generic open source test automation framework.
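To illustrate the readability point, here is a hypothetical sketch of what a Robot Framework test case looks like when written with SeleniumLibrary keywords. The URL, locators and credentials below are placeholders, not from our actual suite, but the keyword-driven style is why a non-developer could read and review such cases:

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Valid User Can Log In
    Open Browser    https://example.com/login    chrome
    Input Text      id:username    demo_user
    Input Password    id:password    demo_pass
    Click Button    Log In
    Page Should Contain    Welcome
    [Teardown]    Close Browser
```

Each line reads almost like a manual test step, which is what made review by the business owner feasible.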
Test automation strategy needs to be in place
Honestly, the start was not all that easy. It took me a couple of weeks to get it running properly on my local machine: quite a bit of installation work for a not-that-much-of-a-developer guy, with browser versions, driver versions and so on. Despite all the initial hurdles, I was able to run my first test case within two weeks of the day we reached consensus on Robot Framework. After that, I desperately needed a test automation strategy: which tests to automate first, how to do library development for unsupported functionality and how to conduct manual testing in parallel.
We agreed on automating the web application tests, which were to be run against different browsers. The first test set to be automated concerned application availability to different user groups. Between an end-to-end process and an application login process, the latter is the easier choice. In addition to being easy, those test cases involved fairly similar and repetitive steps. The coolest thing about Robot Framework is the ability to extend its functionality with additional libraries. We agreed to have a proper development process in place for implementing a library: I would own the library, but the code itself would be peer-reviewed by a seasoned developer. In the beginning, we would continue to manually test whatever was not automated. Going forward, once the existing regression set was automated, new functionality would first be tested manually in an exploratory way and later appended to the automated regression assets. Another key decision was not to set the unrealistic objective of automating everything. Think of the Pareto rule: first automate the 80% of tests that require 20% of the effort, and only later worry about the 20% of tests that will require 80% of the effort.
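For the library development mentioned above, Robot Framework turns the public methods of a plain Python class into keywords, which is also what made peer review by a seasoned developer straightforward. The class and keyword names below are illustrative, not our actual library:

```python
# Minimal sketch of a custom Robot Framework keyword library.
# Robot Framework exposes each public method as a keyword, e.g.
# "Title Should Indicate Login". Names here are hypothetical.


class LoginChecks:
    """Keywords for asserting login outcomes from a page title."""

    def title_should_indicate_login(self, page_title):
        # Robot Framework treats a raised AssertionError as a test failure.
        if "login" not in page_title.lower():
            raise AssertionError(f"Expected a login page, got title: {page_title!r}")

    def title_should_indicate_dashboard(self, page_title):
        if "dashboard" not in page_title.lower():
            raise AssertionError(f"Expected the dashboard, got title: {page_title!r}")
```

Because the library is ordinary Python, it can be unit-tested and code-reviewed like any other module before testers use its keywords.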
We had availability issues with our web application, so we scheduled a few availability test cases to run in production; we would get a heads-up by email, on Slack and on a shared monitor whenever they failed. Now we were almost a step ahead of our customers.
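A minimal sketch of such a probe, assuming a plain Python script run on a schedule; the URL is a placeholder, and in our setup the actual checks were Robot Framework test cases with the email/Slack alerting wired to their failures:

```python
# Hypothetical availability probe: report whether a URL answers with a
# healthy HTTP status, so a scheduler (cron, CI) can trigger alerts.
from urllib.request import urlopen


def check_availability(url, timeout=10, opener=urlopen):
    """Return True if `url` responds with an HTTP status below 400 within `timeout` seconds."""
    try:
        with opener(url, timeout=timeout) as response:
            return response.status < 400
    except OSError:  # covers URLError, connection and timeout errors
        return False
```

The `opener` parameter is only there to make the function testable without real network access; a scheduled job would call `check_availability("https://example.com/health")` and notify on `False`.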
In around six months, we had about 80% of our regression test cases automated. In parallel, our DevOps maturity grew dramatically: we were continuously building, integrating, testing with the automated test assets and deploying to a production-like environment. Manual tests were something we did only for major releases or for releases that went to customers. Whereas I had earlier taken 2.5 weeks to do all the testing, we could now achieve the same in 4 hours. Oh boy, it was a savior. And my work did not finish there: I kept automating more tests, exploring new features, finding new ways of testing our applications and maintaining the existing assets.
Scalable solution with open source requires investing in maintenance
Ain’t that a perfect story? From manual testing to test automation, a monumental acceleration in the release cycle. Well, it does not stop there. As a wise man said, when you progress, your problems don’t disappear; they evolve. Our ‘robot’ was a physical machine churning through our test cases; that was all we had in the name of test automation infrastructure. Now the robot replaced me as the bottleneck. Our robot started to become corrupted, its maintenance started weighing down on us and, lastly, we could not scale its execution. This might sound simple at first, but trust me, various surveys have found that test automation experts spend most of their time building and maintaining their test environments.
Open source is great; for us, Robot Framework was an excellent choice, as it is free and open. The biggest downside of open source is that building a scalable solution with it requires investing heavily in expertise, both at the beginning and on an ongoing basis for maintenance. No wonder that alongside almost every open source software there exists a business offering it as a managed solution, which draws the best of both worlds.
Qentinel Pace SaaS product using open source Robot Framework solves maintenance challenges
We also moved our existing test assets to Qentinel Pace, a cross-platform, cloud-based robotic software testing solution delivered as a SaaS product. Qentinel Pace uses Robot Framework’s executor, which meant we could simply transfer our assets to it and stop worrying about the test infrastructure. We could scale in the cloud, be assured of our infrastructure’s health and leverage our existing test automation assets. In addition, Qentinel Pace comes with a powerful set of keywords known as PaceWords, which can be used to build test automation against any platform, be it web, mobile or desktop native, and run it all in the cloud. This was an important criterion for us in addressing our future needs. I have not yet touched on the maintenance effort spent on test automation interfaces (libraries) in large test automation teams; PaceWords tackle that by giving everyone a standard way of creating test cases.
Start using test automation now by signing up for a free trial: