The Scaled Agile Framework (SAFe) updated its metrics guidance in May 2021: it now suggests measuring Outcomes, Flow and Competency. I’ve been a big fan of the Flow Framework and have used Flow Metrics to lead my team since autumn 2019, so I was thrilled to see Flow Metrics finally appear in SAFe!
But when I think of my own everyday work, one essential dimension is missing: Quality. SAFe does embrace building quality in, big time, yet there now seem to be no quality metrics in the metrics guidance.
The Flow Framework does mention Quality, but as a business result, i.e. an outcome. Quality in the Flow Framework refers to ‘quality in use’: quality as customers perceive it. I don’t think great quality just happens, or that large, complicated systems will achieve competitive quality in use if you only define the required Definitions of Done, as SAFe suggests. Just like Outcomes, Flow and Competency, Quality also needs to be measured and managed with actionable metrics.
How do you measure Quality in an agile DevOps team? You should measure – in both QA and Production environments – at least:
- Test pass rates of automated end-to-end tests
Test pass rate trends of automated test suites tell you immediately whether you have a regression or whether your test cases are out of sync with the latest functional changes, and how fast your team is able to fix the issues and get the pass rate back to 100%. “Shift-right” tests in Production tell you whether the essential use cases still work for customers.
As shown in the example graphs from a real project below, the Production pass rate trend obviously has less volatility than the QA one. But both require action when results deviate from 100%: fixing a bug or a test case, or resolving an environment issue. By analyzing the trends you may also notice process bottlenecks and learn the context behind the numbers.
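As a minimal sketch of tracking such a trend, the snippet below computes pass rates per test run and flags the runs that deviate from 100%. The `TestRun` data shape and field names are my own assumptions for illustration; in practice the numbers would come from your CI system or test report files.

```python
# Sketch: compute a pass-rate trend from automated end-to-end test runs
# and flag the runs that deviate from the 100% target.
from dataclasses import dataclass

@dataclass
class TestRun:
    date: str       # run date, e.g. "2021-05-03" (assumed format)
    passed: int     # number of passed test cases
    total: int      # total number of executed test cases

    @property
    def pass_rate(self) -> float:
        return 100.0 * self.passed / self.total

def flag_regressions(runs: list[TestRun], threshold: float = 100.0) -> list[TestRun]:
    """Return the runs whose pass rate falls below the threshold."""
    return [run for run in runs if run.pass_rate < threshold]

runs = [
    TestRun("2021-05-03", 120, 120),
    TestRun("2021-05-04", 117, 120),  # regression or stale test cases
    TestRun("2021-05-05", 120, 120),
]
for run in runs:
    print(f"{run.date}: {run.pass_rate:.1f}%")
print("needs action:", [r.date for r in flag_regressions(runs)])
```

The same function works for QA and Production suites alike; only the input data differs, so the two trends the graphs show can share one pipeline.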
In practice, automated regression testing needs to be complemented with Exploratory Testing to guarantee sufficient test coverage, of the new functionality in particular. It is important to track bugs found in Exploratory Testing as well as those reported by customers and other sources.
Defect inflow/outflow trends tell you whether your team is able to fix bugs faster than testing finds them. Defect inflow alone is also interesting. A low defect inflow is not always something to be proud of; it can also indicate insufficient testing. Making the defect inflow and outflow transparent allows the team to understand what’s going on and make the right decisions.
The example graph below shows a higher outflow than inflow, so this team seems to be investing in paying down quality debt. The number of open defects, and probably also Flow Load, can be expected to decline.
A defect inflow/outflow graph is also useful for teams developing an exceptionally large or difficult platform or product release with a longer release cycle. The point where outflow overtakes inflow is an indication that the system is finally starting to mature.
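The inflow/outflow trend above can be derived from plain defect records. The sketch below assumes a simplified record format with `opened` and `closed` week fields; a real defect tracker's export will look different, but the aggregation is the same.

```python
# Sketch: derive weekly defect inflow, outflow and the running count of
# open defects from a list of defect records (field names are assumed).
from collections import Counter

defects = [
    {"id": 1, "opened": "2021-W18", "closed": "2021-W18"},
    {"id": 2, "opened": "2021-W18", "closed": "2021-W19"},
    {"id": 3, "opened": "2021-W19", "closed": None},  # still open
    {"id": 4, "opened": "2021-W19", "closed": "2021-W19"},
]

inflow = Counter(d["opened"] for d in defects)
outflow = Counter(d["closed"] for d in defects if d["closed"])

open_count = 0
for week in sorted(set(inflow) | set(outflow)):
    open_count += inflow[week] - outflow[week]
    print(f"{week}: in={inflow[week]} out={outflow[week]} open={open_count}")
```

Plotting the three series over time gives exactly the kind of graph discussed above: the week where `out` first exceeds `in` is the maturing point.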
The sheer number of must-fix defects tells you how close to, or far from, release readiness you are. Such a metric can be implemented easily by using defect severity classification in a systematic way, or with MUST-FIX tags that identify the showstoppers for a release. The absence of known must-fix defects and automated test pass rate criteria can also be attached to automated deployment pipelines: a release candidate can be released if its measured quality is as good as or better than that of Production.
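Such a pipeline gate can be expressed as a simple predicate. The function below is a hypothetical sketch of the release criterion just described; the parameter names and the idea of comparing the candidate's pass rate against Production's are taken from the text, while the exact thresholds are whatever your pipeline configures.

```python
# Sketch: a release-readiness gate that a deployment pipeline could
# evaluate before promoting a release candidate (names are assumed).
def release_gate(must_fix_open: int,
                 candidate_pass_rate: float,
                 production_pass_rate: float) -> bool:
    """A candidate may be released only if there are no known must-fix
    defects and its measured quality is at least as good as Production's."""
    return must_fix_open == 0 and candidate_pass_rate >= production_pass_rate

print(release_gate(0, 100.0, 98.5))  # no showstoppers, quality improved
print(release_gate(2, 100.0, 98.5))  # blocked by open must-fix defects
```

Wiring this check into the pipeline makes the quality criterion executable rather than a manual checklist item.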
These basic test and defect metrics help any team start managing the most essential elements of software quality with data. Software quality measurement can then be extended to other elements of quality, such as performance and security, and to quality-in-use metrics such as Net Promoter Score and service availability. I’m looking forward to the day the SAFe metrics guidance reads “Measure Outcomes, Flow, Quality and Competency”.
Learn more about Quality Intelligence for DevOps, and have a look at the white paper How DevOps creates value and how to measure it by clicking the link below.