One core tenet of DevOps is to make data-driven decisions to maintain high quality under the pressure of frequent production deployments. This calls for no-nonsense metrics and a DevOps dashboard that gives actionable insights on what you should fix to improve quality or speed, every day.
I can’t help noticing how widely software teams are leaning on DevOps to deliver features and fixes faster. However, that is easier said than done. Most teams I have met collect data to measure something, but few use the collected data to make decisions regularly.
Finding the no-nonsense DevOps metrics
The most typical measurement pitfall I have noted is focusing on vanity metrics, i.e. metrics that might look nice but don’t help you improve anything. Another common issue that makes teams forget their dashboards is focusing only on lagging indicators, the outcomes. Those metrics may measure relevant things, but they don’t tell you which factors caused the results.
You can avoid vanity metrics by linking the metrics to your goals explicitly. In my own everyday work, the intent to deliver fast with quality boils down to the following three questions:
- Release quality: Is the current release candidate ready for production and if not, what should I fix?
- Production quality: What should I focus on to minimize the risk for service outages and other quality issues in Production?
- Velocity: How can I accelerate the speed of delivering value?
A practical DevOps dashboard needs to help answer these operational questions.
Value Creation Model reveals the causes for your results
We can find the leading indicators, that is, the factors affecting the goals, by taking a systemic view of the DevOps process. I have found the Value Creation Model to serve this purpose well.
The following picture presents a Value Creation Model for DevOps. Each node is a measurable factor. The arrows indicate the assumed causalities among the nodes, and the resulting causality chains reveal the factors that have an impact on speed or quality. A blue arrow denotes a positive causality between two factors: for instance, when Production deployment frequency increases, so does the Pace of value deliveries. A red arrow means that the variables move in opposite directions: the higher the Technical debt, the lower the Velocity.
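The model in the picture can be thought of as a signed graph. As a minimal sketch, the snippet below encodes a hypothetical subset of the nodes and arrows (the edge list and node names are illustrative assumptions, not the full model) and propagates the arrow signs along causality chains:

```python
# +1 = positive causality (blue arrow), -1 = negative causality (red arrow).
# This edge list is an illustrative subset of the model, not the full picture.
EDGES = {
    ("Technical debt", "Velocity"): -1,
    ("Technical debt", "Release quality"): -1,
    ("Release quality", "Production deployment frequency"): +1,
    ("Production deployment frequency", "Flow of value"): +1,
    ("Velocity", "Flow of value"): +1,
    ("Release quality", "Quality in use"): +1,
}

def effect_of(factor, goal, edges, sign=1, seen=None):
    """Multiply edge signs along every causality chain from factor to goal."""
    seen = seen or set()
    effects = []
    for (src, dst), s in edges.items():
        if src == factor and (src, dst) not in seen:
            if dst == goal:
                effects.append(sign * s)
            else:
                effects += effect_of(dst, goal, edges, sign * s, seen | {(src, dst)})
    return effects

# More Technical debt lowers Flow of value along both chains:
print(effect_of("Technical debt", "Flow of value", EDGES))  # -> [-1, -1]
```

Multiplying the signs along a chain tells you whether increasing the factor at the start helps or hurts the goal at the end, which is exactly what makes the model actionable.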
Measure and improve Value Paths
Value Paths are particularly important causality chains in the model, each leading to an important goal. There are two paths, the Path of Quality and the Path of Flow Velocity. On the Path of Quality, the chain of metrics starting from Technical debt provides the leading indicators for Release quality, which in turn correlates with Quality in use.
The Path of Flow Velocity has two branches. Release quality predicts the readiness to release software to production, affecting Production deployment frequency and eventually Flow of value. The other factor affecting Flow of value is (Flow) Velocity, which depends on how much Technical debt the team needs to deal with and how much unplanned work, such as bug fixes, enters its backlog. Often those unplanned work items are bugs reported by customers. Quality is a big contributor to speed.
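The two branches described above can be enumerated mechanically once the model is written down as an adjacency list. The sketch below is illustrative (the graph is an assumed subset of the model, and `value_paths` is a hypothetical helper), but it shows how the branches fall out of the structure:

```python
# An assumed subset of the Value Creation Model as an adjacency list.
GRAPH = {
    "Technical debt": ["Velocity", "Release quality"],
    "Unplanned work": ["Velocity"],
    "Release quality": ["Production deployment frequency"],
    "Production deployment frequency": ["Flow of value"],
    "Velocity": ["Flow of value"],
}

def value_paths(start, goal, graph, path=()):
    """Yield every causality chain from a starting factor to a goal."""
    path = path + (start,)
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, []):
        yield from value_paths(nxt, goal, graph, path)

for p in value_paths("Technical debt", "Flow of value", GRAPH):
    print(" -> ".join(p))
# Technical debt -> Velocity -> Flow of value
# Technical debt -> Release quality -> Production deployment frequency -> Flow of value
```

The two printed chains correspond to the Velocity branch and the Release quality branch of the Path of Flow Velocity.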
The DevOps dashboard should be organized according to the desired DevOps outcomes: Quality in use, Flow of value, and Happiness of the team. Release quality plays a crucial role both for speed and for Quality in use, so its status should definitely be tracked as well.
A practical way to make the assumed causalities among metrics visible is to organize the metrics into metric trees, where the leading indicators are presented as child metrics under each outcome metric.
Each metric tree has a normalized index score (target = 100). This approach allows us to present metrics with different scales and measurement units on the same scale. Index values are easy to judge as ‘good’ (≥ 100) or ‘bad’ (< 100), so you don’t need to be a subject matter expert to interpret each result. As the example shows, there can be metric trees under metric trees (each with its own index), so it is possible to calculate a DevOps Value creation index for ‘everything’ and observe its trend.
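The rollup of a metric tree into a single index can be sketched in a few lines. The normalization functions and the plain-average aggregation below are illustrative assumptions (real dashboards may weight children differently); the metric names and values are hypothetical:

```python
from statistics import mean

def ratio_index(value, target):
    """'Higher is better' metric: hitting the target gives 100."""
    return 100.0 * value / target

def inverse_index(value, tolerance):
    """'Lower is better' metric: 0 gives 100, each unit toward the tolerance costs points."""
    return max(0.0, 100.0 - 100.0 * value / tolerance)

# A metric tree as a nested dict: leaves hold index values, subtrees hold dicts.
def tree_index(node):
    if isinstance(node, dict):
        # One possible aggregation scheme: a plain average of the children.
        return mean(tree_index(child) for child in node.values())
    return node

release_quality = {
    "Unit test pass rate": ratio_index(99.0, 100.0),        # 99.0
    "Acceptance test pass rate": ratio_index(97.0, 100.0),  # 97.0
    "Open critical defects": inverse_index(1, 2),           # 50.0 (one open defect)
}
dashboard = {
    "Release quality": release_quality,
    "Flow of value": {"Deployment frequency": ratio_index(4, 5)},  # 80.0
}
print(round(tree_index(dashboard), 1))  # the 'index for everything'
```

Because trees can nest, the same `tree_index` call works at any level: on a single outcome such as Release quality, or on the whole dashboard to get the overall Value creation index.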
From the dashboard screenshot we can see that Release quality is red and its index value is well below 100, only 72.9. By examining the child metrics we see that the automated test pass rates look fine, but there is one open critical defect, probably found in Exploratory Testing, which explains the low index value for Release quality.
Now that the metric trees have been organized according to the assumed causalities, it is easy to find the right levers to pull. With a quick look at the red circles on the dashboard we can find the issues we need to fix to achieve our goals.
Modeling the causalities among metrics also opens exciting opportunities for leveraging Machine Learning. Machine Learning algorithms can detect trend changes and patterns more effectively than the human eye, and they can make predictions as well as recommendations. The Value Creation Model gives a good starting point for building Machine Learning capabilities into the analytics.
DevOps dashboard provides actionable insights
To conclude, a no-nonsense DevOps dashboard provides actionable insights. The metrics must be linked to the goals of DevOps, namely speed and quality. The DevOps dashboard needs to answer everyday operative questions on quality and speed, and for that we need to know which factors are affecting the team’s results, for better or worse.
I have found the Value Creation Model useful as a tool for identifying practical metrics and their leading indicators. Organizing the DevOps dashboard according to the DevOps Value Paths using metric trees gives practical indications of which actions you need to take to achieve your quality and speed goals.
If you want to learn more about Quality Intelligence for DevOps, have a look at my full white paper, How DevOps creates value and how to measure it, by clicking the link below.
Or start your own test automation cases right now and SIGN UP for FREE VERSION.
Juha-Markus Aalto is Director, Product Development and Operations at Qentinel, where he leads the DevOps team that builds and operates the Qentinel Pace SaaS product for robotic software testing. He is a SAFe® Contributor and has acted as a SAFe Advisor in large-scale Lean-Agile transformation programs supported by Qentinel.