As DevOps quickly became the industry standard for software development, deployment, and management, the need grew for a valid, repeatable method of assessing its effectiveness. This was especially true as more and more organizations undertook full-blown digital transformations that were prone to failure. In response, a group of experts and researchers came together to develop a methodology for assessing and improving organizations’ DevOps practices. Today, DevOps Research and Assessment (DORA) has become instrumental in the DevOps space.
What is DORA?
DORA is a methodology that measures and improves the performance of software development and operations teams. It was born out of the collaboration between Puppet and a group of researchers and practitioners in engineering, DevOps, and IT Operations — namely Dr. Nicole Forsgren, Jez Humble, and Gene Kim. Together, the experts devised a reliable and repeatable way to measure the performance of software delivery by looking at specific metrics that have now become the go-to markers for success in development and operations.
DORA grew out of the years the research group spent working on the State of DevOps Reports. The group realized that transforming organizations had no reliable way to know whether they were being effective and efficient in their journey. In response, they developed an assessment for improving DevOps programs, bolstered by statistical analysis of the practices and performance of over 23,000 organizations. DORA quickly took off: the assessment became the standard, and the group behind it was acquired by Google Cloud in 2018.
What are DORA metrics?
DORA metrics consist of four key metrics that are used to measure the performance of DevOps teams:
Lead Time for Changes: This metric measures the time it takes for a code change to be implemented and deployed into production. Reducing lead time matters because it means faster delivery and, in turn, faster value to customers. With shorter lead times, teams can respond to feedback quickly and release new features and fixes more frequently.
Deployment Frequency: DevOps teams can hold themselves accountable by measuring how often code changes are deployed into production. This metric reflects how rapidly and consistently releases are being made. Increasing deployment frequency means teams can release more often, ship more fixes, and stay ahead of the competition.
Mean Time to Restore (MTTR): Knowing how long it takes to restore a service or application is critical for any DevOps team and organization. The ability to detect and resolve incidents quickly — in minutes versus hours — minimizes negative user experiences and avoids losses in users and revenue.
Change Failure Rate: This metric measures the percentage of changes that result in failure in production. A lower change failure rate means teams are able to deliver changes that are less likely to cause problems in production, reducing the risk of downtime and customer impact.
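To make the four definitions concrete, here is a minimal sketch of how they might be computed from raw delivery data. The record shapes, field names, and sample values are all hypothetical, not part of any standard DORA tooling:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: a commit time, a deploy time, and
# whether the change failed in production. Illustrative data only.
deployments = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 2, 9), "failed": False},
    {"committed": datetime(2024, 1, 3, 9), "deployed": datetime(2024, 1, 5, 9), "failed": True},
    {"committed": datetime(2024, 1, 8, 9), "deployed": datetime(2024, 1, 8, 17), "failed": False},
    {"committed": datetime(2024, 1, 9, 9), "deployed": datetime(2024, 1, 10, 9), "failed": False},
]

# Hypothetical incidents, each with a start and a restore time.
incidents = [
    {"started": datetime(2024, 1, 5, 10), "restored": datetime(2024, 1, 5, 11)},
    {"started": datetime(2024, 1, 12, 14), "restored": datetime(2024, 1, 12, 17)},
]

# Lead Time for Changes: mean commit-to-deploy duration.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment Frequency: deploys per day over the observed window.
window_days = (max(d["deployed"] for d in deployments)
               - min(d["deployed"] for d in deployments)).days or 1
deploy_frequency = len(deployments) / window_days

# Mean Time to Restore: mean incident duration.
restore_times = [i["restored"] - i["started"] for i in incidents]
mttr = sum(restore_times, timedelta()) / len(restore_times)

# Change Failure Rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Lead time for changes: {mean_lead_time}")
print(f"Deployment frequency:  {deploy_frequency:.2f}/day")
print(f"MTTR:                  {mttr}")
print(f"Change failure rate:   {change_failure_rate:.0%}")
```

In practice these numbers would come from a CI/CD pipeline and an incident tracker rather than hard-coded lists, but the arithmetic is the same.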
Why are DORA metrics important?
DORA metrics are important because they provide a way for organizations to measure and improve their DevOps practices.
Firstly, the metrics provide a common language for teams to identify and discuss areas for improvement while aligning goals. Because they focus on outcomes, DORA metrics also help teams better understand how their software development and delivery processes are impacting customers and mark areas for improvement. They essentially provide a framework for teams to continuously improve their processes through incremental changes. Finally, DORA metrics are aligned to DevOps principles of continuous delivery and integration.
How can teams use DORA metrics?
DORA metrics are only as good as the action plans built around them. Teams should use DORA metrics to set improvement targets and track progress over time. If a team wants to reduce its lead time for changes from three weeks to one, it can monitor lead time over time, see whether it is making progress, and adjust as needed. Assessing change failure rate helps teams investigate when the number is too high; it may signal a need to revisit testing practices, for instance, and find ways to improve coverage or automate tests to bring the failure rate down. The metrics can also surface inefficiencies beyond development teams: if lead time is the problem, development and operations teams can come together to identify ways to streamline the deployment process. We also can’t forget celebrating success. DORA metrics are an easy way to recognize when teams are going above and beyond!
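The target-tracking idea above can be sketched in a few lines. The weekly lead-time values and the target are made up for illustration; a real team would pull them from its delivery tooling:

```python
# Hypothetical weekly mean lead times for changes, in days,
# tracked against a one-week target. Values are illustrative.
weekly_lead_time_days = [21, 18, 16, 12, 9, 7]
target_days = 7

# Has the latest measurement reached the target?
on_target = weekly_lead_time_days[-1] <= target_days

# Is the trend improving (each week no worse than the last)?
improving = all(later <= earlier
                for earlier, later in zip(weekly_lead_time_days,
                                          weekly_lead_time_days[1:]))

print(f"On target: {on_target}, improving: {improving}")
```

Even a check this simple gives a team a concrete signal for retrospectives: whether the target is met, and whether the week-over-week direction is right.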
What are the challenges of DORA metrics?
Despite their benefits, there are some common challenges associated with DORA metrics to look out for:
Misalignment with the business: DORA metrics don’t always align with business goals. A team may want to focus on improving customer satisfaction or increasing revenue, which is not necessarily what DORA metrics track. It’s key that DORA metrics are used in conjunction with other metrics that align to business goals.
Difficulty measuring: Some metrics, like lead time for changes, are easier to measure than others, like mean time to restore (MTTR). Beyond assessment and understanding, certain metrics require additional tooling and processes that can add complexity and cost.
Hard to interpret: DORA metrics provide quantitative data, but that data doesn’t always translate neatly into insight or action. Just because lead time for changes has improved, it isn’t necessarily clear why, or whether the change has had a net positive impact on users.
Overreliance on metrics: Perhaps the easiest pitfall to fall into is becoming overly focused on the metrics rather than the underlying processes. This can mean optimizing the numbers at the expense of overall quality, reliability, and user experience.
How will DORA metrics evolve?
The future is bright for metrics in general, which will only continue to be better surfaced and contextualized for greater impact. We can expect DORA metrics to become more closely aligned with security, integrated with machine learning and AI, and even to expand into domains beyond development.
It’s hard to predict exactly how DORA metrics will evolve, but we can be sure certain trends will be part of the process. While DORA metrics focus on software development and delivery, there may be a closer focus on outcomes — customer satisfaction, revenue impact, or other business metrics. We can also expect DORA-style metrics to become more granular and provide better insight into discrete aspects of DevOps like testing. Additionally, the number and variety of metrics being observed and utilized will only grow. Organizations should make sure their tooling builds on DORA paradigms and continues to improve the efficacy and efficiency of their software development practices.