How to Implement DORA Metrics: A Practical Guide for Engineering Teams
DORA metrics are the industry standard for measuring software delivery performance, but going from 'we should track DORA' to actually having reliable data is harder than it looks. This guide covers the practical side: where to get the data, which tools to use, how to establish baselines, how to set realistic improvement targets, and how to avoid the common mistakes that make DORA measurement fail.
Understanding the Four Metrics
The four DORA metrics measure two dimensions: speed (deployment frequency and lead time for changes) and stability (change failure rate and mean time to recovery). The breakthrough insight from DORA research is that these aren't tradeoffs — elite teams score high on all four. You can deploy frequently AND have low failure rates. Each metric requires different data sources: deployment frequency comes from your CI/CD pipeline or git tags, lead time comes from correlating git commit timestamps with deployment timestamps, change failure rate requires linking incidents to the deployments that caused them, and MTTR comes from your incident management system. Before implementing, make sure you have access to these data sources.
Key takeaway
Each DORA metric needs a different data source. Map your data sources before choosing tools — you can't measure what you can't observe.
Setting Up Data Collection
The biggest implementation challenge is reliably connecting deployment events to code changes. For deployment frequency: if you use CI/CD (GitHub Actions, GitLab CI, Jenkins), count successful production deployments from pipeline runs. If you deploy manually, tag each deployment in git. For lead time: you need to link a commit's timestamp to the deployment that included it — most git analytics tools do this by tracking PRs from creation to merge, then from merge to deploy. For change failure rate: connect your incident management tool (PagerDuty, Opsgenie, Incident.io) to your deployment log and mark which incidents were caused by a deployment. For MTTR: track incident duration from detection to resolution timestamps in your incident tool. The simplest starting point: use a tool like Gitmore for deployment frequency and lead time (derived from git data), and manually track CFR and MTTR from your incident log.
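The commit-to-deploy correlation described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the deploy records and timestamps are hypothetical, and in practice they would come from your CI/CD logs and `git log`.

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy records: each production deploy lists the commits it shipped.
# In practice, deployed_at comes from your CI/CD system and commit_times from git.
deploys = [
    {"deployed_at": datetime(2024, 3, 4, 15, 0),
     "commit_times": [datetime(2024, 3, 1, 10, 0), datetime(2024, 3, 3, 9, 30)]},
    {"deployed_at": datetime(2024, 3, 6, 11, 0),
     "commit_times": [datetime(2024, 3, 5, 16, 0)]},
]

def lead_times_hours(deploys):
    """Lead time for each change: commit timestamp to production deploy, in hours."""
    hours = []
    for d in deploys:
        for commit_time in d["commit_times"]:
            hours.append((d["deployed_at"] - commit_time).total_seconds() / 3600)
    return hours

lt = lead_times_hours(deploys)
print(f"median lead time: {median(lt):.1f}h")  # median over all shipped commits
```

Note that the unit of measurement is the commit, not the deploy: a single deploy that ships week-old commits correctly shows up as long lead time.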
Key takeaway
Start with deployment frequency and lead time — they can be fully automated from git data. Add CFR and MTTR manually, then automate as your incident tracking matures.
Establishing Your Baseline
Before setting improvement targets, measure where you are today. Collect 4-6 weeks of data before drawing conclusions — shorter periods are too noisy. Calculate each metric and map it to the DORA tier: Elite, High, Medium, or Low. Most teams are surprised by their results: they think they deploy frequently but the data shows once per week, or they think lead time is 2 days but it's actually 5 when you include review wait time. The baseline is not a judgment — it's a starting point. Share it with the team as 'here's where we are, here's what the benchmarks say, what should we improve?' The team's input on which metric to prioritize is critical for buy-in.
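Mapping a measured value to a DORA tier can be as simple as a threshold function. The bands below are paraphrased from DORA's published benchmarks for deployment frequency and are indicative only; check the current State of DevOps report for the exact cutoffs.

```python
def deploy_frequency_tier(deploys_per_month: float) -> str:
    """Map monthly production deploy count to an approximate DORA tier.
    Thresholds are paraphrased from DORA's published bands; treat as indicative."""
    if deploys_per_month >= 30:   # roughly daily or on demand
        return "Elite"
    if deploys_per_month >= 4:    # at least weekly
        return "High"
    if deploys_per_month >= 1:    # at least monthly
        return "Medium"
    return "Low"                  # less than once per month

print(deploy_frequency_tier(8))  # a weekly-ish cadence lands in "High"
```

Running the same classification over each week of your 4-6 week baseline window also shows how stable the tier is, which matters more than any single reading.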
Key takeaway
Measure for 4-6 weeks before acting. Share baselines with the team as a starting point for discussion, not as a report card.
Setting Improvement Targets
Set one metric as your quarterly improvement target, not all four. Trying to improve everything at once dilutes focus. Choose the metric with the highest impact and most actionable bottleneck. If lead time is your biggest issue and it's caused by slow code reviews, the target is clear: 'Reduce median review turnaround from 24 hours to 8 hours, which should reduce lead time from 5 days to 2 days.' Targets should be specific and achievable in one quarter. Moving from DORA Low to Medium in one metric per quarter is a realistic pace. Jumping from Low to Elite in a quarter is not — that requires fundamental process changes that take time to implement and stabilize. Review targets monthly and adjust if assumptions were wrong.
Key takeaway
Pick one metric per quarter. Set a specific, achievable target tied to a clear bottleneck. Moving up one DORA tier per quarter is a realistic pace.
Common Pitfalls to Avoid
The most common DORA implementation failure: measuring without acting. Teams set up dashboards, check them once, and never change anything. The fix: tie DORA metrics to specific process improvements with owners and deadlines. Second pitfall: using DORA metrics to compare teams. Story points aren't fungible across teams, and neither are DORA metrics — a team maintaining a legacy monolith has fundamentally different constraints than a team building a new microservice. Compare a team to its own history, never to other teams. Third pitfall: optimizing the metric instead of the outcome. If you incentivize deployment frequency, teams might deploy tiny changes to inflate the number. If you incentivize lead time, they might skip code review. Always measure multiple dimensions so gaming one metric shows up as degradation in another.
Key takeaway
Tie metrics to action plans, compare teams only to themselves, and measure multiple dimensions to prevent gaming.
Building a DORA Practice Long-Term
DORA metrics are most valuable as a long-term practice, not a one-time measurement. Build them into your regular cadence: review metrics in sprint retrospectives (use them to ground discussion in data), include them in monthly engineering reports (show leadership the trend, not just a snapshot), and reference them in quarterly planning (use bottleneck analysis to prioritize engineering investments). Over time, extend beyond the four core metrics: add developer satisfaction surveys (SPACE framework), PR throughput, and code review turnaround as operational leading indicators. The four DORA metrics tell you how the delivery pipeline performs; these additional metrics tell you why and help you predict where problems will emerge before they impact DORA scores.
Key takeaway
Make DORA a habit: weekly in retros, monthly in leadership reports, quarterly in planning. Extend with operational metrics over time.
How to get started
Map Your Data Sources
Identify where deployment events, code changes, incidents, and recovery timestamps live in your toolchain. You need: git platform (GitHub/GitLab/Bitbucket), CI/CD system, and incident management tool.
Connect a Git Analytics Tool
Set up Gitmore or similar to automatically track deployment frequency and lead time from your git data. This gives you two of the four metrics with zero manual effort.
Start Tracking Incidents
For each production incident, record: which deployment caused it, when it was detected, and when service was restored. Even a spreadsheet works to start. This gives you CFR and MTTR data.
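Even spreadsheet-level incident records are enough to compute CFR and MTTR. A minimal sketch, assuming each incident row links to the deployment that caused it (all IDs, timestamps, and counts here are hypothetical):

```python
from datetime import datetime

# Hypothetical incident log; 'caused_by_deploy' links an incident to a deployment ID.
incidents = [
    {"caused_by_deploy": "d42",
     "detected": datetime(2024, 3, 2, 10, 0), "resolved": datetime(2024, 3, 2, 11, 30)},
    {"caused_by_deploy": "d47",
     "detected": datetime(2024, 3, 9, 14, 0), "resolved": datetime(2024, 3, 9, 18, 0)},
]
total_deploys = 20  # from your deployment log over the same window

def change_failure_rate(incidents, total_deploys):
    """Fraction of deployments that caused at least one incident."""
    failing_deploys = {i["caused_by_deploy"] for i in incidents if i["caused_by_deploy"]}
    return len(failing_deploys) / total_deploys

def mttr_hours(incidents):
    """Mean time from detection to resolution, in hours."""
    durations = [(i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents]
    return sum(durations) / len(durations)

print(change_failure_rate(incidents, total_deploys))  # 2 failing deploys out of 20 -> 0.1
print(mttr_hours(incidents))  # (1.5h + 4h) / 2 -> 2.75
```

The key design point is the denominator: CFR divides failing deployments by total deployments, so you need the deploy count from the same time window as the incidents.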
Baseline for 4-6 Weeks
Collect data without trying to improve anything. Calculate each metric and map to DORA tiers. Share with the team.
Set One Quarterly Target
Pick the metric with the most actionable bottleneck. Set a specific target (e.g., 'reduce lead time from 5 days to 2 days'). Assign process changes to achieve it.
Review Monthly, Iterate Quarterly
Check metrics monthly to track progress. At quarter end, evaluate the target, pick the next metric to improve, and set new targets.
Expert advice
Deployment frequency and lead time can be measured from git data alone — start there before tackling CFR and MTTR, which require incident data
Don't let perfect be the enemy of good: a spreadsheet tracking incidents by deployment is better than no CFR measurement at all
Share DORA metrics in your engineering all-hands — transparency builds buy-in and lets teams learn from each other's improvement strategies
If lead time is your bottleneck, measure review turnaround separately — it's usually the largest contributor and the most actionable
Use Gitmore for automated deployment frequency and lead time tracking — it calculates these from your existing git workflow with no additional configuration
Common questions
How long does it take to implement DORA metrics?
Basic setup (deployment frequency + lead time from git data): 1 day. Full four-metric tracking with incident integration: 1-2 weeks. Reliable baselines: 4-6 weeks after setup. First meaningful improvement cycle: one quarter. Most teams see actionable data within the first month.
Do we need to buy a DORA metrics tool?
Not necessarily. You can calculate DORA metrics from existing data: CI/CD logs for deployment frequency, git timestamps for lead time, incident records for CFR and MTTR. But a tool like Gitmore automates collection and visualization, which is the difference between 'we measured once' and 'we track continuously.'
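As an illustration of the do-it-yourself route, deployment frequency can be derived from deploy timestamps you already have. The dates below are hypothetical; in practice they might be parsed from git tags (e.g. `git tag -l 'deploy-*'`) or CI/CD run logs.

```python
from collections import Counter
from datetime import date

# Hypothetical production deploy dates, e.g. parsed from deploy tags or CI logs.
deploy_dates = [date(2024, 3, 1), date(2024, 3, 1), date(2024, 3, 4), date(2024, 3, 11)]

def deploys_per_week(dates):
    """Average deploys per active ISO week."""
    weeks = Counter(d.isocalendar()[:2] for d in dates)  # group by (year, week)
    return sum(weeks.values()) / len(weeks)

print(deploys_per_week(deploy_dates))
```

This is enough to spot-check a vendor dashboard or to run a one-off baseline before committing to a tool.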
What if our DORA scores are bad?
That's normal and expected. Most teams start at Medium or Low. The value of DORA isn't having good scores — it's knowing where to improve. A team that measures Low and improves to Medium in a quarter has accomplished more than a team that's naturally High but doesn't know why.
Should we report DORA metrics to leadership?
Yes — but with context. Report the trend (improving/stable/declining) and the action plan, not just the numbers. 'We improved deployment frequency from 3x/week to 8x/week by automating our release process' is a story leadership can understand and support.
Automate Your Git Reporting
Stop compiling reports manually. Let your code speak for itself with automated daily and weekly reports.
Get Started Free
No credit card • No sales call • Reports in 2 minutes