The Engineering Manager's Guide to Developer Metrics
Engineering managers live in a measurement paradox: leadership wants data on team performance, developers fear being reduced to numbers, and the metrics that are easiest to collect (lines of code, commit count) are the least meaningful. This guide helps you navigate that paradox: which metrics actually matter, how to use them without destroying trust, how to present data to leadership, and how to build a measurement practice that helps your team improve instead of making them defensive.
The Metrics That Matter
After a decade of research (DORA, SPACE, DX), the industry has converged on a clear set of meaningful metrics. For delivery performance: deployment frequency, lead time for changes, change failure rate, and mean time to restore, or MTTR (together, the DORA four). For operational health: PR throughput, code review turnaround, and cycle time. For team health: developer satisfaction (survey-based), meeting load, and on-call burden. For risk: bus factor (knowledge concentration). These metrics share three properties: they're outcome-oriented (measuring results, not activity), they're resistant to gaming (improving the metric genuinely improves the outcome), and they're measurable from existing systems (git, CI/CD, incident tools, surveys). Metrics that fail these tests — lines of code, hours worked, commit count — should be avoided entirely.
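To make "measurable from existing systems" concrete, here is a minimal sketch of two of the DORA four computed from commit and deploy timestamps. The data shape is hypothetical — it assumes you can export, for each deploy, the timestamp of the deploy and of the commit it shipped (most CI/CD systems can produce this); it is not a specific tool's API.

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy records: each pairs a commit timestamp with the
# timestamp of the deploy that shipped it (e.g. exported from CI/CD).
deploys = [
    {"committed": datetime(2024, 5, 1, 9, 0), "deployed": datetime(2024, 5, 1, 15, 0)},
    {"committed": datetime(2024, 5, 2, 10, 0), "deployed": datetime(2024, 5, 3, 11, 0)},
    {"committed": datetime(2024, 5, 6, 14, 0), "deployed": datetime(2024, 5, 7, 9, 0)},
]

def deployment_frequency(deploys, window_days=7):
    """Deploys per week, averaged over the observed window."""
    times = sorted(d["deployed"] for d in deploys)
    span_days = max((times[-1] - times[0]).days, 1)
    return len(times) / span_days * window_days

def median_lead_time(deploys):
    """Median hours from commit to production (DORA lead time for changes)."""
    hours = [(d["deployed"] - d["committed"]).total_seconds() / 3600
             for d in deploys]
    return median(hours)
```

Change failure rate and MTTR follow the same pattern, but need incident-tool data (which deploys triggered incidents, and when those incidents were resolved) joined to the deploy log.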
Key takeaway
Focus on DORA metrics for delivery, operational metrics for bottlenecks, and satisfaction surveys for team health. Avoid activity metrics like LOC and commit count.
The Golden Rule: Team-Level, Not Individual
The single most important principle: use metrics at the team level for process improvement, not at the individual level for performance ranking. The moment you rank developers by PR count or review speed, you've created a system that optimizes for gaming instead of outcomes. Developers will split PRs artificially, approve reviews without reading them, and avoid complex work that produces fewer countable outputs. Individual metrics have one valid use: self-improvement. A developer who sees their own review turnaround is 3x slower than the team average can choose to prioritize reviews. This is empowering, not threatening. But that data should never appear in a team-wide ranking or performance review. Frame all metrics as 'how is the team performing?' not 'how is each developer performing?'
Key takeaway
Team-level metrics for management decisions. Individual metrics for personal self-improvement only. Never rank developers by metrics.
Using Metrics in 1:1s and Reviews
Metrics can enrich 1:1 conversations when used carefully. Good use: 'I noticed our team's cycle time increased this sprint. What do you think is causing it?' This invites the developer into problem-solving at the team level. Bad use: 'Your PR count was lower than the team average this month.' This reduces the developer to a number and invites defensiveness. In performance reviews, reference outcomes, not activity: 'You led the payments refactor that reduced checkout errors by 40%' is based on impact. 'You merged 15 PRs' is based on activity and means nothing without context. Git data can help you prepare for reviews by reminding you what the developer actually shipped (Gitmore's reports are useful here), but the review conversation should focus on impact, growth, and contribution — not metric dashboards.
Key takeaway
In 1:1s, use metrics to discuss team-level trends. In reviews, discuss outcomes and impact, not activity counts.
Presenting Metrics to Leadership
Leadership wants to know three things: Are we shipping? Are we reliable? Can we sustain this pace? Map your metrics to these questions. Shipping: deployment frequency (how often), lead time (how fast), PR throughput (how much). Reliability: change failure rate (how stable), MTTR (how resilient), uptime. Sustainability: developer satisfaction, headcount vs. attrition, on-call burden. Present trends, not snapshots: 'Deployment frequency improved from 5 to 12 per week this quarter after we automated our release process' tells a story. '12 deployments this week' is a number without context. Always pair metrics with narrative. Numbers without explanation create anxiety ('is 12 good or bad?'). Narrative without numbers lacks credibility ('trust us, we're shipping faster'). Together, they build confidence in engineering as a strategic investment.
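The "trends, not snapshots" advice can be automated: bucket deploys by week and generate the trend sentence, then add the narrative by hand. A rough sketch, assuming only a list of deploy dates (the dates and function names here are illustrative, not a real tool's output):

```python
from collections import Counter
from datetime import date

# Hypothetical deploy dates exported from your release pipeline.
deploy_dates = [
    date(2024, 4, 1), date(2024, 4, 3), date(2024, 4, 4),   # early quarter
    date(2024, 6, 24), date(2024, 6, 25), date(2024, 6, 26),
    date(2024, 6, 27), date(2024, 6, 28),                   # late quarter
]

def weekly_counts(dates):
    """Count deploys per ISO week, keyed like '2024-W14'."""
    return Counter(f"{d.isocalendar()[0]}-W{d.isocalendar()[1]:02d}"
                   for d in dates)

def trend_summary(dates):
    """A trend sentence for leadership, not a raw snapshot."""
    counts = weekly_counts(dates)
    weeks = sorted(counts)
    first, last = weeks[0], weeks[-1]
    return (f"Deployment frequency moved from {counts[first]}/week ({first}) "
            f"to {counts[last]}/week ({last}).")
```

The output is deliberately a sentence rather than a chart: it forces you to attach the "because we automated releases" narrative before it reaches leadership.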
Key takeaway
Map metrics to leadership's questions (shipping? reliable? sustainable?). Present trends with narrative, not raw numbers.
Building Trust Around Measurement
The biggest risk with developer metrics isn't collecting the wrong data — it's losing your team's trust. Developers who feel surveilled will disengage, game metrics, or leave. Build trust through transparency: share the same dashboard with the team that you share with leadership. No hidden metrics, no secret reports. Ask the team which metrics they think are useful and let them influence the measurement approach. When metrics reveal a problem (declining throughput, rising cycle time), diagnose it as a team in the retrospective — don't use it as a 'gotcha.' Show the team that metrics lead to process improvements (faster CI, better review rotation, fewer meetings) rather than punishment. It takes 3-6 months to build a healthy metrics culture. Early wins help: when the team sees that tracking review turnaround led to a policy that freed 5 hours per week, they become metrics advocates.
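The review-turnaround tracking mentioned above reduces to one aggregate number you can share openly. A minimal sketch, assuming you can pull "review requested" and "first review" timestamps from your Git host's API (the event shape below is hypothetical); note that it deliberately reports a team-wide median, never a per-developer ranking:

```python
from datetime import datetime
from statistics import median

# Hypothetical PR events: when review was requested on each PR,
# and when the first review landed.
prs = [
    {"requested": datetime(2024, 5, 1, 9, 0),
     "first_review": datetime(2024, 5, 1, 13, 0)},
    {"requested": datetime(2024, 5, 1, 16, 0),
     "first_review": datetime(2024, 5, 2, 10, 0)},
    {"requested": datetime(2024, 5, 2, 11, 0),
     "first_review": datetime(2024, 5, 2, 12, 0)},
]

def team_review_turnaround_hours(prs):
    """Median hours from review request to first review, team-wide.

    Publish this aggregate on the shared dashboard; never break it
    down into a per-developer leaderboard.
    """
    waits = [(p["first_review"] - p["requested"]).total_seconds() / 3600
             for p in prs]
    return median(waits)
```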
Key takeaway
Share the same data with your team and leadership. Use metrics to drive process improvements, not performance reviews. Trust takes 3-6 months to build.
Common Mistakes and How to Avoid Them
Measuring too many things: start with 5 metrics, not 20. Dashboard fatigue is real — if you track everything, nobody looks at anything.
Comparing teams to each other: a platform team and a product team have fundamentally different output profiles. Compare teams to their own history.
Using metrics as a substitute for management: metrics reveal patterns, but understanding why requires conversation. A dip in throughput could mean tech debt, unclear requirements, or a key person being out sick. The metric shows the symptom; 1:1s and retros reveal the cause.
Ignoring lagging indicators: if throughput is high but satisfaction is low, you have a burnout problem that will show up in attrition 6 months from now. Always pair activity metrics with well-being metrics.
Not acting on data: the worst outcome is measuring everything and changing nothing. Each metric should connect to a specific improvement lever that someone owns.
Key takeaway
5 metrics not 20. Compare teams to themselves. Use metrics as conversation starters, not conclusions. Always pair activity with satisfaction.
Expert advice
Automate metric collection with Gitmore — if collecting data requires manual effort, you'll stop doing it within 2 months
Share your engineering dashboard with the entire team, not just leadership. Transparency builds trust and helps the team self-correct
In every retrospective, reference one metric. This normalizes data-driven discussion without making metrics feel like surveillance
When a metric looks bad, ask 'what changed in our process?' before asking 'who caused this?' Systems produce results, not individuals
Developer satisfaction is the leading indicator that predicts everything else. Survey quarterly at minimum
Common questions
Should I share individual developer metrics with each developer?
Yes — but only with that individual, never with others. A developer seeing their own review turnaround or PR size distribution is useful for self-improvement. The same data shared in a team ranking is harmful. Tools like Gitmore show individual data to the individual and team aggregates to the manager.
How do I measure productivity for senior engineers?
Senior engineers often have lower activity metrics (fewer PRs, fewer commits) because their impact comes through design reviews, mentoring, architecture decisions, and unblocking others. Measure their impact through qualitative assessment in 1:1s and by tracking the outcomes of their technical decisions, not PR counts.
What if the team pushes back on metrics?
Pushback usually means fear of surveillance. Address it directly: 'These are team-level metrics for process improvement. We'll never rank individuals by numbers. Here's the dashboard — you can see exactly what I see.' Then demonstrate trust by using metrics to improve their work life (fewer meetings, faster CI, better review process), not to judge them.
How often should I review metrics with the team?
Weekly operational metrics (PR throughput, review turnaround) in the team Slack channel. Sprint-level metrics (velocity, cycle time) in the retrospective. Monthly trends (DORA, satisfaction) in the team all-hands or leadership report. Don't review metrics daily — it creates anxiety over normal variation.
Automate Your Git Reporting
Stop compiling reports manually. Let your code speak for itself with automated daily and weekly reports.
Get Started Free. No credit card • No sales call • Reports in 2 minutes