
How to Track Developer Activity with Git Reports

February 20, 2026 · 12 min read · Gitmore Team

It's Monday morning. Your PM needs a sprint update by noon. You open GitHub, click through 14 pull requests across 3 repositories, cross-reference Jira tickets, check who reviewed what, try to remember if that hotfix from Thursday ever got deployed, and 45 minutes later you have a rough summary that's already incomplete because someone pushed code to GitLab and you forgot to check there.

This is the reality for most engineering managers. Developer activity is scattered across platforms, repositories, branches, and communication channels. The data exists, but assembling it into something useful is a manual, time-consuming, and error-prone process.

This guide covers how to build a proper developer activity tracking system using automated git reports: what metrics actually matter, why common approaches fail at scale, and how modern tools turn raw git data into actionable engineering intelligence.


The Problem with Raw Git Data

Git is the single source of truth for what happened in your codebase. Every commit, branch, merge, and tag is recorded with timestamps and author information. In theory, you have everything you need. In practice, turning git log output into a meaningful team report is like reading raw database rows and calling it analytics.

Here's what git data alone does not tell you:

  • Context of the change: A commit message like "fix auth bug" doesn't tell you it was a critical security patch that blocked a release
  • Review bottlenecks: You can't see from git log that a PR sat in review for 4 days because the only person who could review it was on vacation
  • Work categorization: Was the team spending 70% of their time on bug fixes or new features? Raw commits don't tell you
  • Risky operations: Direct pushes to protected branches, force-pushes, or commits bypassing the PR process are invisible unless you actively watch for them
  • Non-code contributions: PR reviews, design discussions, code comments, and documentation work are invisible in commit history

The gap between "data exists" and "data is useful" is where most teams get stuck. This is exactly the gap that git reporting tools are designed to fill.
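To make that gap concrete, here is a minimal Python sketch that parses `git log --pretty=format:'%h|%an|%as|%s'` output into structured records. Even fully parsed, notice how little context survives: no reviews, no categories, no link to the work behind the change. (The pipe-delimited format string is just one convenient choice.)

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    author: str
    date: str
    message: str

def parse_git_log(raw: str) -> list[Commit]:
    """Parse `git log --pretty=format:'%h|%an|%as|%s'` output.

    Each line: short sha | author name | date | subject line.
    This is everything raw git hands you -- no review data,
    no categorization, no reason *why* the change happened.
    """
    commits = []
    for line in raw.strip().splitlines():
        sha, author, date, message = line.split("|", 3)
        commits.append(Commit(sha, author, date, message))
    return commits

# Three commits, none of which explain their own context.
sample = (
    "a1b2c3d|Sarah|2026-02-16|fix auth bug\n"
    "e4f5a6b|Sarah|2026-02-16|wip\n"
    "c7d8e9f|Tom|2026-02-17|update webhook handler"
)
commits = parse_git_log(sample)
```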


Defining Developer Activity: The Metrics That Actually Matter

"Developer activity" is a broad term that gets misused frequently. Counting commits per day is not activity tracking; it's a vanity metric. A developer who writes 2 commits refactoring a critical authentication system is contributing more than someone who pushes 20 commits of copy changes.

Meaningful developer activity tracking covers multiple dimensions:

1. Commit Activity and Code Changes

This is the foundation. But the value isn't in counting commits. It's in understanding what was changed and why. A good git report should aggregate commits by developer, group them by feature or area, and surface the overall narrative: "Sarah spent the week on the payment integration, touching 12 files across 3 services. Most changes were in the Stripe webhook handler and the order confirmation flow."

Key signals to track from commits:

  • Files changed and their location in the codebase (frontend, backend, infra, tests)
  • Commit message patterns and quality (are they descriptive or just "wip"?)
  • Frequency and timing (are commits happening in bursts before deadlines, or steadily?)
  • Branches being worked on and their relationship to planned features
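A first pass at the file-location signal can be scripted. The sketch below (Python; the directory prefixes are assumptions about a typical repo layout, not a convention) maps changed file paths to codebase areas and tallies them:

```python
from collections import Counter

# Assumed layout -- adjust these prefixes to match your repository.
AREA_PREFIXES = {
    "frontend/": "frontend",
    "backend/": "backend",
    "infra/": "infra",
    "tests/": "tests",
}

def area_for(path: str) -> str:
    """Map a changed file path to a codebase area."""
    for prefix, area in AREA_PREFIXES.items():
        if path.startswith(prefix):
            return area
    return "other"

def area_breakdown(changed_files: list[str]) -> Counter:
    """Count changed files per area for one developer's commits."""
    return Counter(area_for(p) for p in changed_files)

breakdown = area_breakdown([
    "backend/payments/stripe.py",
    "backend/payments/webhooks.py",
    "tests/test_stripe.py",
])
```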

2. Pull Request Lifecycle

Pull requests are where the real engineering workflow happens. A PR is not just a code change request. It's a unit of work that goes through creation, review, iteration, and merge (or rejection). Each stage produces valuable data:

  • Time to first review: How long does a PR sit before someone looks at it? If this number is consistently above 24 hours, you have a review bottleneck
  • Review cycles: How many rounds of changes-requested does a typical PR go through? More than 2 rounds often signals unclear requirements or misaligned architecture decisions
  • Time to merge: The total time from PR creation to merge. This is your engineering throughput metric. For most teams, anything above 3 days for a standard feature PR is a problem
  • PR size: Large PRs (500+ lines changed) are statistically more likely to introduce bugs and take longer to review. Tracking average PR size helps teams maintain discipline around small, reviewable changes
  • Stale PRs: PRs that have been open for more than a week with no activity are often forgotten work. Surfacing these prevents waste
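The stage timestamps above are all you need to compute the core lifecycle metrics. A minimal sketch, assuming a simplified PR record with ISO-8601 timestamps (the field names are illustrative, not any provider's API):

```python
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

def pr_metrics(pr: dict) -> dict:
    """Compute time-to-first-review, time-to-merge, and the
    large-PR flag from a simplified PR record."""
    return {
        "time_to_first_review_h": hours_between(pr["created"], pr["first_review"]),
        "time_to_merge_h": hours_between(pr["created"], pr["merged"]),
        "large": pr["lines_changed"] >= 500,
    }

m = pr_metrics({
    "created": "2026-02-16T09:00:00",
    "first_review": "2026-02-17T15:00:00",  # 30h wait: a review bottleneck
    "merged": "2026-02-18T09:00:00",
    "lines_changed": 620,
})
```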

3. Code Review Participation

Review activity is one of the most underrated indicators of team health. In many teams, 1-2 senior developers do 80% of the code reviews, creating a single point of failure and a constant bottleneck. Tracking review distribution reveals:

  • Who is reviewing and how often
  • Average review response time per reviewer
  • Whether reviews are substantive (comments, suggestions) or rubber-stamped approvals
  • Review load balance across the team

A team where reviews are evenly distributed and response times are under 4 hours is a team that ships fast. A team where one person reviews everything and takes 2 days is a team with a bus factor of 1.
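Review load balance is simple to quantify once you have a list of completed reviews. A sketch of the calculation (one reviewer name per completed review):

```python
from collections import Counter

def review_load(reviews: list[str]) -> dict:
    """Share of total reviews done by each reviewer.

    A distribution where one name holds most of the share is
    the bus-factor-of-1 situation described above.
    """
    counts = Counter(reviews)
    total = len(reviews)
    return {name: count / total for name, count in counts.items()}

# One senior developer carrying 80% of the review load.
load = review_load(["alice", "alice", "alice", "alice", "bob"])
```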

4. Work Categorization and Allocation

One of the most powerful things a git reporting tool can do is automatically categorize work. By analyzing commit messages, PR descriptions, and file changes with AI, reports can break down activity into categories:

  • New features: Greenfield development, new endpoints, new UI components
  • Bug fixes: Patches, hotfixes, regression fixes
  • Refactoring: Code cleanup, architecture improvements, tech debt reduction
  • Infrastructure: CI/CD changes, deployment configs, monitoring setup
  • Testing: New tests, test infrastructure, coverage improvements
  • Documentation: READMEs, API docs, inline comments

This is critical for engineering leaders. If your team is spending 60% of their time on bug fixes and only 15% on new features, that tells you something important about code quality, technical debt, and planning accuracy. Without categorization, you're flying blind.
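AI-based tools infer categories from the full diff and description, but the idea can be illustrated with a crude keyword baseline. This sketch (keyword table and category names are my own, deliberately simplistic) shows the shape of the output a categorization layer produces:

```python
from collections import Counter

# Crude keyword heuristic -- an LLM does far better, but the
# output shape (category -> percentage) is the same.
CATEGORY_KEYWORDS = {
    "fix": "bug fix",
    "hotfix": "bug fix",
    "refactor": "refactoring",
    "test": "testing",
    "docs": "documentation",
    "deploy": "infrastructure",
}

def categorize(message: str) -> str:
    lowered = message.lower()
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in lowered:
            return category
    return "new feature"  # default bucket for everything else

def allocation(messages: list[str]) -> dict:
    """Percentage of commits falling in each category."""
    counts = Counter(categorize(m) for m in messages)
    return {cat: round(100 * n / len(messages)) for cat, n in counts.items()}

split = allocation([
    "fix login redirect",
    "hotfix: null stripe id",
    "add admin dashboard route",
    "refactor webhook retries",
])
```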


How Automated Git Reports Work

A git reporting tool connects to your GitHub, GitLab, or Bitbucket repositories and processes activity data through several layers:

Data Collection Layer

The tool receives webhook events from your git provider whenever something happens: a commit is pushed, a PR is opened, a branch is merged. This is real-time, event-driven data collection. No polling, no cron jobs, no delays. The tool only reads event metadata (commit messages, PR titles, author info, timestamps). It never accesses your actual source code.
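The metadata-only principle is visible in what the ingestion code touches. A sketch of extracting the useful fields from a push webhook payload (shape modeled on GitHub's push event; other providers differ, and real handlers also verify the webhook signature):

```python
def extract_push_metadata(payload: dict) -> dict:
    """Pull only the metadata a reporting tool needs from a push
    webhook payload. Note what is *not* read: diffs, file
    contents, source code of any kind.
    """
    return {
        "repo": payload["repository"]["full_name"],
        "branch": payload["ref"].removeprefix("refs/heads/"),
        "pusher": payload["pusher"]["name"],
        "commits": [
            {"id": c["id"][:7], "message": c["message"].splitlines()[0]}
            for c in payload.get("commits", [])
        ],
    }

event = extract_push_metadata({
    "ref": "refs/heads/main",
    "repository": {"full_name": "acme/backend"},
    "pusher": {"name": "sarah"},
    "commits": [{"id": "a1b2c3d4e5f6", "message": "fix auth bug\n\ndetails"}],
})
```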

AI Analysis Layer

Raw webhook data is then processed by an LLM that performs several tasks:

  • Summarization: Converts a list of 20 commits into a 2-sentence summary of what was accomplished
  • Categorization: Labels each piece of work as feature, fix, refactor, infra, docs, etc.
  • Critical flags: Surfaces important events like direct pushes to production branches, force-pushes, or other activity that may need immediate attention
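What the LLM receives can be as simple as a rendered prompt over commit metadata. A hedged sketch of the assembly step (the prompt wording is purely illustrative; production tools tune this heavily):

```python
def build_summary_prompt(commits: list[dict]) -> str:
    """Assemble an LLM prompt from commit metadata.

    Only author names and commit messages go in -- never
    source code, matching the metadata-only collection layer.
    """
    lines = [f"- {c['author']}: {c['message']}" for c in commits]
    return (
        "Summarize the following commits in two sentences, "
        "then label each as feature, fix, refactor, infra, or docs. "
        "Flag any direct push to a protected branch.\n\n"
        + "\n".join(lines)
    )

prompt = build_summary_prompt([
    {"author": "sarah", "message": "fix auth bug"},
    {"author": "tom", "message": "add admin dashboard route"},
])
```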

Report Generation and Delivery

The processed data is compiled into a structured report and delivered on your schedule. Most teams use one of two patterns:

  • Daily digest: A morning summary of yesterday's activity, delivered to Slack. This gives the team shared context so everyone starts the day aligned without needing to ask around
  • Weekly summary: A comprehensive report delivered Friday afternoon or Monday morning with full sprint-level visibility. This is what CTOs and VPs of Engineering typically want

What a Good Git Report Actually Contains

A useful git report is not a list of commits. It is a structured narrative of engineering activity that different people can read at different levels of detail. Here's what the best reports include:

Team Summary

A 3-4 sentence overview of what the team accomplished. Something like: "The team merged 23 PRs this week across 5 repositories. Major work included completing the Stripe payment integration (7 PRs), fixing 4 production bugs in the notification service, and starting the new admin dashboard. Review throughput was healthy with average time-to-merge at 18 hours."

Per-Developer Breakdown

For each developer, a summary of their contributions. Not commit counts, but actual descriptions: what they worked on, what they reviewed, what they merged. This is the section that gives managers instant clarity on who is working on what.

Work Distribution

A breakdown showing how engineering time was allocated: 45% new features, 25% bug fixes, 15% infrastructure, 10% refactoring, 5% documentation. Over time, these numbers reveal trends. If bug fixes are trending upward sprint over sprint, you have a quality problem that needs addressing.

PR Health Metrics

Open PRs, average review time, merge rate, stale PRs, and review load distribution. These are leading indicators: if review times start creeping up, you'll see delivery slow down 1-2 weeks later. Catching it early lets you rebalance before it becomes a problem.

Highlights and Alerts

Notable events that warrant attention: a large PR that might need extra scrutiny, a repository with no activity (is the project stalled?), an unusual number of force-pushes (is someone rewriting history?), or a developer who submitted code but received zero reviews in 48 hours.

The goal is not to monitor individuals. It's to give the team a shared, accurate picture of what happened so that conversations shift from "what are you working on?" to "how can I unblock you?"


Using Git Reports for Different Roles

For Engineering Managers

Daily reports replace the need to manually check GitHub every morning. You open Slack, read the digest, and know exactly what shipped yesterday, what's in review, and what's blocked. No more chasing people for updates. Weekly reports give you sprint-level visibility to share with product and leadership without spending Friday afternoon writing summaries. See how engineering managers use Gitmore.

For CTOs and VPs of Engineering

Weekly reports across all repositories give you the executive view: overall velocity, work distribution trends, team health indicators, and cross-team dependencies. You can answer "how is the Q1 initiative going?" with data instead of asking 4 team leads for subjective updates.

For Tech Leads

Reports help you track review load distribution, identify PRs that need your attention, and understand how the codebase is evolving. If a junior developer is making changes to a critical service, the report surfaces that early so you can offer guidance before the code reaches production.

For Remote and Async Teams

When your team spans 3+ timezones, synchronous meetings for status updates are a scheduling nightmare. Automated git reports solve this completely: everyone gets the same report at the same time, regardless of timezone. The report becomes the single source of truth for team activity, and async discussions can reference it directly. How async teams prepare for standups with automated reports.


Advanced Patterns: Beyond Basic Reporting

Trend Analysis Over Time

A single report is a snapshot. The real value comes from comparing reports over weeks and months. Is your team's velocity increasing, stable, or declining? Are review times getting better or worse? Is the ratio of new features to bug fixes moving in the right direction? These trends are impossible to spot manually but become obvious when you have consistent, automated data collection.
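Once the same metric is collected every week, trend detection is a small comparison. A deliberately simple sketch (my own rule: compare the average of the last two weeks against the two before, with a tolerance band):

```python
def trend(weekly_values: list[float], tolerance: float = 0.05) -> str:
    """Classify a weekly metric series as rising, falling, or
    stable by comparing the last two weeks' average against the
    two weeks before them. Needs at least four data points.
    """
    recent = sum(weekly_values[-2:]) / 2
    earlier = sum(weekly_values[-4:-2]) / 2
    if recent > earlier * (1 + tolerance):
        return "rising"
    if recent < earlier * (1 - tolerance):
        return "falling"
    return "stable"

# Average time-to-merge (hours) over four weeks: reviews are slowing.
direction = trend([18, 19, 24, 30])
```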

Per-Repository Reports

Each report covers a single repository, which keeps the data focused and actionable. If your team works across multiple repos (frontend, backend, infra), you set up a report for each one. This way, the backend team gets a report relevant to them, and the frontend team gets theirs, without noise from unrelated activity.

Custom Automations and Alerts

Beyond scheduled reports, you can set up event-driven automations. For example: notify the team lead when a PR has been open for more than 72 hours with no review, send a summary when a protected branch receives a push, or generate a report when a milestone is completed. This turns your reporting tool into an engineering operations layer.
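The 72-hour rule from the example can be expressed as a small check over open PRs. A sketch, assuming simplified PR records with timezone-aware open timestamps (field names are illustrative):

```python
from datetime import datetime, timezone

def stale_pr_alerts(open_prs: list[dict], now: datetime,
                    max_hours: int = 72) -> list[str]:
    """Return alert messages for PRs open longer than `max_hours`
    with no review -- the kind of rule an event-driven
    automation layer evaluates."""
    alerts = []
    for pr in open_prs:
        age_h = (now - pr["opened_at"]).total_seconds() / 3600
        if age_h > max_hours and not pr["reviewed"]:
            alerts.append(f"PR #{pr['number']} has waited {age_h:.0f}h for a review")
    return alerts

now = datetime(2026, 2, 20, 12, 0, tzinfo=timezone.utc)
alerts = stale_pr_alerts([
    {"number": 101, "opened_at": datetime(2026, 2, 16, 12, 0, tzinfo=timezone.utc), "reviewed": False},
    {"number": 102, "opened_at": datetime(2026, 2, 19, 12, 0, tzinfo=timezone.utc), "reviewed": False},
], now)
```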


Setting It Up

Most git reporting tools, including Gitmore, connect to your git provider via OAuth. No code access required, no SSH keys, no YAML files, no CI/CD pipeline changes. You authenticate, select your repositories, configure your schedule and delivery channel (Slack or email), and your first report arrives within hours.

For a complete walkthrough, see our step-by-step setup guide.


The Visibility vs. Surveillance Line

Any conversation about tracking developer activity needs to address this directly. There is a meaningful difference between visibility and surveillance, and it comes down to intent and application.

Visibility means the whole team has shared context about what's happening. Everyone sees the same report. It's used to coordinate, unblock, and celebrate. When a report shows that someone's PRs haven't been reviewed, the response is "let me review that today," not "why isn't this done?"

Surveillance is when managers use activity data to judge individual performance based on metrics like commit count or lines of code. This is counterproductive. It incentivizes gaming the metrics rather than doing meaningful work.

The best teams share reports openly, use them as a communication tool rather than an evaluation tool, and focus on team-level patterns rather than individual metrics. The goal is always the same: less time reporting, more time building. See how productivity reports work in practice.

Explore git reporting for your platform

Try Gitmore for free

Automated git reports for your engineering team. Set up in 2 minutes, no credit card required.

Get Started Free