How to Add a Git Reporting Tool to Your Repository
A step-by-step guide to connecting your GitHub, GitLab, or Bitbucket repo to an automated reporting tool. Takes less time than making a sandwich.
February 20, 2026 · 12 min read · Gitmore Team
It's Monday morning. Your PM needs a sprint update by noon. You open GitHub, click through 14 pull requests across 3 repositories, cross-reference Jira tickets, check who reviewed what, try to remember if that hotfix from Thursday ever got deployed, and 45 minutes later you have a rough summary that's already incomplete because someone pushed code to GitLab and you forgot to check there.
This is the reality for most engineering managers. Developer activity is scattered across platforms, repositories, branches, and communication channels. The data exists, but assembling it into something useful is a manual, time-consuming, and error-prone process.
This guide covers how to build a proper developer activity tracking system using automated git reports: what metrics actually matter, why common approaches fail at scale, and how modern tools turn raw git data into actionable engineering intelligence.
Git is the single source of truth for what happened in your codebase. Every commit, branch, merge, and tag is recorded with timestamps and author information. In theory, you have everything you need. In practice, turning git log output into a meaningful team report is like reading raw database rows and calling it analytics.
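To see what "raw database rows" looks like in practice, here is a minimal sketch of pulling commit data straight out of `git log`. It assumes a local repository and uses only Python's standard library; the pipe-delimited format string is just one convenient choice:

```python
import subprocess

LOG_FORMAT = "%H|%an|%aI|%s"  # hash | author name | ISO author date | subject

def fetch_log(repo_path="."):
    """Run `git log` against a local repository and return its raw text output."""
    return subprocess.run(
        ["git", "-C", repo_path, "log", f"--pretty=format:{LOG_FORMAT}"],
        capture_output=True, text=True, check=True,
    ).stdout

def parse_log(text):
    """Split raw log text into (hash, author, date, subject) tuples.

    This is the 'raw rows' view: accurate, timestamped, attributed --
    but with no narrative about features, reviews, or blockers.
    """
    return [tuple(line.split("|", 3)) for line in text.splitlines() if line.strip()]
```

Everything a report needs is in those rows; none of the meaning is.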
Here's what git data alone does not tell you: why a change was made, how individual commits map to features or initiatives, who is blocked waiting on a review, or how the team's effort splits between new work and maintenance.
The gap between "data exists" and "data is useful" is where most teams get stuck. This is exactly the gap that git reporting tools are designed to fill.
"Developer activity" is a broad term that gets misused frequently. Counting commits per day is not activity tracking. It's a vanity metric. A developer who writes 2 commits that refactor a critical authentication system is contributing more than someone who pushes 20 commits of copy changes.
Meaningful developer activity tracking covers multiple dimensions: commits, pull requests, code reviews, and how the work breaks down by type.
Commit activity is the foundation. But the value isn't in counting commits; it's in understanding what was changed and why. A good git report should aggregate commits by developer, group them by feature or area, and surface the overall narrative: "Sarah spent the week on the payment integration, touching 12 files across 3 services. Most changes were in the Stripe webhook handler and the order confirmation flow."
Key signals to track from commits include which files and services were touched, how changes cluster around features, and the story each group of changes tells together.
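The aggregation step above can be sketched in a few lines. The commit records here are hypothetical dicts (keys `author`, `message`, `files` are invented for illustration, not a real API):

```python
from collections import defaultdict

def group_by_author(commits):
    """Group commit records by author and collect the files each one touched.

    `commits` is a list of dicts shaped like:
    {"author": str, "message": str, "files": [str, ...]}
    """
    summary = defaultdict(lambda: {"commits": 0, "files": set()})
    for c in commits:
        entry = summary[c["author"]]
        entry["commits"] += 1
        entry["files"].update(c["files"])
    # Convert sets to sorted lists for stable, report-friendly output
    return {author: {"commits": v["commits"], "files": sorted(v["files"])}
            for author, v in summary.items()}
```

From this per-author view, a narrative like "Sarah touched 12 files across 3 services" is one formatting step away.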
Pull requests are where the real engineering workflow happens. A PR is not just a code change request. It's a unit of work that goes through creation, review, iteration, and merge (or rejection). Each stage produces valuable data:
Review activity is one of the most underrated indicators of team health. In many teams, 1-2 senior developers do 80% of the code reviews, creating a single point of failure and a constant bottleneck. Tracking review distribution reveals who carries the review load, where the bottlenecks are, and how quickly work moves through the pipeline.
A team where reviews are evenly distributed and response times are under 4 hours is a team that ships fast. A team where one person reviews everything and takes 2 days is a team with a bus factor of 1.
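Review distribution is cheap to compute once you have (reviewer, PR) pairs. A minimal sketch; the input shape is invented for illustration:

```python
from collections import Counter

def review_load(reviews):
    """Given (reviewer, pr_id) pairs, return each reviewer's share of all reviews.

    One person holding well over half the share is the
    'bus factor of 1' pattern described above.
    """
    counts = Counter(reviewer for reviewer, _ in reviews)
    total = sum(counts.values())
    return {reviewer: n / total for reviewer, n in counts.items()}
```

A weekly report that includes this one dictionary makes the bottleneck visible before it becomes a delivery problem.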
One of the most powerful things a git reporting tool can do is automatically categorize work. By analyzing commit messages, PR descriptions, and file changes with AI, reports can break down activity into categories: new features, bug fixes, refactoring, infrastructure work, and documentation.
This is critical for engineering leaders. If your team is spending 60% of their time on bug fixes and only 15% on new features, that tells you something important about code quality, technical debt, and planning accuracy. Without categorization, you're flying blind.
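To make the idea concrete, here is a deliberately naive keyword-based categorizer, a stand-in for the AI classification described above. Real tools are far more robust; this only shows the shape of the output:

```python
from collections import Counter

# Keyword lists are illustrative, not exhaustive
CATEGORY_KEYWORDS = {
    "bug fix": ("fix", "bug", "hotfix", "patch"),
    "feature": ("add", "implement", "feature"),
    "refactoring": ("refactor", "cleanup", "rename"),
    "documentation": ("docs", "readme", "comment"),
}

def categorize(message):
    """Assign a commit message to the first category whose keyword matches."""
    msg = message.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in msg for k in keywords):
            return category
    return "other"

def work_breakdown(messages):
    """Percentage of commits per category, like the report breakdowns below."""
    counts = Counter(categorize(m) for m in messages)
    total = sum(counts.values())
    return {c: round(100 * n / total) for c, n in counts.items()}
```

Even this crude version surfaces the "60% bug fixes" signal; an LLM-based classifier does the same with messy, real-world commit messages.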
A git reporting tool connects to your GitHub, GitLab, or Bitbucket repositories and processes activity data through several layers:
The tool receives webhook events from your git provider whenever something happens: a commit is pushed, a PR is opened, a branch is merged. This is real-time, event-driven data collection. No polling, no cron jobs, no delays. The tool only reads event metadata (commit messages, PR titles, author info, timestamps). It never accesses your actual source code.
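A sketch of the metadata-only extraction step. The field names follow the shape of GitHub's push event payload (`commits[].id`, `message`, `timestamp`, `author.name`); other providers use slightly different keys:

```python
def extract_push_metadata(event):
    """Pull only the metadata a reporting tool needs from a push webhook payload.

    Nothing here touches file contents or diffs -- only commit
    messages, authors, and timestamps, as described above.
    """
    return [
        {
            "sha": c.get("id"),
            "author": c.get("author", {}).get("name"),
            "message": c.get("message"),
            "timestamp": c.get("timestamp"),
        }
        for c in event.get("commits", [])
    ]
```

Because the handler never requests repository contents, the tool's access can stay limited to event metadata.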
Raw webhook data is then processed by an LLM that performs several tasks: summarizing commits and PRs in plain language, categorizing the work by type, and flagging anything unusual that warrants attention.
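One way to picture this step is the packaging of event metadata into a model prompt. The prompt format below is invented for illustration, and no specific LLM API is assumed:

```python
def build_summary_prompt(commits):
    """Assemble commit metadata into a single prompt asking the model to
    summarize, categorize, and flag anomalies -- the three tasks above."""
    lines = [f"- {c['author']}: {c['message']}" for c in commits]
    return (
        "Summarize the following commits as a short narrative, "
        "categorize each as feature / bug fix / refactor / docs, "
        "and flag anything unusual:\n" + "\n".join(lines)
    )
```

The response then gets parsed into the structured report sections described below; only metadata ever leaves your git provider.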
The processed data is compiled into a structured report and delivered on your schedule. Most teams use one of two patterns: a daily digest delivered every morning, or a weekly summary timed to the end of the sprint.
A useful git report is not a list of commits. It is a structured narrative of engineering activity that different people can read at different levels of detail. Here's what the best reports include:
A 3-4 sentence overview of what the team accomplished. Something like: "The team merged 23 PRs this week across 5 repositories. Major work included completing the Stripe payment integration (7 PRs), fixing 4 production bugs in the notification service, and starting the new admin dashboard. Review throughput was healthy with average time-to-merge at 18 hours."
For each developer, a summary of their contributions. Not commit counts, but actual descriptions: what they worked on, what they reviewed, what they merged. This is the section that gives managers instant clarity on who is working on what.
A breakdown showing how engineering time was allocated: 45% new features, 25% bug fixes, 15% infrastructure, 10% refactoring, 5% documentation. Over time, these numbers reveal trends. If bug fixes are trending upward sprint over sprint, you have a quality problem that needs addressing.
Open PRs, average review time, merge rate, stale PRs, and review load distribution. These are leading indicators: if review times start creeping up, you'll see delivery slow down 1-2 weeks later. Catching it early lets you rebalance before it becomes a problem.
Notable events that warrant attention: a large PR that might need extra scrutiny, a repository with no activity (is the project stalled?), an unusual number of force-pushes (is someone rewriting history?), or a developer who submitted code but received zero reviews in 48 hours.
The goal is not to monitor individuals. It's to give the team a shared, accurate picture of what happened so that conversations shift from "what are you working on?" to "how can I unblock you?"
Daily reports replace the need to manually check GitHub every morning. You open Slack, read the digest, and know exactly what shipped yesterday, what's in review, and what's blocked. No more chasing people for updates. Weekly reports give you sprint-level visibility to share with product and leadership without spending Friday afternoon writing summaries. See how engineering managers use Gitmore.
Weekly reports across all repositories give you the executive view: overall velocity, work distribution trends, team health indicators, and cross-team dependencies. You can answer "how is the Q1 initiative going?" with data instead of asking 4 team leads for subjective updates.
Reports help you track review load distribution, identify PRs that need your attention, and understand how the codebase is evolving. If a junior developer is making changes to a critical service, the report surfaces that early so you can offer guidance before the code reaches production.
When your team spans 3+ timezones, synchronous meetings for status updates are a scheduling nightmare. Automated git reports solve this completely: everyone gets the same report at the same time, regardless of timezone. The report becomes the single source of truth for team activity, and async discussions can reference it directly. See how async teams prepare for standups with automated reports.
A single report is a snapshot. The real value comes from comparing reports over weeks and months. Is your team's velocity increasing, stable, or declining? Are review times getting better or worse? Is the ratio of new features to bug fixes moving in the right direction? These trends are impossible to spot manually but become obvious when you have consistent, automated data collection.
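Trend detection over consistent data can be as simple as comparing the latest period against a baseline. A sketch using weekly merged-PR counts as the velocity proxy (the 10% band is an arbitrary illustrative threshold):

```python
def velocity_trend(weekly_merged_prs):
    """Classify a series of weekly merged-PR counts as increasing,
    declining, or stable by comparing the latest week to the
    average of the preceding weeks."""
    *history, latest = weekly_merged_prs
    baseline = sum(history) / len(history)
    if latest > baseline * 1.1:
        return "increasing"
    if latest < baseline * 0.9:
        return "declining"
    return "stable"
```

The same comparison applied to review times or bug-fix ratios turns a stack of snapshots into the trends the paragraph above describes.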
Each report covers a single repository, which keeps the data focused and actionable. If your team works across multiple repos (frontend, backend, infra), you set up a report for each one. This way, the backend team gets a report relevant to them, and the frontend team gets theirs, without noise from unrelated activity.
Beyond scheduled reports, you can set up event-driven automations. For example: notify the team lead when a PR has been open for more than 72 hours with no review, send a summary when a protected branch receives a push, or generate a report when a milestone is completed. This turns your reporting tool into an engineering operations layer.
Most git reporting tools, including Gitmore, connect to your git provider via OAuth. No code access required, no SSH keys, no YAML files, no CI/CD pipeline changes. You authenticate, select your repositories, configure your schedule and delivery channel (Slack or email), and your first report arrives within hours.
For a complete walkthrough, see our step-by-step setup guide.
Any conversation about tracking developer activity needs to address this directly. There is a meaningful difference between visibility and surveillance, and it comes down to intent and application.
Visibility means the whole team has shared context about what's happening. Everyone sees the same report. It's used to coordinate, unblock, and celebrate. When a report shows that someone's PRs haven't been reviewed, the response is "let me review that today," not "why isn't this done?"
Surveillance is when managers use activity data to judge individual performance based on metrics like commit count or lines of code. This is counterproductive. It incentivizes gaming the metrics rather than doing meaningful work.
The best teams share reports openly, use them as a communication tool rather than an evaluation tool, and focus on team-level patterns rather than individual metrics. The goal is always the same: less time reporting, more time building. See how productivity reports work in practice.
Explore git reporting for your platform
Automated git reports for your engineering team. Set up in 2 minutes, no credit card required.
Get Started Free