For Engineering Managers
GitVelocity gives engineering managers something they have never had before: an objective, consistent measure of the complexity their teams ship. Used well, it replaces gut-feel conversations with data-backed ones. Used poorly, it becomes just another metric people game instead of doing their best work.
This page covers how to roll out GitVelocity to your team and get the most out of it.
Getting Started with Your Team
Before sharing GitVelocity with anyone, set yourself up for success:
- Connect your repositories and run a historical backfill. Three or more months of data is recommended so you have enough history for trends to be meaningful.
- Look at the data yourself first. Spend time exploring the dashboard, reviewing individual PR scores, and understanding what gets high and low scores on your team. You should be able to explain the scoring system before anyone else sees it.
- Understand the scoring system. Read through How Scoring Works and The Six Dimensions so you can answer questions from your team. Knowing how the Effort Scale Factor works is especially important -- it comes up often.
Rolling Out to Engineers
Once you understand the data, share GitVelocity with your team:
- Let engineers see their own scores. Transparency is the foundation. Engineers can see exactly which PRs were scored, what dimensions contributed to each score, and what was excluded. This visibility builds trust.
- Encourage feedback early. Ask your team to review their scores and flag anything that looks off. Does a trivial PR have an unexpectedly high score? Does a complex piece of work seem underscored? That feedback is valuable.
- Get input on what should and should not be counted. Engineers know better than anyone which files are boilerplate, which PRs are reverts, and which patterns are auto-generated. Their input makes the scoring more accurate for your specific codebase.
The goal is to make GitVelocity a tool the team finds useful, not something imposed on them.
Customizing Scoring for Your Team
GitVelocity's Review Settings let you fine-tune what gets scored:
- Revert PRs. Engineers can tag PRs with a "revert" label, and these will be excluded from scoring. Reverts are not meaningful complexity work and should not count.
- Auto-generated and codegen files. Use gitignore-style patterns to exclude files that are generated by tools -- think lock files, schema dumps, or codegen output. These inflate scores without reflecting real engineering effort.
- Other exclusions. If your team identifies additional patterns that are noise rather than signal, add them to the exclusion list.
Getting engineer feedback on exclusions is critical. They know what is boilerplate versus real work in your codebase. You can also customize the scoring prompt itself, but it is better to gather team input before making changes -- the default prompt works well for most teams.
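To see how gitignore-style patterns behave, here is a rough Python sketch using `fnmatch`. This is a simplification -- real gitignore semantics differ in details like `**` and negation -- and the patterns below are hypothetical examples, not GitVelocity defaults:

```python
from fnmatch import fnmatch

# Hypothetical exclusion patterns -- substitute whatever your team
# identifies as noise in your own codebase.
EXCLUDE_PATTERNS = [
    "*.lock",            # lock files (Cargo.lock, yarn.lock, ...)
    "*_pb2.py",          # protobuf codegen output
    "migrations/*.sql",  # auto-generated schema dumps
]

def is_excluded(path: str) -> bool:
    """Return True if a file path matches any exclusion pattern."""
    return any(fnmatch(path, pattern) for pattern in EXCLUDE_PATTERNS)

print(is_excluded("Cargo.lock"))            # lock file -> excluded
print(is_excluded("proto/service_pb2.py"))  # codegen -> excluded
print(is_excluded("src/service.py"))        # real code -> kept
```

Running candidate paths through a check like this with your team is a quick way to validate an exclusion list before applying it.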
Understanding What Gets Scored
Two important details to communicate to your team upfront:
Only PRs merged to the default branch are scored. GitVelocity watches for merges to main (or whatever your primary branch is). PRs merged to feature branches are not scored.
Stacked or chained PRs have specific behavior. If your team uses stacked PRs (where PR A merges into PR B, which merges into PR C, which finally merges into main), only that final merge to main gets scored. The intermediate merges are not evaluated. This means the final merge's single score will generally not equal the sum of what the individual PRs would have scored separately. In practice, this does not meaningfully affect anyone's performance picture when you look at trends over time, but it is worth explaining so engineers understand what they see in the data.
See the FAQ for a more detailed discussion of stacked PR scoring.
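The default-branch rule above can be sketched in a few lines of Python. The event shape here is hypothetical for illustration, not GitVelocity's actual data model:

```python
# Only merges whose base is the default branch are scored;
# intermediate merges in a stack are skipped.
DEFAULT_BRANCH = "main"

merges = [
    {"pr": "A", "base": "feature-b"},  # stacked: A merged into B
    {"pr": "B", "base": "feature-c"},  # stacked: B merged into C
    {"pr": "C", "base": "main"},       # final merge to the default branch
]

scored = [m["pr"] for m in merges if m["base"] == DEFAULT_BRANCH]
print(scored)  # only the final PR in the stack is evaluated
```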
Focus on Trends, Not Individual Scores
A single PR score tells you very little. A 12 might be a quick but critical bug fix. A 55 might be a sprawling refactor that was overdue. Neither number, in isolation, says anything about the engineer who wrote it.
What matters is the trend. Look at rolling averages over weeks and months:
- An engineer whose average complexity is steadily climbing is taking on harder problems.
- A team whose total velocity is declining may be blocked by technical debt, organizational overhead, or unclear requirements.
- Month-over-month patterns show how individuals and the team are actually doing. Top performers and those who need support stand out in the trends, not in any single score.
Do not set score targets. Scores measure the complexity of work that ships, not performance. If you tell engineers to hit a score target, they will optimize for score instead of optimizing for the best way to solve the problem. A team that consistently ships low-complexity PRs might be doing exactly the right thing -- small, focused changes that are easy to review and low-risk to deploy.
Use scores as conversation starters in 1:1s, not as grades. "I noticed your complexity has been lower the past two weeks -- what have you been working on?" opens a productive conversation. Maybe the engineer was onboarding, doing code reviews, or unblocking teammates with small but critical fixes. The score gives you something concrete to discuss; the conversation gives you the full picture.
Never use a low score as evidence of low performance. Scores do not capture code reviews, mentorship, design work, incident response, or any of the other ways engineers create value.
Achievements and Leaderboards
GitVelocity includes achievements and leaderboards that can help build positive team dynamics:
- Use achievements to recognize consistency and specializations. Achievements highlight sustained patterns -- not just one-off high scores. An engineer who consistently ships quality work in a specific area earns recognition for that sustained contribution.
- Leaderboards highlight sustained output. They reward engineers who deliver consistently over time, not just those who land a single large PR. This encourages better habits than chasing individual high scores.
- Build team morale. When used well, achievements and leaderboards create friendly motivation and help the team celebrate wins together.
Frequently Asked Questions
Should I use scores in performance reviews?
Use trends, not individual scores. A single PR score is meaningless in isolation. But rolling averages over weeks and months reveal real patterns -- engineers taking on harder problems, teams improving their output, or capacity issues that need attention. Scores are a conversation starter, not a grade.
What if my team pushes back?
Start by letting engineers see their own data and ask questions. Transparency builds trust. Emphasize that scores measure complexity, not performance or value. Get their input on what should be excluded (reverts, codegen files, lock files). When engineers feel ownership over the tool, resistance drops.
How long before trends are meaningful?
Run a historical backfill of 3+ months if possible. Without backfill, plan on 4-6 weeks of forward data before patterns emerge. Week-to-week noise is normal -- month-over-month trends are where the real signal lives.