
The AI Adoption Visibility Problem: How Do You Know Who's Actually Using AI?

You bought Copilot seats for your entire team. Some engineers doubled their output. Others didn't change at all. You can't tell which is which. Here's how to fix that.

Here's a conversation I've had with at least a dozen CTOs across Headline's portfolio in the last six months:

"We rolled out Copilot to the whole team. Some people love it. Some people ignore it. I have no idea what the actual impact is."

The details vary — sometimes it's Cursor instead of Copilot, sometimes Claude Code, sometimes a mix — but the frustration is identical. They've invested in AI tools. They believe the tools work. They cannot prove it.

This is the AI adoption visibility problem, and it's one of the most pressing issues in engineering leadership right now.

The Shadow AI Problem

The first challenge is that AI adoption is invisible by default.

There's no centralized dashboard that shows you which engineers are using AI tools, how they're using them, and whether it's working. Your tool vendor might tell you how many seats are active, but seat utilization tells you nothing about effectiveness. An engineer who opens Copilot once a day and ignores every suggestion counts the same as one who's using it to ship twice as fast.

Meanwhile, engineers are adopting AI tools in completely different ways:

The power users have restructured their workflow around AI. They use agentic tools for scaffolding, iterate with AI on implementation, and spend their time on architecture and code review. Their output has transformed.

The dabblers use AI occasionally — accepting a suggestion here, generating a test there. They're marginally faster but haven't fundamentally changed how they work.

The skeptics tried AI tools, found them unhelpful for their specific domain or workflow, and stopped. Or they never tried.

The shadow adopters are using AI tools that the company didn't provision — personal ChatGPT accounts, Claude via browser, tools the company doesn't even know about.

As a leader, you can't see any of this. Surveys don't help — engineers self-report optimistically or defensively depending on the political winds. Tool vendor reports measure activity, not productivity. Manager observation is anecdotal and biased.

Why Mandates Don't Work

The instinct is to mandate AI usage: "Everyone must use Copilot. It's company policy." This fails for three reasons:

You can't force effective adoption. An engineer can have Copilot open and ignore every suggestion. Mandate compliance, and you get compliance theater. The tool is running. The workflow hasn't changed.

Different work benefits differently. An engineer writing CRUD endpoints gets massive value from AI tools. An engineer debugging distributed systems race conditions gets less. Mandating the same tool usage for both misses the point.

Resistance hardens. Engineers who are pushed toward tools they don't find helpful become more resistant, not less. The mandate becomes a source of resentment rather than a catalyst for adoption.

What works instead is making the results of AI adoption visible. Not tracking whether engineers have the tool open — tracking whether their output has changed.

The Indirect Signal

Here's the insight: you don't need to measure AI usage directly. You need to measure output, and AI adoption shows up in the data.

When an engineer starts using AI tools effectively, a predictable pattern emerges in their velocity data:

Weeks 1-2: Velocity stays roughly the same. They're learning the tool — figuring out what it's good at, what prompts work, how to integrate it into their existing workflow. There's a learning overhead that temporarily offsets any gains.

Weeks 3-4: Velocity increases 20-40%. They've integrated AI into routine tasks. Boilerplate generation, test scaffolding, and pattern-based implementation are now faster. The workflow is starting to shift.

Month 2+: Velocity stabilizes at a new, higher baseline. They're not just doing the same work faster — they're taking on more complex work because AI handles the implementation details. The nature of their contribution has changed.

This pattern is visible at the individual level in velocity data. You don't need to know which tool they're using or how often they open it. You can see the result in what they ship.
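
If you already compute a weekly output score per engineer, detecting this pattern reduces to a baseline comparison. Here is a minimal sketch of that heuristic in Python; the function name, the score format, the two-week learning window, and the 20% threshold are all illustrative assumptions, not a description of any particular product's algorithm.

```python
from statistics import median

def detect_velocity_shift(weekly_velocity, rollout_week, min_uplift=1.2):
    """Flag a sustained velocity uplift after an AI tool rollout.

    weekly_velocity -- per-week output scores for one engineer
    rollout_week    -- index of the week the tool became available
    min_uplift      -- new-baseline / old-baseline ratio that counts
                       as a real shift (1.2 = 20% higher)
    """
    LEARNING_WEEKS = 2  # weeks 1-2 carry learning overhead; skip them

    before = weekly_velocity[:rollout_week]
    after = weekly_velocity[rollout_week + LEARNING_WEEKS:]
    if len(before) < 4 or len(after) < 4:
        return False, None  # not enough data for stable baselines

    old_baseline = median(before)
    if old_baseline == 0:
        return False, None

    ratio = median(after) / old_baseline
    return ratio >= min_uplift, ratio
```

Medians rather than means keep a single outlier week (a big refactor, a vacation) from masquerading as adoption; the point is a sustained shift in baseline, not a spike.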

What We Saw at Headline

When we tracked this internally, the data told a clear story.

Our team's aggregate productivity nearly doubled from August to November 2025. But the aggregate masked significant individual variation:

The early adopters showed the velocity increase pattern described above. Their output ramped over 3-4 weeks and then stabilized at a new baseline 40-80% higher than before.

The gradual adopters showed a slower ramp. Their velocity increased over 6-8 weeks as they experimented with different tools and workflows. The eventual increase was similar — they just took longer to get there.

The non-adopters showed flat velocity over the same period. Not declining — just flat. They were doing the same work at the same pace. In absolute terms, nothing was wrong. But relative to their AI-adopting peers, a gap was forming.

The most surprising finding: some junior engineers showed the largest velocity increases. AI tools had compressed the implementation skill gap between junior and senior engineers. Juniors who adopted AI aggressively could ship complex work that would have been beyond their reach without AI assistance. They were still learning architecture and system design, but the raw implementation was no longer a bottleneck.

Without output measurement, none of this would have been visible. We'd have had anecdotes and gut feelings. Instead, we had data.
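
Under the same illustrative assumptions as the earlier sketch, bucketing engineers into those three groups is a small extension. Again, this is a heuristic for exploring your own data, and the four-week and eight-week windows are placeholders, not calibrated thresholds:

```python
from statistics import median

def classify_adopter(weekly_velocity, rollout_week, min_uplift=1.2):
    """Bucket one engineer's trajectory: 'early', 'gradual', or 'flat'."""
    before = weekly_velocity[:rollout_week]
    if len(before) < 4 or median(before) <= 0:
        return "insufficient-data"
    baseline = median(before)

    def uplifted_by(offset):
        # Does the four-week window starting `offset` weeks after
        # rollout sit at least `min_uplift` above the old baseline?
        window = weekly_velocity[rollout_week + offset:rollout_week + offset + 4]
        return len(window) == 4 and median(window) / baseline >= min_uplift

    if uplifted_by(2):   # shift visible within the first month
        return "early"
    if uplifted_by(6):   # shift visible within two months
        return "gradual"
    return "flat"
```

Note that the buckets key off when the uplift appears, not how large it is, which matches what we saw: early and gradual adopters landed in roughly the same place, just on different timelines.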

From Visibility to Action

Visibility is only useful if it drives action. Here's what becomes possible when you can actually see AI adoption patterns:

Targeted coaching, not blanket mandates. Instead of "everyone must use AI," you can identify which engineers would benefit from support and pair them with power users. The engineer whose velocity is flat might just need a pairing session showing them how agentic tools work on their specific type of work.

Investment validation. "Is our AI investment paying off?" becomes an answerable question. You can see the aggregate velocity trend before and after tool rollout. You can quantify the output increase. You can report to your board with actual numbers instead of vendor-provided seat counts.

Team composition insights. If some engineers are shipping at 2x velocity with AI tools and others are flat, that informs resourcing decisions. Not to punish anyone — but to understand capacity realistically. Two AI-enabled engineers might produce the output of three without AI. That changes how you plan projects and allocate people.

Adoption strategy refinement. You can see which teams adopted AI fastest and investigate why. Was it the team lead? The type of work? The specific tool? Data lets you replicate what works rather than guessing.
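
For the board-level version of that question, the arithmetic is deliberately simple. A sketch, reusing the same hypothetical per-engineer weekly scores; a real analysis would also control for headcount changes, seasonality, and project mix:

```python
from statistics import median

def aggregate_uplift(team_weekly, rollout_week, learning_weeks=2):
    """Ratio of team output after a tool rollout to output before it.

    team_weekly -- dict mapping engineer name -> list of weekly scores
    """
    # Sum each week's output across the whole team.
    weeks = min(len(scores) for scores in team_weekly.values())
    totals = [sum(scores[w] for scores in team_weekly.values())
              for w in range(weeks)]

    before = totals[:rollout_week]
    after = totals[rollout_week + learning_weeks:]  # skip learning weeks
    if len(before) < 4 or len(after) < 4:
        raise ValueError("need at least 4 weeks on each side of the rollout")

    return median(after) / median(before)  # e.g. 1.9 ~= "nearly doubled"
```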

The Anti-Pattern: Surveillance Instead of Signals

I want to be clear about what this isn't.

This isn't keystroke logging to see when engineers type vs. when Copilot generates. This isn't screenshot monitoring to see which tools are open. This isn't tracking how many AI suggestions are accepted.

All of those are surveillance. They measure behavior, not output. They destroy trust and generate noise.

What we're describing is measuring the same thing we'd measure regardless of AI: what code shipped and how complex it was. The AI adoption signal is a natural byproduct of output measurement. If an engineer's shipped output increases over a few weeks, something changed. If that change correlates with AI tool availability, you have a reasonable inference.

The engineer doesn't need to report their AI usage. Their manager doesn't need to watch them work. The code speaks for itself.

The Window for Action

Here's the uncomfortable truth: the gap between AI-adopting and non-adopting engineers is widening every month. The tools are getting better. The engineers who use them are getting better at using them. The engineers who don't are falling further behind.

Six months from now, the gap will be harder to close. A year from now, it might define who's on your team.

You can't fix a problem you can't see. Step one is visibility: objective, output-based measurement that shows you what your team is actually shipping. AI adoption patterns emerge naturally from that data.

The CTOs I talk to aren't asking whether AI tools matter. They're asking how to know whether their investment is working. The answer isn't in tool vendor dashboards. It's in what your team actually ships.


GitVelocity measures engineering velocity by scoring every merged PR using AI. AI adoption shows up naturally in the output data — no surveillance required.

See how it works.

Written by Conrad Chu

Conrad is CTO and Partner at Headline, where he leads data-driven investment across early stage and growth funds with over $4B in AUM. Before becoming an investor, he founded Munchery (raised $130M+) and held engineering and product leadership roles at IAC and Convio (IPO 2010). He and the Headline engineering team built GitVelocity to help engineering organizations roll out agentic coding and measure its impact.