The AI Adoption Curve: Why Most Teams Stall at Stage Four
- Sean Robinson

Last month, I had three engineers sitting twenty metres apart. Same team. Same tools. Same access.
One was using AI for tab completion and the occasional "write me a test for this." Another had cut his review cycle roughly in half: custom prompts per task type, architecture docs fed in as context, outputs he could actually trust. The third had working pipelines triggering on PR events, handling first-pass review automatically, flagging drift from architectural decisions before it became anyone's problem.
Same organisation. Same manager, even.
That gap isn't a skills problem. It's an adoption problem, and the uncomfortable part is that it's almost completely invisible until you go looking for it. Nobody broadcasts which stage they're at. The person at stage two thinks they're doing well, because compared to six months ago, they are. They just don't know what stage five looks like from the inside.
I've started mapping this as an AI adoption curve — five stages, each with distinct characteristics, each with its own failure modes. Most organisations have people spread across all five right now. Most of the productivity numbers people are chasing live only at the end of it.
Stage 1: Spark
The tools arrive. ChatGPT. Copilot. Cursor. Whatever your organisation sanctioned, or more likely, whatever people quietly signed up for themselves. Around half of employees currently use AI tools that their employer hasn't approved (BlackFog, 2026). That number almost certainly skews higher in tech.
Nobody really knows what to do with it. People poke at it. Ask it to explain the code. Draft an email. Summarise a meeting. The outputs are impressive enough to generate excitement and inconsistent enough to generate scepticism in roughly equal measure, sometimes in the same conversation.
No shared practice exists. Knowledge is personal. What works for one person stays with that person. The fire metaphor is apt: you've got fire, but you're mostly just staring at it.
The hallmark of Spark is enthusiasm without direction. Everyone agrees it's interesting. Nobody agrees on what to actually do with it.
Stage 2: Drift
This is where most teams spend longer than they should. It's also home to the first of two stagnation traps, and the more dangerous one, precisely because it feels like progress.
Cohesion starts happening. People share tips. Someone figures out that more specific prompts get better results. Prompt engineering enters the vocabulary. A few people move from "write me a function" to "here's the context, here's our pattern, here's what I need." Their results improve noticeably.
The problem is that knowledge transfers through proximity rather than through structure. The person sitting next to the enthusiast improves. The team two floors up doesn't. Good practice spreads socially, not systematically, which means it spreads slowly and unevenly. When someone leaves, they take their prompts with them.
Context windows are a real issue at this stage, and almost nobody talks about them. People get inconsistent results and blame the tool, not realising they're starting from scratch every session with a model that has no memory of their architecture, their conventions, or their previous decisions. The outputs feel random because the input discipline isn't there yet.
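That input discipline doesn't have to be elaborate. A minimal sketch, assuming a repo that keeps files like ARCHITECTURE.md and CONVENTIONS.md (the filenames here are illustrative, not anything from a specific team): instead of opening every session cold, prepend the stable project context to the task automatically.

```python
# Hypothetical sketch of session-level input discipline.
# File names and layout are assumptions for illustration.
from pathlib import Path

def build_session_context(repo_root: str, task: str) -> str:
    """Prepend standing project context to a task, so the model
    never starts a session with a blank slate."""
    root = Path(repo_root)
    sections = []
    # Files that already encode architecture, conventions, decisions.
    for name in ("ARCHITECTURE.md", "CONVENTIONS.md"):
        path = root / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    preamble = "\n\n".join(sections) or "(no project context found)"
    return f"{preamble}\n\n## Task\n{task}"
```

Ten lines of glue, and "the outputs feel random" becomes "the outputs start from the same place every time".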
Nearly 80% of organisations surveyed by McKinsey report regular use of generative AI in at least one function (McKinsey State of AI, 2025). Most of them are in Stage 2 of the AI adoption curve. They feel like they're doing AI. They're mostly just doing faster tab completion.
Stage 3: Fracture
This is the most uncomfortable stage because you're aware enough to know something is off, but the path forward isn't obvious.
Pockets of genuine sophistication start appearing. Someone discovers context engineering: deliberately constructing what you feed the model, not just what you ask it.
Someone else starts using architectural decision records as live context, so the AI operates within your actual constraints rather than generic best practice. A few people begin experimenting with structured multi-agent approaches, treating AI as a set of specialist collaborators working through a problem in sequence.
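The ADR idea is simpler than it sounds. A hedged sketch, assuming a conventional adr/ directory of markdown files with a "Status: Accepted" line (a common ADR convention, not necessarily anyone's actual setup): gather the accepted decisions and fold them into the prompt so the model works within real constraints.

```python
# Hypothetical sketch of ADRs-as-live-context. The directory layout
# and "Status: Accepted" marker are assumed conventions.
from pathlib import Path

def adr_context(adr_dir: str) -> str:
    """Collect accepted ADRs into a single context block,
    skipping drafts and superseded records."""
    records = []
    for path in sorted(Path(adr_dir).glob("*.md")):
        text = path.read_text()
        if "Status: Accepted" in text:
            records.append(f"### {path.stem}\n{text}")
    return "\n\n".join(records)

def constrained_prompt(adr_dir: str, request: str) -> str:
    """Wrap a request in the team's actual architectural decisions."""
    return (
        "Follow these architectural decisions; flag any conflict.\n\n"
        f"{adr_context(adr_dir)}\n\n## Request\n{request}"
    )
```

The point isn't the code; it's that the constraints the team already wrote down become the model's operating envelope, rather than generic best practice.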
The gap across the organisation widens fast, because the people who've found these practices are now materially more productive than those who haven't. This is Moore's chasm applied inward: not whether your organisation adopts AI, but whether your team can cross from surface-level use to substantive capability. Most can't without deliberate help, because nothing about "try this prompt" prepares anyone for "here's how to structure a context that makes the model reliably useful."
Agentic AI starts appearing here, too, but in isolation. Someone builds an impressive pipeline. It doesn't connect to anything. The team mistakes the prototype for the product and wonders why the productivity gains never materialise at scale. That disconnection is a specific failure mode worth naming, because it's everywhere right now.
Fracture is also where measurement breaks down. At Stage 2, the benefit is obvious: you wrote that faster. By Stage 3, AI is involved in enough things that you can't cleanly isolate its contribution. Leadership asks for numbers. Nobody has them. Investment stalls. Sometimes adoption regresses. McKinsey's 2025 data found that only 39% of organisations report any EBIT impact from AI at the enterprise level. That's largely a measurement failure, not a technology failure.
Teams that push through Fracture do it by converting personal practice into shared artefacts: prompt libraries, context templates, and documented patterns. Unglamorous work. Critical work.
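What a shared artefact looks like in its smallest form, sketched with invented template names and fields: a versioned prompt library where each entry declares the context it needs, so a template fails loudly when someone skips the context instead of silently degrading.

```python
# Minimal sketch of a shared prompt library. Template names and
# context fields are invented for illustration.
PROMPT_LIBRARY = {
    "code_review": (
        "You are reviewing a pull request.\n"
        "Architecture notes:\n{architecture}\n"
        "Diff:\n{diff}\n"
        "Flag deviations from our patterns before style nits."
    ),
    "test_writer": (
        "Conventions:\n{conventions}\n"
        "Write tests for:\n{code}\n"
    ),
}

def render(template_name: str, **context: str) -> str:
    """Render a library template, erroring on missing context
    fields rather than emitting a half-filled prompt."""
    template = PROMPT_LIBRARY[template_name]
    try:
        return template.format(**context)
    except KeyError as missing:
        raise ValueError(f"missing context field: {missing}") from None
```

Checked into the repo, this survives the person who wrote it leaving, which is exactly the failure mode of Stage 2's proximity-based spread.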
Stage 4: Fixture
AI is now standard. It's in the process, in the tools, in the job expectations. Engineers use it for code review. Product managers set up context before writing stories. There's a shared prompt library. There are probably some agentic experiments running somewhere.
This feels like the destination. For most organisations, it's where the AI adoption curve stalls permanently.
The problem is that AI at this stage is still an add-on. Something you reach for rather than something woven into the workflow itself. The cognitive overhead of using it well remains high: you still have to remember to load the context, remember to run the check, and remember that the agentic pipeline exists. When things get busy, people skip it.
When a new team member joins, they learn that AI tools exist, but not how to use them properly, because the practice lives in people's heads rather than in the environment.
Only 7% of organisations have fully scaled AI across their operations, according to McKinsey's 2025 survey of nearly 2,000 respondents. The vast majority are here, in Stage 4, feeling like they've arrived while the compounding benefits they were promised remain stubbornly out of reach.
The 5x and 10x productivity numbers don't come from Fixture.
Stage 5: Fusion
This is where the curve bends.
The difference between Stage 4 and Stage 5 isn't better tools or more prompts. It's architecture. AI stops being something engineers use and becomes something the system does. Practices aren't remembered, they're enforced. Context isn't assembled manually; it's constructed automatically at the point of work. Agentic workflows aren't separate experiments; they're load-bearing parts of how work gets done.
In practice, this looks like: pre-populated context injected when work begins, not assembled by the developer. Review agents running on every PR, informed by your actual architectural decisions. Automated pattern detection flagging drift before it becomes debt. Documentation staying current because agents maintain it as a by-product of the work, not as a separate task.
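A hedged sketch of the "review agents on every PR" piece, with the event shape and the model call stubbed out as assumptions (a real setup would wire this to a webhook or CI trigger and an actual model client): the point is that nobody decides to run it, it just runs.

```python
# Hypothetical sketch of a first-pass review triggered on PR events.
# The event dict shape and review_with_model stub are assumptions,
# not a real webhook payload or model API.
def review_with_model(prompt: str) -> str:
    """Stand-in for a real model call (e.g. an LLM API client)."""
    return f"first-pass review based on {len(prompt)} chars of context"

def on_pull_request(event: dict, architecture_notes: str) -> dict:
    """Runs automatically on PR events; assembles context itself
    so no developer has to remember the pipeline exists."""
    if event.get("action") not in {"opened", "synchronize"}:
        return {"skipped": True}
    prompt = (
        f"Architecture:\n{architecture_notes}\n\n"
        f"Diff:\n{event['diff']}\n\n"
        "Flag drift from the decisions above before human review."
    )
    return {"skipped": False, "comment": review_with_model(prompt)}
```

Notice what's absent: no step where an engineer loads context or invokes anything. That absence is the difference between Fixture and Fusion.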
Cognitive load drops significantly. Engineers stop spending mental energy on "am I using AI correctly" and spend it entirely on the actual problem. Junior developers operate closer to senior level because the guardrails and context are structural, not dependent on whoever sits nearby. Adherence to standards improves not because people are more disciplined, but because discipline is built into the environment.
McKinsey defines AI high performers as organisations where more than 5% of EBIT is attributable to AI, with significant reported value overall. That cohort represents about 6% of the 1,993 organisations surveyed. They are, almost without exception, operating at Stage 5. The single strongest predictor of their performance across 25 factors tested? Fundamental redesign of workflows, not adoption of better tools.
That's not a coincidence. That's the curve.
Where are you?
Honestly, most organisations are spread across Stages 2, 3 and 4, with a few individuals at Stage 5 who are largely frustrated that nobody around them is working the same way.
That's not a crisis. It's a diagnosis.
Stages 1 and 2 need shared practice, not more tools or training courses. Someone on your team is already doing this well. Find them. Make their approach structural rather than personal.
Stage 3 needs measurement and artefacts. Build the context templates. Document the patterns. Give the agentic experiments somewhere permanent to land rather than letting them die in a branch.
Stage 4 needs an honest conversation about whether "AI is standard here" means embedded or merely expected. One scales. The other doesn't.
Stage 5 requires intentional engineering work: treating AI integration the same way you'd treat any other infrastructure concern. It's not a sprint task. It's an architectural decision.
The organisations pulling ahead right now aren't using better models than everyone else. They're the ones where Stage 5 is no longer the exception.
That gap compounds every month.
Sources: McKinsey State of AI 2025 (mckinsey.com); BlackFog Shadow AI Research 2026 (blackfog.com)
