The numbers from DX Research’s February 2026 study should feel uncomfortable. Across 121,000 developers at 450+ companies, 92.6% use an AI coding assistant at least once a month. Around 75% use one weekly. Yet the productivity improvement has been stuck at roughly 10%, and that number hasn’t moved in over a year.
Near-universal adoption, modest and plateauing impact. That combination is worth sitting with.
The numbers that tell the story
The plateau is not for lack of usage. Self-reported time savings sit at just under 4 hours per developer per week, which sounds meaningful until you look at the trendline. In Q2 2025 it was 3.6-3.7 hours. In Q4 2025 it was still 3.6-3.7 hours. Adoption went up. Time saved did not.
At the same time, the share of AI-authored code reaching production keeps climbing, from 22% last quarter to 26.9% in Q1 2026; for daily AI users, nearly a third of shipped code is now AI-written. More code is being generated. The productivity needle isn’t moving.
More AI-written code, same productivity. What’s going on?
The 20% problem
AWS survey data offers one explanation. The average developer spends only about 20% of their time actually writing code. The rest (meetings, discovery, design, debugging, code review, planning, compliance, context switching) doesn’t get touched by a coding assistant.
AI has optimised a single slice of the workday and hit its ceiling there. The other 80% remains exactly as slow as it was before. Cut code-writing time in half, apply that only to the 20% of the workday spent writing code, and you save 10% of total time. That maths is not a coincidence.
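The arithmetic above is just Amdahl’s law applied to a developer’s week. A minimal sketch (the 20% coding share and the halving of code-writing time are the figures from the text; the function itself is illustrative):

```python
def overall_time_saved(task_share: float, task_speedup: float) -> float:
    """Fraction of total time saved when one task, occupying `task_share`
    of the schedule, gets `task_speedup`x faster (Amdahl's law, framed
    as time saved rather than overall speedup)."""
    return task_share * (1 - 1 / task_speedup)

# Halving code-writing time (a 2x speedup) on the 20% of the week
# spent writing code:
print(overall_time_saved(0.20, 2.0))   # → 0.1, a 10% overall gain

# Even an infinitely fast assistant caps out at the coding share itself:
print(overall_time_saved(0.20, 1e9))   # just under 0.2: a 20% ceiling
```

The second call is the structural point: no matter how good the assistant gets, gains confined to the coding slice can never exceed the size of that slice.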
The tools were built to autocomplete code. They are very good at that. But “how fast we write code” was never actually the bottleneck.
Where time is actually saved
When you ask developers who save at least an hour a week with AI to name the tasks responsible, the breakdown is:
- Stack trace analysis (~30%)
- Refactoring existing code (~27%)
- Inline completions (~25%)
- Test case generation (~24%)
- Learning new techniques (~19%)
Understanding and maintaining existing code dominates, not writing new code from scratch. The biggest wins are in comprehension and navigation, the part of work most developers find tedious and slow, not the part that makes engineering hard.
Initial scaffolding sits near the bottom at around 15%. The thing AI is most associated with in the public imagination is not what developers find most valuable in practice.
Adoption ≠ Impact
AI adoption has reached 93%. Productivity gain from AI is around 10%. AI-authored code in production is at 27%.
More code is shipping. Developers are using the tools constantly. The productivity of the overall system hasn’t changed much. This isn’t a failure of the tools. It’s a signal about what the tools were built to do.
The next question for software development is not how to get developers to use AI more. It’s what it would mean to actually address the bottlenecks that autocomplete doesn’t touch: the other 80%.
Nobody has a clean answer to that yet.
Data: DX Research — “Measuring Developer Productivity & AI Impact” (Feb 2026), 121,000 developers across 450+ companies. AWS Developer Survey.
The numbers from DX Research’s February 2026 study should be read carefully, because the most important finding is not what the headline says. Across 121,000 developers at 450+ companies, 92.6% use an AI coding assistant at least once a month. Productivity gains sit at around 10%, and that number hasn’t moved in over a year.
What the study is actually documenting is a technology that has found its ceiling.
The plateau is structural, not temporary
A productivity gain that doesn’t grow while adoption goes from 70% to 93% is not “early innings.” It is saturation. If the gains were compounding (if learning curves, improved models, and deeper integration were doing what boosters predicted), we would see the number move. It hasn’t moved for four consecutive quarters.
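One way to see the saturation (a simplifying calculation, not a figure reported by the study): if the roughly 10% aggregate gain scaled linearly with adoption share, the implied gain per adopter would actually be falling as adoption rose.

```python
# Assumption (for illustration only): aggregate gain is proportional
# to adoption share, so per-adopter gain = aggregate gain / adoption.
for adoption, aggregate_gain in [(0.70, 0.10), (0.93, 0.10)]:
    per_adopter = aggregate_gain / adoption
    print(f"adoption {adoption:.0%}: implied gain per adopter {per_adopter:.1%}")
# adoption 70%: implied gain per adopter 14.3%
# adoption 93%: implied gain per adopter 10.8%
```

A flat aggregate number over rising adoption is consistent with each marginal adopter gaining less, which is what saturation looks like.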
The most straightforward explanation is also the correct one: AI coding assistants have captured the gains available to them in their current form, and those gains were smaller than forecast.
Self-reported data deserves scepticism
The 4-hour weekly time savings figure is self-reported. Developer time-estimation is notoriously unreliable. Decades of research on the planning fallacy and psychological ownership effects suggest people overestimate the value of tools they have adopted, particularly expensive ones with status attached.
Objective productivity metrics (lines of code are flawed, but ticket throughput, cycle time, and bug rate can be measured) consistently show smaller gains than self-report. The DX survey methodology, which relies heavily on developer perception, is likely capturing some combination of real savings and confirmation bias.
A 10% figure arrived at through self-report should be treated as an upper bound, not a floor.
The 20% problem has no AI solution
The AWS finding that developers spend only 20% of their time writing code is positioned as a roadmap for future AI tools. The logic runs: AI addressed the 20%, so AI will address the 80%. This is not how bottlenecks work.
The other 80% (meetings, alignment, organisational decision-making, managing dependencies, navigating stakeholder constraints) is slow for reasons that are not technical. It is slow because organisations are complex, people disagree, priorities shift, and building shared understanding takes time. None of those problems are amenable to autocomplete. The suggestion that “agentic AI” will solve organisational friction is not supported by evidence; it is a pitch.
27% AI-authored production code is a quality question, not a success metric
The growth in AI-authored code reaching production is treated in most coverage as an indicator of increasing AI capability. It is also an unresolved liability.
What is the defect rate of AI-authored code relative to human-authored code in production? Can engineers actually understand and maintain it at scale? What does 30% AI-authored code look like in a codebase five years from now when the original context is gone?
These questions are not being systematically answered. The adoption of AI-generated code is outrunning the tooling and practices needed to understand its long-term quality implications.
Where this likely ends
The most probable outcome is that AI coding assistants become standard infrastructure: present everywhere, generating modest and stable productivity improvements, roughly analogous to IDE autocomplete a decade ago. Useful. Not transformative.
The gains available in the 20% of work that involves writing code have largely been captured. The remaining 80% will not be unlocked by better autocomplete, better models, or better agents, because the constraint is not technical capability. The constraint is that software development is a human coordination problem dressed in code.
Nobody has a clean answer to that yet, because there isn’t one.
The numbers from DX Research’s February 2026 study are, depending on how you read them, either a disappointment or a proof of concept. Across 121,000 developers at 450+ companies, 92.6% use an AI coding assistant at least once a month. Productivity is up around 10%. Critics call this a paradox. It isn’t. It’s a baseline.
Ten percent is not a ceiling. It’s where you are when a technology is 18 months old and still primarily bolted onto existing workflows rather than reshaping them.
The 10% in context
A 10% productivity improvement across a 121,000-person sample is not a rounding error. It is an enormous aggregate number. If the average developer salary in this cohort is $150,000, a 10% productivity gain represents $15,000 of output per person per year. At scale, that compounds into billions of dollars of engineering capacity that didn’t previously exist.
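The back-of-envelope version of that claim (the salary and sample figures are from the text; treating 10% of salary as a proxy for the value of the extra output is the illustrative assumption):

```python
developers = 121_000    # DX study sample size
avg_salary = 150_000    # assumed average salary, per the text
gain = 0.10             # measured productivity improvement

per_dev_value = avg_salary * gain        # ≈ $15,000 per developer per year
aggregate = developers * per_dev_value   # value across the whole sample
print(f"${per_dev_value:,.0f} per developer, "
      f"~${aggregate / 1e9:.1f}B across the sample per year")
```

Roughly $1.8B per year across this one sample alone, before extrapolating to the wider industry.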
The criticism that gains have “plateaued” confuses a measurement period with a structural limit. The plateau is in one specific tool category (inline coding assistants), not in AI-assisted development broadly. That category has hit a local maximum. The category that comes next has not.
The 20% insight is a roadmap, not a verdict
The AWS finding that developers spend only 20% of their time writing code is framed as a constraint. Read differently, it is a precise map of where the next $100B of AI investment is going.
Meetings, discovery, design, debugging, code review, planning: these are exactly the workflows that agentic AI systems are being built to enter. The reason gains are at 10% and not 40% is not that AI is limited; it’s that the tools available in 2024-2025 were point solutions for a single slice of the workday. That is changing faster than any previous enterprise software cycle.
The 20% problem is the product roadmap for the next three years.
27% of production code is AI-authored — and rising
The share of AI-authored code reaching production grew from 22% to 26.9% in a single quarter. That’s not a plateau metric. It’s a growth metric on an already large base. For daily AI users, nearly a third of shipped code is now AI-generated.
This matters beyond productivity statistics. The humans in this system are increasingly doing architecture, review, judgment, and direction, while AI handles implementation. That is not a marginal change. It’s a different industry emerging in real time.
What comes after autocomplete
The current generation of AI coding tools optimised for the easiest target: filling in code. They succeeded. The productivity gains are real, the adoption is real, and the plateau is real, because they solved the problem they were designed to solve.
The question “what addresses the other 80%?” already has answers in development. AI pair programmers that participate in design reviews. Agents that triage and characterise bugs before a human touches them. Tools that synthesise meeting outcomes into updated specs. Systems that review PRs with the context of the full codebase.
None of these are speculative. They are in closed beta and early access at large engineering organisations right now.
The paradox is that we are at 10% and calling it a ceiling, when the tools designed for the other 80% haven’t shipped yet.