
Don't Use AI to Build Faster. Use It to Learn Faster.

Every productivity study in this series points to the same ceiling: modest gains on existing codebases, instability risks, plateauing returns. The real opportunity isn't optimisation. It's experimentation.


Viewpoint

Work through the data from the previous posts and a consistent pattern emerges. AI adoption is near-universal. Productivity gains have plateaued around 10%. AI-written code increases delivery instability. The tools work better for junior developers than senior ones. And the biggest single win, faster onboarding, is real but bounded.

These are actual gains. They’re just not the gains that will determine which companies win the next decade.

The real opportunity from AI isn’t making your existing product 10% faster to build. It’s cutting the cost of learning what to build in the first place.

The J-curve hypothesis

Brynjolfsson’s argument is that we are in the investment phase of a productivity J-curve: organisations have adopted the technology, productivity looks flat at the developer level, and the harvest phase with accelerating returns is still ahead. The macro data from 2025 is at least consistent with this. US productivity growth hit 2.7%, nearly double the previous decade’s average. Q4 GDP growth came in at 3.7%, with strong output despite slower job growth.

But the developer-level data from DX, DORA, and METR doesn’t show a harvest phase arriving yet. The most likely explanation is that the macro gains are coming from non-coding applications (customer service, content production, operations) rather than software development specifically. The 2026 numbers will be more decisive.

Either way, the J-curve raises the right question: if the gains so far are modest, what should you actually be optimising for?


Two ways to point AI at your organisation

Most organisations deploying AI are pointing it at the efficiency problem: how do we build the things we have already decided to build, faster and cheaper?

This is the obvious application of a tool that accelerates code production. It produces the 10% gains the data shows. It also carries the DORA stability risks. And it hits a ceiling, because the constraint on building what you have already decided to build is rarely code generation speed.

The alternative is pointing AI at the discovery problem: how do we find out, faster and cheaper, whether the thing we are thinking about building is actually the right thing?

That changes the economics of experimentation. Historically, running a software experiment has been expensive. You need to spec, design, build, deploy, and measure something before you know if it was worth building. That cost imposes a filter. Only high-confidence bets clear the bar. Low-confidence, speculative ideas don’t get built because the downside of being wrong is too high.

AI collapses that cost. A prototype that would have taken six weeks to build now takes a week. A hypothesis that would have required a full sprint to test can be running in a day. More bets clear the bar. You run more experiments. You find out what works faster.
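
To make the arithmetic concrete, here is a toy expected-value model in Python. Every number in it is an assumption chosen for illustration (a $500k payoff for a validated idea, $10k per developer-week, the six-weeks-to-one prototype from above), not a figure from the research. The logic is simple: an experiment is worth running when its expected payoff exceeds its cost, so the minimum confidence an idea needs is cost divided by payoff.

# Toy model: how cheaper experiments change which ideas clear the bar.
# All numbers are illustrative assumptions, not figures from the cited studies.

def confidence_threshold(cost, payoff):
    # An experiment is worth running when p * payoff > cost,
    # so the minimum success probability that clears the bar is cost / payoff.
    return cost / payoff

PAYOFF = 500_000   # assumed value of a validated product idea
DEV_WEEK = 10_000  # assumed fully loaded cost of one developer-week

for label, weeks in [("pre-AI prototype", 6), ("AI-assisted prototype", 1)]:
    cost = weeks * DEV_WEEK
    p_min = confidence_threshold(cost, PAYOFF)
    print(f"{label}: costs ${cost:,}, worth running if P(success) > {p_min:.0%}")

# pre-AI prototype: costs $60,000, worth running if P(success) > 12%
# AI-assisted prototype: costs $10,000, worth running if P(success) > 2%

The specific numbers don’t matter; what matters is that the viability threshold scales linearly with cost. Cut the cost of a prototype by 6x and you can rationally test ideas you’d previously have needed to be six times more confident about.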


What this means in practice

The DORA data is clear that AI increases instability in mature production systems. That instability is genuinely harmful in a system you are maintaining. It is irrelevant in a system you are testing.

Throwaway prototypes are supposed to be unstable. MVPs are supposed to be rough. The whole point of a disposable experiment is that when it fails (and most experiments fail) you learn something cheaply and move on. DORA’s instability finding is not a problem when the code is disposable by design.

So the allocation question becomes concrete. In your production codebase, apply what the research says: careful review, spec-driven approaches, measurement. AI in mature codebases requires more oversight, not less. On the edges (new product lines, adjacent ventures, speculative features), the instability penalty doesn’t apply. That’s where you let AI run.

The companies that win in an AI-saturated market won’t be the ones who shipped their roadmap 10% faster. They’ll be the ones who figured out their next product 10x cheaper.

[Figure: Where to let AI run vs where to hold it back]

The closing argument

Every data point in this series tells the same story. AI’s gains in existing codebases are real but modest, plateau fast, and come with stability costs. That is the honest summary of what the research shows as of February 2026.

But the same research that shows a 10% productivity ceiling on existing codebases also shows that AI can cut onboarding time in half, a gain that compounds for years. It shows that experienced developers who use AI for exploration rather than execution report the highest time savings. And it shows that the organisations seeing the best outcomes applied AI at the system level, not just at the individual task level.

The ceiling is on optimisation. The floor on discovery hasn’t been found yet.

Don’t use AI to build faster. Use it to learn faster.


Sources: Erik Brynjolfsson, Stanford Digital Economy Lab (Feb 2026); Bureau of Labor Statistics; DX Research (Feb 2026); Google DORA (2024/2025); METR RCT (2025).