By David Brainard, CTO of EverQuote.
Every AI company is selling intelligence. None of them can keep it.
A frontier model leads for weeks, maybe months. Then a cheaper one catches up through distillation. OpenAI ships something big. Two months later, a model half the size matches it on most benchmarks. Anthropic ships Opus. The same cycle plays out. If your business depends on having the smartest model, you’re betting on a lead that evaporates on a schedule.
So where does the real lock-in come from?
Intelligence has a shelf life
Set aside AGI. If someone cracks it, all bets are off and none of this matters. Short of that, the pattern is predictable: a lab releases a frontier model, enjoys a brief lead, and then distillation compresses the gains into something smaller and cheaper. What took a year of catch-up in 2023 takes two months now.
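Distillation is worth making concrete. Below is a minimal sketch of the core idea, using made-up logits and no real models: the student is trained to match the teacher's temperature-softened output distribution, which is how much of a large model's behavior gets compressed into something smaller and cheaper.

```python
# Minimal sketch of the distillation objective (soft-target matching).
# Logits here are illustrative; a real pipeline would average this loss
# over a large corpus and backpropagate through the student.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is from the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.0, 0.5]
student_logits = [3.5, 1.2, 0.4]

# A higher temperature softens both distributions, exposing the teacher's
# relative preferences among non-top answers, not just its argmax.
loss = kl_divergence(softmax(teacher_logits, 2.0),
                     softmax(student_logits, 2.0))
```

Minimizing this loss across enough prompts is what lets a half-size model recover most of a frontier model's benchmark performance.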
This is the commoditization curve every hardware company has lived through. Intel led on clock speed for years. AMD caught up. ARM changed the game entirely. Model intelligence is on the same path, just faster. Being the best for a quarter doesn’t build a durable business.

Maybe the moat is in the software. Claude Code is genuinely good. The MCP protocol is clever. The integrations keep expanding: Gmail, Drive, Slack, your codebase, your calendar. Every new connector makes the experience stickier.
But the same tools Anthropic sells to help you write software can be turned back onto themselves. You can use Claude Code to fast-follow Claude Code. You can use an agentic coding pipeline to rebuild an agentic coding pipeline. The tools are not defensible because the tools make replication easy.
This is a strange property of AI developer tools that no other enterprise category shares. They’re self-undermining. The better they get at building software, the easier it becomes to build their replacement. Salesforce can’t be used to rebuild Salesforce. Claude Code could plausibly be used to rebuild Claude Code. Every competitor with API access to a frontier model can spin up a coding agent and start cloning features. The barrier to entry is a weekend and an API key.

The API gap nobody’s explaining
Engineers notice something odd. Claude’s inference through its integrated tools feels sharp, like talking to the best version of the model. Claude’s inference through the API, from what is ostensibly the same model, is noticeably worse.
This gap benefits Anthropic. If the API felt as good as the integrated experience, you’d have less reason to use their tools. You’d build your own. Whether the gap is intentional, a side effect of system prompts, or just a difference in context management doesn’t change the result: the walled garden outperforms the open field.
But even if Anthropic wins the tools war, even if every developer runs Claude Code and every executive lives inside Claude’s chat, that’s still thin lock-in. Cheaper models keep closing the gap. Eventually, you don’t need the premium integrated experience to get your work done. Switching is annoying, but possible.
Data gravity is the real play
The deeper move is owning your assets. Not just your prompts or your preferences, but your data. Your infrastructure. The ways you mutate your data, the ways you interact with your systems, the workflows you’ve built on top of their connectors.
If Anthropic gets there, or partners with someone who can, switching gets hard. Really hard.
Consider what they already touch through Claude Code and MCP integrations. Your codebase. Your commit patterns. Your architecture decisions. Your bug priorities. Now add your production data, your customer interactions, your internal documents. Each integration is another thread of gravity.
We learned this in the cloud era. Data gravity is real. Once your data lives somewhere, the cost of moving it is technical, organizational, and political. I’ve migrated clouds. It’s the kind of cost that makes a CTO say “let’s do it next quarter” for three years running.

Claude Cloude
Here’s the logical endpoint. If Anthropic has your data, can see how it changes, and can see the prompts you use to change it, they have a near-complete picture of your business. What you build. What you fix. What you prioritize. What you ignore.
At that point, they basically are your business.
It would not surprise me if Anthropic or OpenAI moves toward becoming a cloud provider. A platform that hosts your data, runs your workloads, and mediates your interaction with your own infrastructure. Not an AI company that also does cloud. A cloud provider that happens to have the best AI, because it trained on everything you did inside its walls.
That is lock-in.
We accepted this deal before
From a CTO’s perspective, this might not be the wrong move. We’ve been here before.
Most of us eventually accepted lock-in with a cloud provider. We made peace with it because we understood the trade. We offloaded the undifferentiated work: servers, patches, scaling. None of that was our competitive advantage. We handed it over because what we kept was products and features. The value lived above the abstraction layer, and that made the lock-in tolerable.

An intelligence layer is different. When the AI provider sees your prompts, your data, and your decision patterns, the abstraction layer moves up. Way up. What does differentiation look like when your AI vendor understands your business as well as you do?
I don’t have the answer. But every business should be working through that question before the lock-in is already in place.
The argument that AI companies will become cloud providers through data gravity is neat, logical, and potentially wrong. It assumes the cloud playbook transfers cleanly to AI. That assumption deserves more scrutiny than it gets.
Intelligence commoditizes, but that cuts both ways
The observation that frontier models depreciate quickly is correct. But this works against the lock-in thesis, not for it. If intelligence commoditizes rapidly, then the AI provider itself is interchangeable. You’re not locked into Anthropic because their model is best. You’re using whichever model works this month. That’s the opposite of lock-in.
The hardware comparison (Intel to AMD to ARM) actually tells a different story than intended. The winners in that cycle weren’t the companies with the best chips. They were the companies that built ecosystems: Apple with its silicon-software integration, NVIDIA with CUDA’s developer network. Neither succeeded through data gravity. They succeeded through developer investment and architectural coupling. If AI lock-in happens, it’s more likely to come through those channels than through data hosting.

“Claude Code could rebuild Claude Code” sounds alarming. In practice, coding assistants are good at generating boilerplate and connecting APIs. They are not good at replicating the product judgment, testing infrastructure, reliability engineering, and user research that make a developer tool production-grade.
If self-replication were that easy, we’d already see dozens of Claude Code clones dominating the market. We don’t. The tooling moat isn’t the code itself. It’s the compound effect of usage data, user feedback loops, and the iteration speed that comes from a large active user base. That’s a network effect, and network effects are real moats, even if the underlying code is theoretically reproducible.

The API gap has simpler explanations
The observation that Claude feels better through integrated tools than through the API is real. But attributing it to strategic intent misses more likely explanations. Integrated tools ship with carefully tuned system prompts, pre-configured context windows, and optimized retrieval pipelines. The API gives you a raw model and expects you to do that work yourself.
This is the same gap between using Photoshop and calling the ImageMagick CLI. The product is better than the primitive. That doesn’t mean Adobe is deliberately degrading ImageMagick to trap you into Creative Cloud. The simpler explanation, that building a good product layer on top of a model takes engineering effort, is also the more plausible one.
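The gap between product and primitive is mostly request assembly. Here is a rough sketch of the difference, with hypothetical prompt text and helper names rather than any actual Anthropic internals: the integrated tool wraps your prompt in a tuned system prompt and curated context before the model sees it, while the raw API sends exactly what you give it.

```python
# Illustrative only: the "integrated tool" advantage as request assembly.
# Prompt text and document formatting below are invented for the sketch.

def raw_request(user_prompt: str) -> dict:
    """What a naive API caller sends: just the bare user turn."""
    return {"messages": [{"role": "user", "content": user_prompt}]}

def product_request(user_prompt: str, retrieved_docs: list[str]) -> dict:
    """What a product layer sends: tuned system prompt plus curated context."""
    system = "You are a careful coding assistant. Cite files you reference."
    context = "\n\n".join(f"<doc>{d}</doc>" for d in retrieved_docs)
    return {
        "system": system,
        "messages": [
            {"role": "user",
             "content": f"Relevant context:\n{context}\n\nTask: {user_prompt}"},
        ],
    }
```

Same model on the other end; the second request simply arrives with the engineering work already done.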
Data gravity requires data hosting
Here’s the structural problem with the Claude Cloude thesis. Anthropic doesn’t host your data. Cloud providers achieved data gravity because they stored your data. Your databases ran on their servers. Your files lived in their object stores. Moving that data had real bandwidth costs and migration complexity.
Anthropic sees your data transiently. It passes through during inference and, in most configurations, isn’t retained. Observing your commit patterns through a Claude Code session is not the same as hosting your production database. Gravitational pull requires mass, and transient inference doesn’t accumulate mass the way persistent storage does.
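The mass in question is easy to put numbers on. A back-of-envelope sketch, with illustrative figures rather than any provider's actual pricing: moving a few hundred terabytes out of a cloud is measured in days of sustained transfer and tens of thousands of dollars in egress fees, which is exactly the friction a transient inference stream never creates.

```python
# Back-of-envelope: time and egress cost to move a dataset off a cloud.
# All numbers are illustrative assumptions, not any provider's real pricing.

def migration_estimate(data_tb: float, link_gbps: float, egress_per_gb: float):
    """Return (days_to_transfer, egress_cost_dollars) for a sustained transfer."""
    data_gb = data_tb * 1000          # decimal TB -> GB
    data_gbits = data_gb * 8          # GB -> gigabits
    seconds = data_gbits / link_gbps  # ideal sustained throughput
    days = seconds / 86400
    cost = data_gb * egress_per_gb
    return days, cost

# 500 TB over a dedicated 10 Gbps link at $0.05/GB egress (assumed rates):
days, cost = migration_estimate(data_tb=500, link_gbps=10, egress_per_gb=0.05)
```

Roughly four and a half days of saturated bandwidth and $25,000 in egress, before any re-architecture work. That is what storage gravity looks like; inference traffic leaves no such bill behind.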
For the Claude Cloude thesis to work, Anthropic would need to leap from inference provider to infrastructure provider. That’s not an extension of their current business. It’s a completely different business, and it’s one where AWS, Azure, and GCP have a fifteen-year head start, hundreds of billions in invested capital, and deep enterprise relationships.

CTOs learned from the cloud era
The argument assumes CTOs will sleepwalk into AI lock-in the way they sleepwalked into cloud lock-in. That’s a questionable assumption. The cloud migration era taught an entire generation of technical leaders exactly how data gravity works and what it costs to escape.
Multi-cloud strategies exist specifically because of this lesson. Most enterprises now run workloads across two or more providers. Multi-LLM strategies are following the same pattern: companies route prompts through different models depending on task, cost, and quality requirements. The abstraction layers that prevent single-vendor dependency (LiteLLM, LangChain, custom routers) are already standard practice in serious engineering organizations.
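The routing layer itself is small. Here is a minimal sketch of a task-based router, with hypothetical model names, prices, and quality tiers: pick the cheapest model that clears the task's quality floor, and no single vendor becomes load-bearing.

```python
# Minimal multi-LLM router sketch. Model names, costs, and quality tiers
# are invented for illustration; a real catalog would come from config.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_mtok: float   # dollars per million tokens (assumed)
    quality: int           # rough capability tier, 1 (low) to 5 (frontier)

CATALOG = [
    Model("frontier-large", 15.00, 5),
    Model("mid-tier", 3.00, 4),
    Model("small-distilled", 0.25, 2),
]

def route(quality_floor: int, budget_per_mtok: float) -> Model:
    """Cheapest model that meets the quality floor within budget."""
    candidates = [m for m in CATALOG
                  if m.quality >= quality_floor
                  and m.cost_per_mtok <= budget_per_mtok]
    if not candidates:
        # Relax the quality floor rather than blow the budget.
        candidates = [m for m in CATALOG if m.cost_per_mtok <= budget_per_mtok]
    return min(candidates, key=lambda m: m.cost_per_mtok)
```

Libraries like LiteLLM generalize this pattern behind a single API surface, but the dependency-avoidance logic really is this simple, which is part of why single-vendor gravity is harder to establish at the inference layer.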
The generation of CTOs making AI adoption decisions today grew up watching colleagues get trapped by cloud vendors. They’re not likely to repeat the same mistake through the same mechanism.

The more likely outcome
The Claude Cloude thesis works if AI companies can transition from inference providers to infrastructure providers while maintaining their intelligence advantage. That’s two hard things at once. It requires winning in a market (cloud infrastructure) where they have no competitive advantage, against incumbents who have spent a decade and hundreds of billions building the dominant platforms.
The more probable path is that AI inference becomes a commodity service running on existing cloud providers. That’s already happening. AWS Bedrock, Azure OpenAI, and GCP Vertex AI host multiple frontier models today. The cloud providers may end up owning the AI lock-in, not the other way around. The gravity already lives in their data centers.
The AI moat debate is over. Intelligence commoditizes. Tooling self-replicates. The only defensible position is data gravity, and the companies that don’t see it coming are already locked in.
Intelligence was never the play
Frontier models depreciate like luxury cars. Full price today, half tomorrow. A lab ships Opus-class reasoning. Two months later, a distilled model matches it on most benchmarks at a fraction of the cost. This cycle has repeated five times since 2023, and the catch-up window keeps shrinking. Anyone still betting on model superiority as a business strategy is ignoring public data.
The hardware analogy is generous. Chip performance cycles lasted years. Model cycles last weeks. By the time a procurement team finishes evaluating one frontier model, the next one has already undercut it.

Claude Code is excellent. That’s exactly the problem. The same tools that make developers productive make it trivial to clone the tooling itself. An agentic coding pipeline can rebuild an agentic coding pipeline. Every improvement to AI developer tools lowers the barrier to replicating those exact tools.
No other enterprise software category has this vulnerability. Salesforce can’t be weaponized against itself. Oracle can’t generate its own replacement. But a competitor with an API key and a free weekend can spin up a Claude Code alternative. The MCP integrations, the Gmail and Slack connectors, the calendar sync: these are features, not moats. They’re the kind of features that get cloned in hackathons.

There’s a tell hiding in plain sight. Claude through its integrated tools feels sharper than Claude through its API. Same model, noticeably different experience. Whether this comes from intentional system prompting, optimized context management, or a lucky architectural choice doesn’t matter. The effect is clear: the integrated experience is better, and that keeps you inside the ecosystem.
This works today. It won’t work forever. Cheaper models keep closing the quality gap. The moment the open-API experience crosses the “good enough” threshold, the integrated advantage evaporates. Anthropic knows this. That’s why their real strategy must be something deeper.
The real strategy is already in motion
Look at what Anthropic already sees through its current integrations. Your codebase. Your commit history. Your architecture decisions. Your bug priorities. Your documents. Your emails. Your calendars. Your chat logs. Each MCP connector is another data pipeline flowing into their systems.
This is data gravity, and we know how it plays out because we lived through the cloud version. Once your data lives on someone else’s infrastructure, the cost of leaving becomes technical, organizational, and political. I’ve migrated clouds. Every CTO who has done it will tell you the same thing: it takes years, costs millions, and half the time you end up staying anyway.
The difference here is speed. Cloud migration took companies years to reach painful depth. AI integrations achieve the same depth in months. The data accumulation is faster, the workflow dependencies are stickier, and the switching costs compound daily.

Claude Cloude is not hypothetical
If Anthropic has your data and can see the prompts you use to change it, they have a near-complete model of your business. What you build, fix, prioritize, and ignore. They don’t just serve your business. They are a working replica of it.
The logical next step is becoming a cloud provider. Host the data. Run the workloads. Mediate every interaction between you and your own infrastructure. Not an AI company with cloud features. A cloud provider built on the deepest possible understanding of your business, because it watched you build it from the inside.
That is lock-in that makes AWS look optional by comparison.

The clock is already running
Every integration you enable, every workflow you build on their connectors, every document you route through their models adds gravitational mass. The switching cost compounds with each passing week.
The CTO who says “we’ll evaluate alternatives next quarter” is running the same playbook that locked entire industries into AWS for a decade. Except this time, you’re not handing over commodity infrastructure. You’re handing over your decision-making patterns and your competitive intelligence.
The question isn’t whether Claude Cloude is coming. It’s whether your organization understands what it’s trading before the gravity becomes inescapable.