
Intelligence Is Not a Moat. Your Data Is.

Frontier AI labs can't defend on model quality or developer tools. The real play is data gravity, and it looks a lot like becoming a cloud provider.


Viewpoint

By David Brainard, CTO of EverQuote.

Every AI company is selling intelligence. None of them can keep it.

A frontier model leads for weeks, maybe months. Then a cheaper one catches up through distillation. OpenAI ships something big. Two months later, a model half the size matches it on most benchmarks. Anthropic ships Opus. The same cycle plays out. If your business depends on having the smartest model, you’re betting on a lead that evaporates on a schedule.

So where does the real lock-in come from?

Intelligence has a shelf life

Set aside AGI. If someone cracks it, all bets are off and none of this matters. Short of that, the pattern is predictable: a lab releases a frontier model, enjoys a brief lead, and then distillation compresses the gains into something smaller and cheaper. What took a year of catch-up in 2023 takes two months now.
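For anyone who hasn't watched the mechanism up close: distillation trains a small student model to imitate a big teacher's full output distribution, not just its final answers. Here's a minimal sketch in PyTorch; the temperature, the frozen-teacher setup, and the placeholder models are illustrative, not any lab's actual recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soften both distributions and push the student toward the teacher.

    Temperature > 1 exposes the teacher's "dark knowledge": the relative
    probabilities it assigns to wrong answers, which carry most of the signal.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 to keep gradient magnitudes comparable
    return F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature**2

def train_step(teacher, student, batch, optimizer):
    # Illustrative step: teacher frozen, student small and cheap to serve.
    # `teacher`, `student`, and `batch` stand in for real models and data.
    with torch.no_grad():
        teacher_logits = teacher(batch)
    loss = distillation_loss(student(batch), teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The economics are the point: the teacher's year of pretraining gets compressed into a loss function the follower runs for a fraction of the cost.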

This is the commoditization curve every hardware company has lived through. Intel led on clock speed for years. AMD caught up. ARM changed the game entirely. Model intelligence is on the same path, just faster. Being the best for a quarter doesn’t build a durable business.

[Figure: Frontier model distillation cycle: lead, compress, catch up, repeat]

Tooling eats itself

Maybe the moat is in the software. Claude Code is genuinely good. MCP is a clever protocol. The integrations keep expanding: Gmail, Drive, Slack, your codebase, your calendar. Every new connector makes the experience stickier.

But the same tools Anthropic sells to help you write software can be turned back onto themselves. You can use Claude Code to fast-follow Claude Code. You can use an agentic coding pipeline to rebuild an agentic coding pipeline. The tools are not defensible because the tools make replication easy.

This is a strange property of AI developer tools that no other enterprise category shares. They’re self-undermining. The better they get at building software, the easier it becomes to build their replacement. Salesforce can’t be used to rebuild Salesforce. Claude Code could plausibly be used to rebuild Claude Code. Every competitor with API access to a frontier model can spin up a coding agent and start cloning features. The barrier to entry is a weekend and an API key.
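To make "a weekend and an API key" concrete, here's roughly what the skeleton looks like: a loop that asks a frontier model for an action, executes it, and feeds the result back. A sketch against Anthropic's Messages API; the model id, system prompt, and DONE convention are assumptions for illustration, and a real agent adds tool schemas, sandboxing, and tests.

```python
import subprocess

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder: substitute a current model id

SYSTEM = (
    "You are a coding agent. Reply with exactly one shell command to run, "
    "or the single word DONE when the goal is met."
)

def agent_loop(goal: str, max_steps: int = 10) -> None:
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        resp = client.messages.create(
            model=MODEL, max_tokens=1024, system=SYSTEM, messages=history
        )
        action = resp.content[0].text.strip()
        if action == "DONE":
            return
        # Execute the proposed command and feed the output back in. This
        # observe-act loop is the entire trick; everything else is polish.
        # (No sandboxing here: a real agent must never exec raw model output.)
        result = subprocess.run(action, shell=True, capture_output=True, text=True)
        history += [
            {"role": "assistant", "content": action},
            {"role": "user", "content": result.stdout + result.stderr},
        ]
```

Twenty-five lines, and much of the remaining distance to a credible clone is prompt iteration, done with the very tool being cloned.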

[Figure: Three layers of defensibility: intelligence, tooling, and data]

The API gap nobody’s explaining

Engineers notice something odd. Claude’s inference through its integrated tools feels sharp, like talking to the best version of the model. Claude’s inference through the API, from what is ostensibly the same model, is noticeably worse.

However it arises, the gap works in Anthropic's favor. If the API felt as good as the integrated experience, you'd have less reason to use their tools. You'd build your own. Whether it's deliberate, a side effect of tuned system prompts, or just better context management doesn't change the result: the walled garden outperforms the open field.
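You can see how much of a gap like that could come from scaffolding alone, with identical weights. The sketch below calls the same model twice, once bare and once with the kind of system prompt and gathered context an integrated tool assembles silently; the prompts and model id are illustrative assumptions, not Anthropic's actual scaffolding.

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder model id

question = "Why is this request handler slow?"

# Bare API call: the model sees only the question, cold.
bare = client.messages.create(
    model=MODEL, max_tokens=512,
    messages=[{"role": "user", "content": question}],
)

# What an integrated tool effectively sends: a tuned system prompt plus
# context it quietly gathered for you (source files, repo layout, profiles).
scaffolded = client.messages.create(
    model=MODEL, max_tokens=512,
    system="You are an expert debugging assistant. Think step by step.",
    messages=[{
        "role": "user",
        "content": f"<relevant source files and profiler output here>\n\n{question}",
    }],
)
# Same weights, often very different answers: context management does real work.
```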

But even if Anthropic wins the tools war, even if every developer runs Claude Code and every executive lives inside Claude’s chat, that’s still thin lock-in. Cheaper models keep closing the gap. Eventually, you don’t need the premium integrated experience to get your work done. Switching is annoying, but possible.

Data gravity is the real play

The deeper move is owning your assets. Not just your prompts or your preferences, but your data. Your infrastructure. The ways you mutate your data, the ways you interact with your systems, the workflows you’ve built on top of their connectors.

If Anthropic gets there, or partners with someone who can, switching gets hard. Really hard.

Consider what they already touch through Claude Code and MCP integrations. Your codebase. Your commit patterns. Your architecture decisions. Your bug priorities. Now add your production data, your customer interactions, your internal documents. Each integration is another thread of gravity.

We learned this in the cloud era. Data gravity is real. Once your data lives somewhere, the cost of moving it is technical, organizational, and political. I’ve migrated clouds. It’s the kind of cost that makes a CTO say “let’s do it next quarter” for three years running.
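The gravity is easy to put numbers on. A back-of-envelope sketch; the dataset size, egress rate, and throughput are assumed figures, and the real bill adds dual-running, re-testing, and retraining everyone who touches the system.

```python
# Back-of-envelope cloud migration cost. All figures are assumptions.
data_tb = 500                 # assumed dataset size, in TB
egress_per_gb = 0.09          # assumed internet egress rate, $/GB
sustained_gbps = 10           # assumed sustained transfer throughput

egress_cost = data_tb * 1_000 * egress_per_gb               # $45,000
transfer_days = (data_tb * 8_000) / sustained_gbps / 86_400  # ~4.6 days

print(f"Egress bill: ${egress_cost:,.0f}, wire time: {transfer_days:.1f} days")
```

The wire time is the easy part. The "next quarter" cost is everything pointed at where the data currently lives.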

[Figure: Lock-in spectrum, from switchable intelligence to immovable data gravity]

Claude Cloud

Here’s the logical endpoint. If Anthropic has your data, can see how it changes, and can see the prompts you use to change it, they have a near-complete picture of your business. What you build. What you fix. What you prioritize. What you ignore.

At that point, they basically are your business.

It would not surprise me if Anthropic or OpenAI moves toward becoming a cloud provider. A platform that hosts your data, runs your workloads, and mediates your interaction with your own infrastructure. Not an AI company that also does cloud. A cloud provider that happens to have the best AI, because it trained on everything you did inside its walls.

That is lock-in.

We accepted this deal before

From a CTO’s perspective, this might not be the wrong move. We’ve been here before.

Most of us eventually accepted lock-in with a cloud provider. We made peace with it because we understood the trade. We offloaded the undifferentiated work: servers, patches, scaling. None of that was our competitive advantage. We handed it over because what we kept was products and features. The value lived above the abstraction layer, and that made the lock-in tolerable.

[Figure: Cloud era vs. AI era: what you trade and what you keep]

An intelligence layer is different. When the AI provider sees your prompts, your data, and your decision patterns, the abstraction layer moves up. Way up. What does differentiation look like when your AI vendor understands your business as well as you do?

I don’t have the answer. But every business should be working through that question before the lock-in is already in place.
