sdlcnext.com
ai-agents moonshots eric-schmidt recursive-self-improvement ai-infrastructure robotics

Moonshots Ep. 241: Eric Schmidt on the 92-Gigawatt Wall and the San Francisco Consensus

Eric Schmidt at Abundance360 says we're 10 to 15 percent into AI's impact, recursive self-improvement is still an open scientific problem, and America's binding constraint is 60 nuclear plants worth of electricity it does not have.


Viewpoint

“We’re 10 or 15% into the impacts of this,” Eric Schmidt told the Abundance360 audience. Read that twice. The man who ran Google through the entire pre-transformer era is saying the version of AI that is currently rewriting white-collar work is the warm-up.

The framing sets the discount rate on every other claim in the conversation. If you think we are 80% of the way through and the curve flattens from here, the deployed reasoning agents already in production are most of the prize. If you think we are at 10 to 15%, those deployed agents are bait, and the headroom is two orders of magnitude away.

Schmidt is in the second camp. He is also clear that recursive self-improvement, the thing that would actually put us on the steeper slope, is still an open scientific problem. “Real recursive self-improvement is the following,” he said. “Start now, learn everything, discover things, and tell me what you learned. That query doesn’t work yet.”

AI impact curve with the 10 to 15 percent marker and the recursive self-improvement headroom

The San Francisco consensus

Schmidt has a name for it: “the San Francisco consensus.” Everyone he knows in the Bay Area believes this year is the year of agents, and that agent adoption then scales at a rate limited only by electricity. Once recursive self-improvement lands, the system improves itself faster than humans can biologically keep up, and that is the superintelligence moment. The consensus puts that moment two to three years out.

The proof point Schmidt cites is Claude Code. “Everyone I know in the Bay Area that’s doing software says it was 80/20, now it’s 20/80.” Software development inverted in months. The human went from doing the work and using the model as autocomplete to writing the spec and reading the results at breakfast.

He told the story of a young programmer he had just met. The kid writes the spec, writes an evaluation function, and turns it on at seven in the evening. He has dinner with his wife. He goes to sleep. The job finishes at four in the morning. Schmidt’s reaction: “This stuff would have taken me six months and ten programmers at Google to do the same thing. This poor guy’s sleeping.”
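The workflow in that story (write the spec, write an evaluation function, let the agent loop unattended until the bar is cleared) can be sketched in miniature. Everything here is illustrative: `generate_candidate`, `evaluate`, and the scoring rule are stand-ins, not anything Schmidt or the programmer described.

```python
import random

def evaluate(candidate):
    """Toy evaluation function scoring a candidate in [0, 1].
    In the real workflow this would be the spec's test suite."""
    return 1.0 - abs(candidate - 0.87)  # 0.87 is an arbitrary "right answer"

def generate_candidate(rng):
    """Stand-in for the agent proposing a new attempt."""
    return rng.random()

def overnight_run(threshold=0.99, budget=10_000, seed=4):
    """Loop unattended until the evaluation clears the bar or the budget runs out."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for step in range(1, budget + 1):
        candidate = generate_candidate(rng)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= threshold:
            break
    return step, best, best_score

steps, best, score = overnight_run()
print(f"stopped after {steps} attempts, best score {score:.3f}")
```

The human's leverage lives entirely in `evaluate`: a bad evaluation function lets the loop converge on the wrong thing while its author sleeps, which is why the spec and the eval are the work now.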

The implication Schmidt drew next is the one most teams are not ready to absorb: a small number of very large companies, plus a very large number of very small companies, because you do not need as many people in between. Programmers are not going away, but the ones who survive are the ones who can think as parallel orchestrators rather than line-by-line authors.

The 92-gigawatt wall

This is the part of the conversation that should make every CFO sit up. In his congressional testimony, Schmidt put America’s electricity shortfall through 2030 at 92 gigawatts. A nuclear plant is roughly 1.5 gigawatts. That is 60 nuclear plants America has not built and does not currently have permission to build.

The math gets uglier. A gigawatt of AI infrastructure runs about $50 billion of hardware, software, and data centers. 100 gigawatts is $5 trillion over five years. Data center buildout already accounts for 1% of US GDP growth, and the current estimate is that 10% of all American electricity will be flowing into data centers within a few years.
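The back-of-the-envelope math behind those two paragraphs, using the figures Schmidt cites:

```python
# Schmidt's figures, as quoted in the conversation.
shortfall_gw = 92    # projected US electricity shortfall through 2030, in gigawatts
plant_gw = 1.5       # rough output of one nuclear plant
cost_per_gw = 50e9   # ~$50B of hardware, software, and data centers per gigawatt

plants_needed = shortfall_gw / plant_gw   # ~61, i.e. "60 nuclear plants"
capex_100gw = 100 * cost_per_gw           # $5 trillion for 100 GW

print(f"{plants_needed:.0f} nuclear plants")
print(f"${capex_100gw / 1e12:.0f} trillion for 100 GW over five years")
```

Nothing exotic: the point is that the shortfall-to-plants and gigawatts-to-dollars conversions are one division and one multiplication away from the headline numbers, so the only real debate is over the inputs.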

The standard “efficiency will save us” rebuttal, where better algorithms do more with less, runs straight into Jevons Paradox. “As the algorithms become more efficient, you don’t need less power. You need even more, because we discover new uses.” Every cost-per-token improvement Anthropic and OpenAI ship makes the demand curve steeper, not flatter.

Which is why data centers in space stopped being a meme and started being a board agenda item. Schmidt, who is a part owner of Relativity Space, declared the heat dissipation problem technically understood. The remaining question is the business case. When ground-based power becomes the binding constraint, effectively infinite solar input in orbit starts to pencil out.

Stacked breakdown of the 92-gigawatt shortfall, showing 60 nuclear plants and the $5 trillion capital requirement

What DeepMind actually bought Google

The best capital-allocation story in the conversation is one Schmidt has told before, but it lands differently in a 92-gigawatt context. Google bought DeepMind for $600 million. Everyone thought it was crazy. Then DeepMind paid for the entire acquisition by optimizing air conditioning in Google’s data centers. Before AlphaGo. Before AlphaFold. Just by running better cooling models.

Then AlphaGo happened. Schmidt was in Seoul watching the win-probability monitor climb. “It starts at 50/50. I go watch the screen and it goes to 51%. And then 52%. And then David, who is the architect, says, ‘Well, we just plan for it to get to infinity.’ Boom.”

Then Demis Hassabis took the same team and pointed it at protein folding. AlphaFold did in an hour what used to take a PhD student four years, an efficiency gain Schmidt puts at roughly 300 million times, at the science that underwrites every drug discovery program on the planet.

The lesson Schmidt drew is the one that matters for anyone allocating capital today. “You actually have to understand the game.” Pick domains with clean evaluation functions. Hand them to teams patient enough to be ignored. Redeploy those teams when the first problem is solved. That is the playbook that produced the AI Google is running now. Most companies do not have the patience for step three.

Annotated timeline of Google's DeepMind bet, from $600M acquisition through cooling, AlphaGo Seoul, and AlphaFold

China is the competitor we are not allowed to lose to twice

Schmidt repeats a phrase deliberately. China is “the competitor, not the enemy.” He thinks the distinction matters, and he thinks America already lost it once.

“With respect to robotics, we somehow decided it was okay for them to dominate the electric vehicle industry. This was an error. Spend some time outside of this country in Chinese cars, trust me. They are real competitors.”

The robotics argument flows directly from the EV one. A robot is stepper motors plus a brain. The factories that build EVs already build the motors at scale, and the brutal Chinese competition culture sharpens the cost curve faster than a board-dinner-driven Western counterpart can. Watch Unitree’s dancing robot. That is the low end of the market closing.

Why is the US response vertically integrated at companies like Tesla and Figure? Because there is no vendor ecosystem. “I have no choice.” When you cannot buy stepper motors from a robotics supply chain, you build the supply chain. Schmidt is in the rocket business now and watching the same dynamic at Relativity Space. The majority of a rocket’s cost is high-skill human assembly that current robots cannot replicate, because the tolerances are physical and the judgment happens on the spot. That ceiling will fall, but not for a long time.

Two-column relationship diagram contrasting China's EV-derived robotics supply chain with US vertical integration at Tesla and Figure

The counter-read

Schmidt’s caveats deserve more weight than the consensus gives them. He spent a week doing recursive self-improvement reviews. His verdict: “The scientists do not agree on the exact approach yet. There are tests in the lab that show it, but they show it in limited cases that are kind of demos.” That is the chair of an informal AI-governance group describing the foundational claim of the San Francisco consensus as not yet working.

The Chernobyl framing also lands harder than most coverage notes. “It may take such a tragedy, hopefully a small one, to awaken the world.” Schmidt is not endorsing the idea that a modest AI catastrophe is a feature. He is describing what he thinks it will take to get China and the United States in the same room. That is a different and more uncomfortable claim than most safety discourse will admit. If your alignment plan requires a tragedy, your alignment plan is a hope, not a plan.

Schmidt’s actual ask, the one buried under the consensus headlines, is for historians, ethicists, governance experts, and political scientists to be in the room before ASI lands. Not instead of engineers. With them. The system America builds should reflect American values, by which Schmidt means freedom of speech and association, the things every American kid learns in school. The concrete near-term test is the one he keeps returning to. “It is not okay for thirteen-year-olds to be committing suicide because of an LLM. It’s just not okay.” If a society cannot hold that line, it does not get to hold any of the harder ones.

