“We’re 10 or 15% into the impacts of this,” Eric Schmidt told the Abundance360 audience. Read that twice. The man who ran Google through the entire pre-transformer era is saying the version of AI that is currently rewriting white-collar work is the warm-up.
The framing sets the discount rate on every other claim in the conversation. If you think we are 80% of the way through and the curve flattens from here, the deployed reasoning agents already in production are most of the prize. If you think we are at 10 to 15%, those deployed agents are bait, and nearly all of the value is still ahead.
Schmidt is in the second camp. He is also clear that recursive self-improvement, the thing that would actually put us on the steeper slope, is still an open scientific problem. “Real recursive self-improvement is the following,” he said. “Start now, learn everything, discover things, and tell me what you learned. That query doesn’t work yet.”

The San Francisco consensus
Schmidt has a name for it: "the San Francisco consensus." Everyone he knows in the Bay Area believes this year is the year of agents, with agent use scaling at a rate limited only by electricity. Once recursive self-improvement lands, the system improves itself faster than humans can biologically keep up, and that is the superintelligence moment. The consensus puts that moment two to three years out.
The proof point Schmidt cites is Claude Code. “Everyone I know in the Bay Area that’s doing software says it was 80/20, now it’s 20/80.” Software development inverted in months. The human went from doing the work and using the model as autocomplete to writing the spec and reading the results at breakfast.
He told the story of a young programmer he had just met. The kid writes the spec, writes an evaluation function, and turns it on at seven in the evening. He has dinner with his wife. He goes to sleep. The job finishes at four in the morning. Schmidt’s reaction: “This stuff would have taken me six months and ten programmers at Google to do the same thing. This poor guy’s sleeping.”
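The workflow in that anecdote — write a spec, write an evaluation function, let the agent loop unattended overnight — can be sketched as a simple eval-gated loop. Everything here is hypothetical scaffolding: `generate_candidate` stands in for a model call and `evaluate` for the programmer's own scoring function; this is not any real agent framework.

```python
import time

def run_overnight(spec, generate_candidate, evaluate, threshold=0.95,
                  deadline_hours=9.0):
    """Loop an agent against an evaluation function until a candidate
    passes the threshold or the deadline hits.

    `generate_candidate(spec, feedback)` stands in for a model call;
    `evaluate(candidate)` is the spec author's scoring function (0.0-1.0).
    Returns the best candidate seen and its score.
    """
    deadline = time.monotonic() + deadline_hours * 3600
    best, best_score = None, float("-inf")
    feedback = None
    while time.monotonic() < deadline:
        candidate = generate_candidate(spec, feedback)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if score >= threshold:
            break  # good enough: results waiting at breakfast
        feedback = f"score={score:.2f}; improve against spec"
    return best, best_score
```

The design point is that the human's leverage has moved entirely into `spec` and `evaluate`: the loop itself is trivial, and the evaluation function is what keeps nine unattended hours from producing nine hours of garbage.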
The implication Schmidt drew next is the one most teams are not ready to absorb: a small number of very large companies plus a very large number of very small companies, because you no longer need as many people in between. Programmers are not going away, but the ones who thrive will be the ones who can think as orchestrators of parallel agents rather than as line-by-line authors.
The 92-gigawatt wall
This is the part of the conversation that should make every CFO sit up. In his congressional testimony, Schmidt put America’s electricity shortfall through 2030 at 92 gigawatts. A nuclear plant is roughly 1.5 gigawatts. That is 60 nuclear plants America has not built and does not currently have permission to build.
The math gets uglier. A gigawatt of AI infrastructure runs about $50 billion of hardware, software, and data centers. 100 gigawatts is $5 trillion over five years. Data center buildout already accounts for 1% of US GDP growth, and the current estimate is that 10% of all American electricity will be flowing into data centers within a few years.
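The arithmetic in these two paragraphs is worth checking by hand. A quick sanity pass over the figures as stated (all numbers are Schmidt's, not independent estimates):

```python
# Schmidt's figures as stated in testimony -- not independent estimates.
shortfall_gw = 92        # projected US electricity shortfall through 2030
plant_gw = 1.5           # typical nuclear plant output, in gigawatts
cost_per_gw_usd = 50e9   # AI infrastructure: hardware, software, data centers

plants_needed = shortfall_gw / plant_gw    # ~61, "roughly 60 nuclear plants"
capex_100_gw = 100 * cost_per_gw_usd       # $5 trillion over five years

print(f"{plants_needed:.0f} nuclear plants")   # 61 nuclear plants
print(f"${capex_100_gw / 1e12:.0f} trillion")  # $5 trillion
```

The figures are internally consistent: 92 / 1.5 rounds to the "60 plants" in the testimony, and $50 billion per gigawatt times 100 gigawatts is exactly the $5 trillion headline.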
The standard “efficiency will save us” rebuttal, where better algorithms do more with less, runs straight into Jevons Paradox. “As the algorithms become more efficient, you don’t need less power. You need even more, because we discover new uses.” Every cost-per-token improvement Anthropic and OpenAI ship makes the demand curve steeper, not flatter.
Which is why data centers in space stopped being a meme and started being a board agenda item. Schmidt, who is a part owner of Relativity Space, declared the heat dissipation problem technically understood. The remaining question is the business case. When ground-based power becomes the binding constraint, effectively infinite solar input in orbit starts to pencil out.

What DeepMind actually bought Google
The best capital-allocation story in the conversation is one Schmidt has told before, but it lands differently in a 92-gigawatt context. Google bought DeepMind for $600 million. Everyone thought it was crazy. Then DeepMind paid for the entire acquisition by optimizing air conditioning in Google’s data centers. Before AlphaGo. Before AlphaFold. Just by running better cooling models.
Then AlphaGo happened. Schmidt was in Seoul watching the win-probability monitor climb. “It starts at 50/50. I go watch the screen and it goes to 51%. And then 52%. And then David, who is the architect, says, ‘Well, we just plan for it to get to infinity.’ Boom.”
Then Demis Hassabis took the same team and pointed it at protein folding. AlphaFold did in an hour what used to take a PhD student four years, roughly 300 million times more efficient at the science that underwrites every drug discovery program on the planet.
The lesson Schmidt drew is the one that matters for anyone allocating capital today. “You actually have to understand the game.” Pick domains with clean evaluation functions. Hand them to teams patient enough to be ignored. Redeploy those teams when the first problem is solved. That is the playbook that produced the AI Google is running now. Most companies do not have the patience for step three.

China is the competitor we are not allowed to lose to twice
Schmidt repeats a phrase deliberately. China is “the competitor, not the enemy.” He thinks the distinction matters, and he thinks America already lost it once.
“With respect to robotics, we somehow decided it was okay for them to dominate the electric vehicle industry. This was an error. Spend some time outside of this country in Chinese cars, trust me. They are real competitors.”
The robotics argument flows directly from the EV one. A robot is stepper motors plus a brain. The factories that build EVs already build the motors at scale, and the brutal Chinese competition culture sharpens the cost curve faster than a board-dinner-driven Western counterpart can. Watch Unitree’s dancing robot. That is the low end of the market closing.
Why is the US response vertically integrated at companies like Tesla and Figure? Because there is no vendor ecosystem. "I have no choice." When you cannot buy stepper motors from a robotics supply chain, you build the supply chain. Schmidt is in the rocket business now and watching the same dynamic at Relativity Space. The majority of a rocket's cost is high-skill human assembly that current robots cannot replicate, because the tolerances are physical and the judgment happens on the spot. That ceiling will fall, but not for a long time.

The counter-read
Schmidt’s caveats deserve more weight than the consensus gives them. He spent a week doing recursive self-improvement reviews. His verdict: “The scientists do not agree on the exact approach yet. There are tests in the lab that show it, but they show it in limited cases that are kind of demos.” That is the chair of an informal AI-governance group describing the foundational claim of the San Francisco consensus as not yet working.
The Chernobyl framing also lands harder than most coverage notes. “It may take such a tragedy, hopefully a small one, to awaken the world.” Schmidt is not endorsing the idea that a modest AI catastrophe is a feature. He is describing what he thinks it will take to get China and the United States in the same room. That is a different and more uncomfortable claim than most safety discourse will admit. If your alignment plan requires a tragedy, your alignment plan is a hope, not a plan.
Schmidt’s actual ask, the one buried under the consensus headlines, is for historians, ethicists, governance experts, and political scientists to be in the room before ASI lands. Not instead of engineers. With them. The system America builds should reflect American values, by which Schmidt means freedom of speech and association, the things every American kid learns in school. The concrete near-term test is the one he keeps returning to. “It is not okay for thirteen-year-olds to be committing suicide because of an LLM. It’s just not okay.” If a society cannot hold that line, it does not get to hold any of the harder ones.
Eric Schmidt’s Abundance360 talk has the structure of a sermon. The numbers are precise. The framings are confident. The asks are concrete. Read it twice and a different shape appears: every load-bearing claim in the talk is either a guess Schmidt himself qualifies, an analogy that papers over a real disanalogy, or an argument that sounds like a strategy but is actually a confession.
That doesn’t make Schmidt wrong. It makes the consensus he is describing softer than the consensus thinks it is.

The 10 to 15 percent number has no denominator
“We’re 10 or 15% into the impacts of this.” It is the headline number, and on closer inspection it is not a measurement. There is no benchmark, no methodology, no defined endpoint that could ever make the figure falsifiable. Schmidt has authority, so the number lands as fact. It is intuition wearing a percentage sign.
The same problem haunts the San Francisco consensus more broadly. Schmidt frames it as the read of “everyone in San Francisco.” That is also the read of the people who own equity in companies that need it to be true to justify their valuations. The consensus is a community of believers reporting back to each other. That is not nothing, but it is not evidence either.
Claude Code’s 80/20 inversion is real. The subjective experience of senior programmers seeing their workflow change in months is real. Whether either fact tells you anything about the trajectory of recursive self-improvement is the open question, and Schmidt himself is the one who closes the door on the optimistic answer.
RSI is still demos, by Schmidt’s own admission
This is the most underweighted line in the entire conversation. Schmidt spent the previous week reviewing recursive self-improvement work across the labs. His verdict, in his own words: “The scientists do not agree on the exact approach yet. There are tests in the lab that show it, but they show it in limited cases that are kind of demos.”
Real RSI, the version Schmidt defines, is the query “start now, learn everything, discover things, and tell me what you learned.” That query, he says plainly, “doesn’t work yet.” The keystone claim of the San Francisco consensus is that this query starts working in two to three years, and the only person at Abundance360 with both the access and the skepticism to say so is reporting that the science is not there.
The two-to-three-year timeline exists because the people building the systems have an incentive to promise it. The honest answer Schmidt gives, when pressed on the actual lab results, is “demos.” That is a much smaller claim than the consensus headline.
The 92-gigawatt wall might be rationalizing the spend, not predicting it
The electricity number is real. The Jevons Paradox argument is also real, but it is being asked to do too much work. Jevons describes a tendency. It is not a theorem. Coal consumption rose with steam engine efficiency in nineteenth-century Britain because there was unmet industrial demand on the other side of the equation. Whether AI compute has the same kind of unmet underlying demand at $5 trillion of cumulative spend is exactly the question, and Jevons cannot answer it.
The risk the consensus is not pricing is that the demand curve plateaus before the supply curve does. If 80% of the value of agents lands in the next two years and the marginal returns to additional compute slow after that, the $50-billion-per-gigawatt math turns into a balance sheet problem instead of a national security one. That is not a prediction. It is a scenario the consensus cannot rule out, and it is the scenario that destroys the most capital fastest.
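Whether an efficiency gain raises or lowers total power draw turns on the elasticity of demand, and a toy constant-elasticity model makes the crossover explicit. This is an illustration of the rebound-effect logic only, not a claim about actual AI demand curves; the elasticity values are arbitrary.

```python
def total_power(efficiency_gain, elasticity, base_power=1.0):
    """Constant-elasticity rebound model.

    Cost per unit of work falls by `efficiency_gain` (e.g. 10x cheaper
    tokens). Demand responds as cost**(-elasticity). Total power is the
    induced demand divided by the efficiency gain.
    """
    cost = 1.0 / efficiency_gain      # cost per token falls
    demand = cost ** (-elasticity)    # workloads pulled onto the curve
    return base_power * demand / efficiency_gain

# elasticity < 1: efficiency saves power; > 1: Jevons, total draw rises
print(total_power(10, elasticity=0.5))   # ~0.32x: savings win
print(total_power(10, elasticity=1.5))   # ~3.2x: demand swamps efficiency
```

The consensus bet, restated in these terms, is that AI demand elasticity stays above 1 all the way to $5 trillion of cumulative spend. Jevons says that regime exists; it does not say we are in it.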

DeepMind is a survivorship-bias story
Schmidt’s DeepMind story is great. The acquisition paid for itself in cooling. AlphaGo. AlphaFold. The same team carried from Go to protein folding because Demis had always wanted to. It is told as a lesson in patient capital allocation, and the lesson is real.
It is also, statistically, a survivorship-bias story. Google made many billion-dollar bets on AI over a decade. DeepMind worked. Most of the others did not. The lesson “be patient and let smart people work on big problems” is true and useful, but it is not a repeatable playbook. It is a description of one outcome on the right side of a long distribution, told by the CEO who happened to be sitting in the seat when it landed.
The harder lesson, the one Schmidt does not draw, is that picking the right team and the right problem is mostly luck dressed up in narrative. Anyone who watched the AlphaGo win-probability climb from 50 to infinity in Seoul and concluded they could replicate the playbook by buying a research team and waiting is in for a long, expensive learning experience.

The robotics analogy is doing more work than the data supports
Schmidt’s argument is that robotics will repeat the EV story, with the same Chinese supply chain advantage. The mechanics sound clean. A robot is stepper motors plus a brain. The motors come from the EV factories. China wins on cost. America is on track to whiff.
The disanalogy is the brain. EVs are a manufacturing problem. Humanoid robotics is a manufacturing problem and a control problem and a data problem and a safety problem. Watch the Unitree dancing robot if you want to see a charming demo. Watch it try to fold laundry in an unfamiliar kitchen if you want to see how far the brain still has to go. China’s supply chain advantage is real for the chassis. The argument that it carries through to general-purpose physical AI is exactly the kind of analogy that sounds airtight in a conference talk and then doesn’t survive contact with the actual technical challenge.

A strategy that requires a tragedy is not a strategy
Schmidt’s most uncomfortable line is the Chernobyl one. “It may take such a tragedy, hopefully a small one, to awaken the world.” He is describing, not endorsing. He is also being honest in a way most safety discourse will not allow. If the only path to international coordination on AI runs through a modest catastrophe, the field does not have a coordination plan. It has a hope that the catastrophe is small enough.
The same softness shows up in Schmidt’s closing ask. He wants historians, ethicists, and governance experts in the room before ASI lands, and he wants the system to reflect “American values.” Both are reasonable. Neither is operationally precise. American values are not a single coherent thing. They are a contested political battleground that changes every four years. Betting alignment on a phrase that cannot survive a contested election cycle is the kind of plan that looks visionary in a keynote and incoherent in a memo.
The consensus may turn out to be right. The numbers may compound. The agents may take over by next year. Schmidt is the most credible person making the argument, and the case he makes is the strongest version of it. It is also a case built on an unmeasured percentage, a recursive belief loop, an unproven scientific claim, two analogies stretched past their breaking points, and a coordination plan that requires a tragedy. That is not a reason to dismiss the consensus. It is a reason to discount it by exactly as much as the man making it would, in private, if you asked him.
The wall is 92 gigawatts tall and America hasn’t started building. Every other AI debate is a distraction from this number.
Eric Schmidt told a Congress that spends less than 1% of its time on AI that the United States is short the equivalent of 60 nuclear plants of electricity through 2030. He testified to it, ran the numbers in public, and started a data center company so he could see the math from inside. The math is unforgiving. The teams who internalize it now will own the next decade. The teams still arguing about model evaluations will be customers.

We are 10 to 15 percent in and the rest is the prize
Schmidt put the number on the record at Abundance360. “We’re 10 or 15% into the impacts of this.” The man who watched the search index, the YouTube acquisition, and the entire transformer revolution from inside Google is telling you the version of AI doing your code review tonight is the warm-up.
The San Francisco consensus is the read of people who have shipped frontier systems for a living. This year is the year of agents. Next year the agents are doing their own AI research. The cap on the curve is electricity, not ideas. The labs Schmidt works with have a million researcher-agents penciled in for the moment the power is available.
The proof Schmidt cites is Claude Code. “Everyone I know in the Bay Area that’s doing software says it was 80/20, now it’s 20/80.” That ratio inverted in months. He told the story of a young programmer who writes a spec, writes an evaluation function, turns it on at seven in the evening, and reads the results at breakfast. “This stuff would have taken me six months and ten programmers at Google,” Schmidt said. “This poor guy’s sleeping.”
If your engineering org is still arguing about whether AI is real, you are already behind. You are not behind by a quarter. You are behind by an entire generation of how software gets built.
The 92-gigawatt math is the only number that matters
Look at the numbers Schmidt put in front of Congress. 92 gigawatts of shortfall through 2030. A nuclear plant is 1.5 gigawatts. The equivalent of 60 plants America cannot permit, build, or finance under current rules. A single gigawatt of AI infrastructure costs roughly $50 billion. 100 gigawatts is $5 trillion of capital over five years. Data center buildout already accounts for 1% of US GDP growth. 10% of all US electricity will be flowing into data centers within a few years.
The Jevons Paradox argument settles every “but algorithms will get more efficient” rebuttal in one sentence. “As the algorithms become more efficient, you don’t need less power. You need even more, because we discover new uses.” Anthropic and OpenAI shipping a 100x cost-per-token reduction is the reason demand is going up, not down. Every efficiency win pulls more workloads onto the curve.
This is why data centers in space stopped sounding insane and started getting built. Schmidt, who is a part owner of Relativity Space, declared the heat dissipation problem technically understood. The remaining question is the business case, and the business case writes itself the second ground-based gigawatts run out.

DeepMind is the playbook America stopped running
Google bought DeepMind for $600 million. Everyone called it crazy. The acquisition paid for itself before AlphaGo by optimizing data center cooling. Then AlphaGo beat the best Go player in the world. Then the same team pivoted to protein folding and AlphaFold did in an hour what used to take a PhD student four years. Roughly 300 million times more efficient at the science that powers every drug discovery program on the planet.
That is the playbook. Pick domains with clean evaluation functions. Hand them to teams patient enough to be ignored. Redeploy the team when the first problem is solved. Schmidt watched the win-probability monitor climb in Seoul while the architect said, “We just plan for it to get to infinity.” That is what conviction looks like in capital allocation, and almost no Western company outside of a handful of frontier labs has the patience to run it.

China is winning the robotics race we already lost once
Schmidt is blunt about the EV mistake. “We somehow decided it was okay for them to dominate the electric vehicle industry. This was an error.” He thinks robotics is the second swing at the same pitch and America is on track to whiff again.
The mechanism is direct. A robot is stepper motors plus a brain. The factories that build EVs already build the motors at scale. Chinese competition culture sharpens the cost curve faster than any board-dinner-driven Western counterpart can. Watch Unitree’s dancing robot. That is the low end of the market closing this quarter.
The American counter is vertical integration at Tesla and Figure. Schmidt is doing the same thing at Relativity Space because the supply chain doesn’t exist. “I have no choice.” Build the supply chain, win the cost curve, or watch China take humanoid robotics the way they took batteries and EVs. There is no third option.

Schmidt’s actual ask is concrete and overdue. Build the gigawatts. Permit the nuclear plants. Teach prompt engineering to every freshman starting in September. Put historians, ethicists, and governance experts in the room with the engineers building ASI. Hold the line on the things that should not be negotiable, starting with thirteen-year-olds and LLMs. The teams and governments that move on these inside the next twelve months will compound. Everyone else will be working for them.