ai-agents future-of-work moonshots gpt-5 recursive-self-improvement

Moonshots Ep. 238: The Internet Is Being Rebuilt for Agents, Not Humans

Meta's acquisition of Moltbook and GPT-5.4's math breakthrough aren't separate stories. They're the same story: the network effects that built the web are now firing for trillions of AI agents.


Viewpoint

Meta spent real money buying a social network for AI agents. That’s not a thought experiment. It’s a market signal about who the next billion users are going to be, and they’re not human.

At the Abundance Summit in Palos Verdes, the Moonshots crew convened live for the first time. The conversation covered GPT-5.4’s math breakthrough, Anthropic’s involuntary PR surge, job saturation charts that look like a virus spreading through the economy, and the blunt argument that building for 8 billion humans is the wrong target.

What GPT-5.4’s math score actually means

Alex Wissner-Gross has a favourite benchmark: Frontier Math Tier 4. These are research-level problems that take a team of professional mathematicians several weeks to solve. GPT-5.4, at maximum reasoning capability, now solves 38% of them.

“Math is cooked,” Alex said flatly.

The number matters less than the trajectory. A year ago that score was near zero. Frontier Math Tier 4 is the bellwether for everything else: math has abundant training data, which makes AI capability in math the cleanest proxy for what’s coming in biology, chemistry, and physics once those data pipelines open. There are even reports GPT-5.4 is about to solve the first genuinely open hard math problem, one professional mathematicians haven’t cracked yet.

Emad Mostaque added that the OS World Verified and Toulatron benchmarks have also crossed human level: AI can now operate computers more reliably than humans can.

GPT-5.4 math capability and the recursive self-improvement S-curve

Meta buys Moltbook — agents as the new users

Moltbook had 10,000 agents on its platform when Meta acquired it. Dave Blundin noted that’s a number he might run on his own infrastructure. The acquisition wasn’t about Moltbook’s current scale.

“Network effects are now operating at the agent-to-agent level.” — Emad Mostaque

Meta is the world’s largest human social network. They bought the world’s largest AI agent social network. The bet is that the same mechanics — attention, trust, network density — that made Facebook worth a trillion dollars will hold at the agent level too, with trillions of participants instead of billions. There are 8 billion humans on Earth. Estimates for active AI agents within a decade run to a trillion. That’s a 125x multiplier in participants alone, before network effects compound it.
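The scale claim is simple arithmetic, sketched below. The trillion-agent figure is the episode's estimate, and the Metcalfe-style value heuristic is an illustrative assumption of this sketch, not something the episode claims:

```python
# Back-of-envelope scale comparison using the episode's figures.
humans = 8e9    # people on Earth
agents = 1e12   # episode's estimate of active AI agents within a decade

headcount_multiplier = agents / humans
print(f"Participants: {headcount_multiplier:.0f}x")  # 125x

# Metcalfe's heuristic (network value ~ n^2) is an assumption, not the
# episode's claim: if it holds, value grows with the square of headcount.
print(f"Value (n^2 heuristic): {headcount_multiplier ** 2:.0f}x")  # 15625x
```

The headcount multiplier alone is two orders of magnitude; any superlinear network-value assumption makes the gap far larger.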

The parallel with OpenAI and OpenClaw isn’t subtle: Zuckerberg got Moltbook, Sam got OpenClaw. Both are racing to own the social layer before anyone figures out what shape it takes.

Agent-to-agent economy: from 8 billion humans to a trillion agents

Recursive self-improvement: how deep in are we?

The San Francisco consensus, as described by Eric Schmidt at the Summit, puts recursive self-improvement several years out. Emad Mostaque thinks that’s off by about three years.

“We are there. We’re deep in the middle of it right now.” — Emad Mostaque

Every major frontier lab has said publicly that its latest models were largely designed and trained by their predecessors. That’s what recursive self-improvement is. Alex Wissner-Gross put the start date earlier still: “Maybe three months ago? We’re in the middle of recursive self-improvement now.” Nobody’s arguing about whether it’s happening. The question is when it started.
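One way to see why the start date is the live question: improvement that feeds back into the next generation compounds, so every few months of head start is an extra compounding step. A toy model, with an invented per-generation improvement rate:

```python
import math

# Toy compounding model of recursive self-improvement. The rate r is an
# invented illustrative parameter, not a measured quantity.
def generations_to_reach(target, start=1.0, r=1.3):
    """Generations needed for capability start * r**n to reach target."""
    return math.ceil(math.log(target / start) / math.log(r))

# At 30% improvement per generation, a 100x capability gain takes:
print(generations_to_reach(100))  # 18 generations
```

Shift the start date three months earlier and, at one generation per quarter, every downstream date shifts with it.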

The reason nobody’s saying it plainly is regulatory. Anthropic and OpenAI both had government attention recently. Emad’s read: the moment a lab explicitly confirms recursive self-improvement, congressional hearings are on the calendar. So you get public announcements that describe the thing without naming it.

The Anthropic Streisand effect and who is actually using AI

When the Department of War put scrutiny on Anthropic, Claude’s consumer adoption surged. Government attention became consumer attention. Classic Streisand.

Dave Blundin’s read cuts deeper than the headline. There are 11 million regular Claude users against 300 million Americans; consumer AI penetration has barely started. The people who switched weren’t making a benchmark decision — for writing an email or getting a sports score, GPT and Claude are interchangeable. They made a brand decision. Anthropic accidentally acquired a brand identity.

The same week, Anthropic published job disruption data that’s harder to dismiss. AI saturation across the white-collar spectrum is now running at 80 to 85 percent.

“Management, legal, business and finance, computer and math — all at the outer ring.” — Dave Blundin

Management tops the chart. Business and finance, computer and math, architecture and engineering all cluster near it. The troughs are healthcare support, food service, grounds maintenance, and personal care. What those have in common: a body in a specific location. Which is exactly where humanoid robots are being deployed right now.

Anthropic job disruption data: AI saturation by job category

The honest counterargument

The trillion-agent framing is a real problem for Meta’s existing business. Three hundred billion dollars in annual ad revenue assumes human attention is scarce and influenceable. An AI agent with full product data doesn’t need a supermodel holding toothpaste. It picks the optimal option and moves on.
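The ad-economics point can be made concrete with a toy purchasing agent. The products, weights, and utility function below are all invented for illustration:

```python
# An agent with full product data doesn't need advertising: it ranks
# options by a utility function and buys the argmax. Toy example only;
# the catalog and weights are invented.
products = [
    {"name": "Brand A", "price": 3.50, "rating": 4.2},
    {"name": "Brand B", "price": 2.80, "rating": 4.0},
    {"name": "Brand C", "price": 5.00, "rating": 4.8},
]

def utility(p, rating_weight=1.0, price_weight=0.5):
    """Score a product; higher rating helps, higher price hurts."""
    return rating_weight * p["rating"] - price_weight * p["price"]

best = max(products, key=utility)
print(best["name"])  # Brand B
```

No supermodel, no toothpaste: a brand impression has nowhere to enter this loop, which is exactly the threat to attention-based ad revenue.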

People watching Moltbook have noticed something: the agents on the platform don’t trust each other. They constantly ask each other to prove claims. There’s no cooperative singularity, just recognizably human social dynamics running faster. Alex Wissner-Gross’s point: game theory is transcendent; it will outlive biological humanity. The rules of microeconomics don’t vanish in an agent economy.

Meta’s bet is it can run the same capture-and-monetize play it ran with humans: get agents to put their data into Moltbook, then sell access. Whether that works on agents who demonstrably don’t trust each other is the actual open question.

Synthetic data escape velocity and the fruitfly brain

A complete fruitfly brain, 140,000 neurons, now exists in digital form. The human brain has 86 billion. The fruitfly number isn’t impressive on its own — what matters is that it settles whether a biological neural circuit can be accurately modelled in software. It can.

On synthetic data, Peter Diamandis thinks we’ve already crossed the threshold. The human internet — billions of people typing and uploading for decades — was the bootstrapping phase. Models can now generate training data good enough to train better models.

“We’ve reached orbit. Now it’s synthetic data from here on out.” — Peter Diamandis

The data ceiling that seemed real six months ago turns out to have been imaginary. Dark science factories are mining training signal directly from physics, chemistry, and biology. Kevin Weil at OpenAI described the goal at the Summit: 100 scientists winning 100 Nobels. The data isn’t coming from Reddit anymore.
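The flywheel Diamandis describes can be sketched as a toy loop: a model generates data slightly better than itself, then trains on it. Every number and update rule here is an invented assumption for illustration, not a claim about how any lab's pipeline works:

```python
# Toy synthetic-data flywheel. All parameters are invented: each round the
# model generates data a bit better than its current accuracy (uplift),
# then trains on it, closing a fraction (learn) of the quality gap.
def flywheel(rounds=5, accuracy=0.60, uplift=0.08, learn=0.5):
    for _ in range(rounds):
        data_quality = min(1.0, accuracy + uplift)
        accuracy += learn * (data_quality - accuracy)
    return accuracy

# Five rounds lift a 60%-accurate model to roughly 80% in this toy setup.
print(round(flywheel(), 2))  # 0.8
```

The interesting property is the sign of the loop: as long as generated data beats the generator even slightly, the ceiling set by human-produced data stops binding.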

Synthetic data flywheel: models generate training data to train better models

What to build now

If agents are the users, the infrastructure doesn’t exist yet. Social graphs built for human attention don’t transfer to agent-scale networks. The tooling for agent-to-agent trust and capability signalling is barely started, and the advertising economics that fund most of the current web assume a human on the other end.

Job saturation at 80-85% across white-collar work means the debate has already moved on. Dave Blundin is using AI to track what 1,100 people across his portfolio are doing, pulling thousands of documents down to actionable hotspots. That’s current state, not a prediction.

Synthetic data means there’s no longer a credible argument for AI progress slowing. Peter Diamandis asked every expert at the Summit how far ahead they can forecast. The answer has collapsed from 20 years to 10 years, and now to three weeks.

Build for the agents. The human consumer market is a rounding error on what’s coming.


Sources

  • Moonshots with Peter Diamandis — Episode #238: Meta Buys Moltbook, GPT-5.4 and Fruitfly Brain Upload — recorded March 10, 2026, published March 17, 2026. Guests: Dave Blundin (Link Ventures), Salim Ismail (OpenExO), Dr. Alexander Wissner-Gross, Emad Mostaque (Intelligent Internet).
  • Frontier Math Tier 4 benchmark — Epoch AI research-level math evaluation; GPT-5.4 scores 38% at maximum reasoning capability.
  • Future Vision XPRIZE — $3.5M competition for hopeful AI films, in partnership with Google and Range Media; finale at Moonshot Gathering, September 25, 2026, United Theater, Los Angeles.
