A 20-year-old Texan threw a Molotov cocktail at Sam Altman’s house on April 10. Three days later someone fired shots at his Russian Hill property. The same week Maine passed the first statewide data center moratorium in U.S. history, and Festus, Missouri voters fired four city council members for approving a $6 billion data center. None of it had anything to do with capability.
That is the throughline of Moonshots Episode 248. Capability shipped on time. Anthropic dropped Opus 4.7 the morning of the recording, with the knobs deprecated in favour of natural-language prompts and a 3x bump in input image size. Google’s TurboQuant landed a 6x memory reduction and 8x context boost, then got reverse-engineered into open source within a week by developers pointing Claude Code at the paper. Sid Sijbrandij dumped 25 terabytes of his own body data into ChatGPT, found an off-label drug that 19 oncologists had missed, and has been relapse-free since 2025. Capability is fine. What slipped is permission.
Capability is no longer the constraint
Opus 4.7 is, in Alex Wissner-Gross’s verdict, “a solid point release of Opus.” The interesting part is what got removed. Temperature is gone. Reasoning-token budgets are gone. The migration notes between 4.6 and 4.7 read like a deletion log: dials replaced with prompts, parameters replaced with natural language. If you want determinism, the documentation tells you to forget about it.
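What the deleted temperature dial actually did is easy to show. A minimal sketch, not Anthropic's API: temperature rescales the logits before sampling, and zero collapses sampling to a deterministic argmax, which is exactly the determinism the documentation now tells you to forget.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Temperature-scaled sampling: the kind of knob Opus 4.7 reportedly removed.

    temperature -> 0 approaches greedy (deterministic) decoding;
    temperature = 1 samples from the model's raw distribution.
    """
    rng = rng or np.random.default_rng(0)
    if temperature == 0:
        return int(np.argmax(logits))           # fully deterministic
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())       # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.1]
print(sample_token(logits, temperature=0))   # always 0: the highest-logit token
```

Take the knob away and that `if temperature == 0` branch disappears with it; the only remaining control surface is the prompt.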
The capability curve stopped being a chip story. Anthropic’s new paper on weaker models supervising stronger ones works as a tower-of-alignment proxy for human oversight of superintelligence. TurboQuant takes the KV cache to one bit per parameter, and Dave Blundin and Alex are arguing about whether ternary at 1.58 bits is the optimal architecture or just the next stop on the way to a sub-binary numerical paradigm. Eight times the effective brain memory thinking about a single problem in a single moment. None of that hit a chip wall.
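The 1.58-bit figure Dave and Alex were arguing about is just log2(3), the information content of a three-valued weight. Here is a hedged sketch of absmean ternary quantization in the BitNet b1.58 style, an illustration of the idea rather than anyone's actual production code:

```python
import math
import numpy as np

# log2(3) ≈ 1.585: the bits of information in a ternary {-1, 0, +1} weight.
BITS_PER_TERNARY_WEIGHT = math.log2(3)

def ternary_quantize(w):
    """Absmean ternary quantization (BitNet b1.58-style sketch).

    Scale by the mean |w|, then round each weight to -1, 0, or +1.
    Returns the ternary codes plus the scale needed to dequantize.
    """
    w = np.asarray(w, dtype=float)
    scale = np.abs(w).mean() + 1e-8
    codes = np.clip(np.round(w / scale), -1, 1).astype(int)
    return codes, scale

codes, scale = ternary_quantize([0.9, -0.05, -1.2, 0.4])
print(codes)   # [ 1  0 -1  1]: small weights snap to zero, large ones to ±1
```

The "optimal architecture or next stop" argument is about whether that zero state earns its extra 0.585 bits over pure sign, or whether sub-binary schemes eventually win.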

The actual rate-limiter is permission
Stanford’s 2026 AI Index pulled the public data into one place: SWE-bench from 60% to 97%, GenAI at 53% global adoption in three years, model transparency score down from 58 to 40, AI incidents up from 233 to 362, public optimism stuck at 23% against expert optimism at 73%. Alex’s hot take was “too little, too late, too frequent” because annual reports cannot capture a daily-cadence revolution. The numbers pointing at a permission problem are the only ones still moving.
Maine moved first: an 18-month statewide moratorium on new data centers, time for “the task force to study impact,” which in practice is time for everyone else to lock in their queue position. Between March and June of 2025, $98 billion of data center projects got blocked or delayed by community opposition. Eleven states have active legislation drafted for moratoriums. New Hampshire went the other way and passed an AI right to compute, which is how you can tell this is now a 50-state policy fight.
Alex’s perverse read: thank the data-center-ban states. They are accelerating the orbital Dyson swarm. If the regulatory regime keeps pushing AI compute out of suburban Festus and into low Earth orbit, the U.S. wins by default on the sun-synchronous-orbit timeline. That is only true if the orbital build-out gets through. If it doesn’t, the bans are exactly what they look like, and the win goes to whoever holds the spectrum.
The youth jobs freeze nobody can vote against
U.S. software developer employment in the 22-to-25 age bracket has dropped nearly 20% since 2024. Older developers, age 30 and up, gained headcount over the same period. Customer service, legal support, and administrative roles all show the same shape on Stanford’s chart: a jagged downward line for early career, a gentle upward line for everyone else.
The dangerous part is the mechanism. Companies are not firing young workers. They are not hiring them in the first place. There is no unemployment spike, no layoff event, no policy trigger. Standard labour market monitoring picks up zero signal. As Peter put it, the drop is politically invisible.
Salim’s framing was the Arab Spring: young men with no job, no house, no family, an angry cohort with nothing to lose. Dave’s MIT story landed harder. Three Princeton chip-design seniors, all holding offers (NVIDIA, banking, a grad school slot), and every offer the worst choice they could make in this moment, because that brainpower is a commodity within two years post-ASI. The advice: stick together, start a company, surf the closing window before it shuts.

Coase’s law dies on Jack Dorsey’s whiteboard
Jack Dorsey wants 6,000 direct reports at Block. Three roles only: IC/operator, manager, leader. The org chart goes from a hierarchy of supervision to what Salim calls a network of intent, with AI as the translation layer.
Alex’s read: 6,000 direct reports means zero direct reports. The Dunbar limit is around 150. Anything past that and the AI is the actual CEO, with Jack as a figurehead training it with every interaction he oversees. Salim’s running organizational singularity paper is documenting the shape. Machine mediation collapses management bandwidth. Transaction costs leave the firm faster than they come in. Jack Welch’s 2000 annual report line (“the minute the metabolism of your company is slower than the outside world, you’re dead”) gets cited as if it were 2026 commentary.
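The bandwidth arithmetic behind that read is blunt. Assuming, purely for illustration, five minutes of managerial attention per report per week (our number, not Block's):

```python
# Back-of-envelope: can one human actually manage 6,000 direct reports?
# Assumption (ours, not the podcast's or Block's): 5 minutes of attention
# per report per week, measured against a 168-hour week.

reports = 6_000
minutes_per_report_per_week = 5
hours_needed = reports * minutes_per_report_per_week / 60
hours_in_week = 24 * 7

print(hours_needed)                   # 500.0 hours of 1:1 attention per week
print(hours_needed > hours_in_week)   # True: impossible without machine mediation
```

Five hundred hours of attention does not fit in a 168-hour week, which is the arithmetic version of "6,000 direct reports means zero direct reports."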
The Macrohard side of the same argument is Tesla and XAI building a system that observes employee keyboard and mouse input across a target company and trains agents to replace the operations wholesale. You install Macrohard, it watches, it absorbs. The permission problem is whether the legal and labour structures that built mid-market firms can survive a buyer that prices the firm by the scale of its tacit knowledge.

Personalized medicine is a permission problem now
Sid Sijbrandij founded GitLab. He got told stage-four cancer would kill him. He stopped being a patient and became a founder. He built a private team of oncologists, fed 25 terabytes of his own body data into ChatGPT, and the model surfaced an off-label drug approved for a different cancer that no one had tried on his type. His team built 19 custom DNA vaccines, each one tuned to attack only his cancer cells. Relapse-free since 2025.
The science is solved. The bottleneck is FDA structure. The agency is engineered for population medicine, not for n=1. Under current leadership it has moved from two clinical trials to one, started accepting Bayesian statistics in some cases, and David Fajgenbaum’s repurposing project at Penn is finding cures by running already-approved drugs against new diseases. None of that scales the way Sijbrandij’s case scales. His n=1 was made possible by a $14 billion personal exit. The permission stack from FDA approval down to insurance reimbursement is what determines whether the next 10 billion get the same shot.
Salim’s “How dare you?” Greta moment was aimed at Pause AI. If AI is the new bottleneck on cures, then advocating for slowing it down is a position that has to defend itself against every individual outcome it would have prevented. Whether that calculus holds at population scale is a separate argument. At n=1 it is not close.
The strategic moves are about licence, not silicon
Amazon paid $11.57 billion for Globalstar. The headline was 25 satellites against Starlink’s 10,000. The actual asset was 25.225 megahertz of ITU-cleared 2.4 gigahertz spectrum across 120 countries. Phone-direct connectivity that does not get blocked by your fingers, because at 2.4 gigahertz the wavelength (about 12.5 centimetres) is long enough to diffract around a hand rather than be stopped by it.
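The wavelength behind that claim is a one-liner to check:

```python
# Wavelength of Globalstar's 2.4 GHz band: lambda = c / f.
c = 299_792_458                  # speed of light, m/s
f = 2.4e9                        # 2.4 GHz
wavelength_cm = c / f * 100

print(round(wavelength_cm, 1))   # 12.5 cm: hand-scale, so RF diffracts
                                 # around fingers instead of being absorbed
```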
The reason Amazon paid premium was that the spectrum is no longer available for anyone else. Apple wanted a second satellite vendor for the same Verizon-versus-T-Mobile reason it wants a second supplier for everything. Verizon and T-Mobile are at the terrestrial layer. The new fight is Starlink versus Amazon-via-Globalstar at the orbital layer, and Apple’s iPhone and Apple Watch are the wedge into both.
Starlink’s V3 deploys at 1 terabit per second downlink per satellite, 60 terabits added per Starship launch. The 40,000-satellite plan needs three Starship launches per week over three years. That is a logistics problem, not a chip problem. The 120,000-satellite V4 plan after that is still a logistics problem. The permission stack here is FCC, ITU, and the launch licence cadence at Boca Chica.
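The launch arithmetic implied by those figures is worth running. Assuming each satellite contributes the quoted 1 terabit per second, so a 60-terabit launch carries 60 satellites (an inference from the quoted numbers, not a published manifest):

```python
import math

# Implied by the quoted figures: 60 Tbps added per launch at 1 Tbps per
# satellite suggests ~60 satellites per Starship (an inference, not a manifest).
sats_per_launch = 60
constellation = 40_000

launches_needed = math.ceil(constellation / sats_per_launch)
print(launches_needed)                      # 667 launches

# Cadence required to finish in three years:
weeks = 3 * 52
print(round(launches_needed / weeks, 1))    # ~4.3 launches per week
```

Under these assumptions the implied cadence runs closer to four launches a week than the quoted three, unless per-launch satellite counts rise; either way, the binding variable is launch-licence throughput, not silicon.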

The honest counterargument
The capability case is still real. Opus 4.7’s bio benchmark progress is non-trivial. TurboQuant’s open-source reverse-engineering means the hardware moat shrinks every quarter. The alignment-supervision loop is pushing on the most important capability problem there is. None of that goes away because Maine passed a moratorium.
The harder argument is that permission frictions are mostly protective. The Pause AI Discord server is downstream of a real public concern. The 23% optimism number is a leading indicator, not noise. Stanford is right to track AI incidents climbing, even if the cadence is wrong. If the FDA collapsed from two trials to zero overnight, the upside is fewer Sijbrandij-grade individual outcomes lost, and the downside is industrial-scale unsafe drugs reaching market faster than the surveillance system can detect harm. The panel’s “go entrepreneurial, surf the singularity” framing under-weights what failure mode looks like at scale.
Both can be true. Capability is not the constraint anymore, and permission frictions are mostly protective. The optimization problem is which permissions to relax and in what order. That problem is not technical and is not getting solved by a frontier lab.
What 248 actually argued is that the AI race is now an institutions race. Spectrum, regulatory regime, organizational shape, and social licence are the variables that matter. The team that wins the next two years is the team that gets a working permission stack first.
Sources
- Moonshots with Peter Diamandis, Episode #248: Sam Altman’s Attack, Amazon vs. Starlink, and What Opus 4.7 Actually Means. Recorded April 16, 2026, episode date April 18, 2026. Hosts: Peter Diamandis, Salim Ismail, Dave Blundin (Link Ventures), Dr. Alexander Wissner-Gross.
- Stanford 2026 AI Index, 9th edition, led by Yolanda Gil and Raymond Perrault, Erik Brynjolfsson directing.
- Anthropic Opus 4.7 release notes, April 16, 2026.
- Google TurboQuant paper on attention-compute quantisation.
The throughline of Episode 248 is that permission has replaced capability as the binding constraint on AI. It is a clean story. It is also a story the panel has incentive to tell. Stay sharp.
The first place to push is the capability case itself. Alex Wissner-Gross’s own verdict on Opus 4.7 was “moderately interesting,” “a solid point release,” “no Mythos.” He compared it to Mythos and felt cheated. That is not a model with capability slack to spare. That is a model where the gap between hyped expectation and delivered improvement is widening, and the panel is hand-waving past it because the throughline of “capability shipped” is more usable than the truth, which is that capability is shipping with diminishing returns at an increasing cost per token.
TurboQuant is a paper, not a deployment
Google’s TurboQuant landed a 6x memory reduction in a paper without source code. Within a week, internet developers pointed Claude Code at the paper and produced what Alex called “a better version of their quantisation approach that’s now publicly available.” Calling that reverse-engineered into open source is generous. Calling it production-ready is fiction. The Financial Times stories tracking memory supplier stock prices rising in the same week amount to the market saying it does not believe TurboQuant moves enough volume to deflate hardware demand.
This is the second time in 18 months that an algorithmic compression paper got described as the next DeepSeek moment, and the second time the actual market response has been to keep buying memory. The pattern says capability gains at the algorithmic layer are real but not as economy-deflating as the panel narrative claims. If the binding constraint really is permission, you would expect capability investment to be flat. It is not.

“Permission frictions” is a label, not a finding
The 23% public optimism number is the panel’s exhibit A for the public being wrong. That framing is exactly the kind of expert-versus-public asymmetry that has historically been a marker of bad outcomes, not bad public opinion. The same expert class confidently predicted social media would democratise discourse, that algorithmic content recommendation would not produce political polarization, and that gig-economy classification would benefit the workers being classified. Public skepticism of AI is downstream of those track records.
Stanford’s transparency index dropping from 58 to 40 is treated by Alex as a reasonable response to proliferation risk. Possibly. It is also the case that less transparent models are easier to charge more for, harder to audit, and harder to compete with. The frame that treats reduced transparency as protective conveniently aligns with the commercial interests of every frontier lab that just stopped publishing.
The Molotov cocktail at Sam Altman’s house is a crime and a tragedy. The 23% optimism number is a survey response. They are not the same thing, and treating both as instances of “permission frictions” is a category error.
The youth dev freeze data has a floor
The Stanford chart showing 22-to-25 software developer employment down ~20% since 2024 ends in September 2025. Alex flagged this on the pod. Even he caveated it: “I’ve seen studies even over the past two to three weeks that suggests that this trend has reversed itself in the past few months.”
Three things are worth tracking before declaring the youth jobs market has been politically buried by AI. First, the broader macro labour market is in a post-ZIRP correction. Tech hiring is down across age brackets. Pulling out the 22-25 cohort and attributing the drop entirely to AI requires controlling for a recession that is not in the chart. Second, the narrative effect is real: “AI replaces juniors” repeated for 18 months becomes a hiring policy whether or not the underlying productivity case holds. Third, hiring freezes are not the same as permanent labour displacement. A two-year window of locked junior positions is a delay, not a structural break.

Sijbrandij is the most survivorship-biased data point on the planet
Sid Sijbrandij founded a company that exited at $14 billion. He had stage-four cancer. He stopped being a patient and became a founder of his own oncology team. He fed 25 terabytes of his own body data into ChatGPT. His team built 19 custom DNA vaccines. He is alive.
That is one data point. The number of wealthy patients who tried analogous AI-augmented n=1 approaches and died is, by definition, not in the press cycle. The number who tried and recovered for unrelated reasons is also not. Treating Sijbrandij’s outcome as proof that the FDA is the bottleneck on personalized medicine ignores that clinical trial structure exists precisely to filter survivorship bias out of treatment efficacy claims. The agency does not get to be the problem here. It is the only mechanism that makes “personalized medicine works” a defensible claim instead of an anecdote.
The right read of Sijbrandij is that AI plus capital plus a private research team can sometimes find an existing approved drug useful for a different indication. That is exactly what David Fajgenbaum’s drug repurposing project at Penn is doing systematically. The system already exists. It is not blocked by permission. It is constrained by capital allocation.
Coase’s law dying is one CEO’s opinion
Jack Dorsey’s 6,000 direct reports is a structural claim from one CEO at one company. Block has not yet shipped the org chart. The “AI as the management layer” framing is the same framing that produced WeWork’s “elevate the world’s consciousness” prospectus. Sometimes the framing is right and sometimes it is the founder reading their own headlines.
The Macrohard story (Tesla / XAI installs that observe employee keyboard input and absorb the company) is a marketing claim, not a deployed product. The legal and labour structures around tacit knowledge transfer have been tested in court before, repeatedly, and the buyer side does not always win. Treating “you install Macrohard, it absorbs” as inevitable ignores everything M&A lawyers know about non-compete enforceability and acquired-talent retention.

The orbital pivot is its own permission problem
Amazon’s $11.57 billion Globalstar buy is real and the spectrum thesis is correct. The framing that orbital is the unblocked path that terrestrial bans push you toward is half right. The other half is that orbital deployment requires FCC, ITU, and Boca Chica launch licence approval at a cadence that has never been sustained. Three Starship launches per week for three years is a lot of regulatory clearances, not a solved logistics problem.
If the panel’s thesis is that permission is now the binding constraint, then orbital is also a permission stack. It is a different permission stack, with different regulators, but it is not “the unblocked alternative to terrestrial frictions.” Treating it as such is the kind of frame that lands a strategy in court two years later.

The closing read
Permission is a constraint. So is capability. So is capital. The panel’s “institutions race” framing makes for a clean throughline, and the operational advice that follows (lobby Maine, ship n=1 medicine, flatten the org) is mostly fine on its own terms. The framing is also a rebrand of the same defence-tech narrative the U.S. has been running since Vannevar Bush. None of it is wrong; some of it is older than it sounds.
The hard question the panel did not fully answer is which permission frictions are protective and which are protectionist. The 23% public optimism number, the FDA’s two-trials-to-one progression, the Maine moratorium, and the Stanford transparency drop are not the same kind of friction. Lumping them together makes for usable narrative. It does not make for a usable strategy.
Capability is over as a thesis. That ship sailed the morning Anthropic dropped Opus 4.7, the morning Google’s TurboQuant got reverse-engineered into open source by developers pointing Claude Code at the paper, the morning Sid Sijbrandij went public with the AI-designed regimen that put his stage-four cancer in remission. If you are still building a strategy around “are the models good enough yet,” your strategy is two years out of date. The fight that matters now is permission, and the teams that are not fighting it are already losing.
That is the actual content of Moonshots Episode 248. Read between the panel’s news cycle and the throughline is unmistakable. Spectrum is scarce. Social licence is scarce. Regulatory queue position is scarce. The org chart is collapsing. The youth dev hiring market is locked shut without a single layoff. None of these are capability problems, and none of them are getting solved by a frontier lab.
The capability case is over
Opus 4.7’s migration notes read like a deletion log. Temperature gone. Reasoning-token budgets gone. Determinism explicitly off the table. The dials and hyperparameters that used to define power-user model usage are deprecated in favour of natural-language prompts. The era of model knobs is over. The era of agentic instruction is here.
TurboQuant takes the KV cache to one bit per parameter. The KV cache is what holds the working context of a transformer. One bit. Eight times the effective brain memory thinking about a single problem, on consumer hardware, available for download within a week of a paper that did not ship source. Anthropic’s weaker-supervises-stronger paper is a working alignment loop with humans as the weaker supervisors. Capability ships. Stop arguing about it.
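One bit per value is less exotic than it sounds. Here is a hedged sketch of sign-plus-scale binarization, illustrating the general technique rather than TurboQuant's actual algorithm, which shipped without source:

```python
import numpy as np

def quantize_kv_1bit(kv):
    """1-bit KV-cache sketch: keep only the sign of each cached value,
    plus one float scale per channel (the mean magnitude).

    Illustrative of sign-based binarization generally, NOT the actual
    TurboQuant method, whose source was never released.
    """
    scale = np.abs(kv).mean(axis=0, keepdims=True)   # per-channel scale
    signs = np.sign(kv).astype(np.int8)              # 1 bit of information per value
    return signs, scale

def dequantize_kv(signs, scale):
    return signs * scale

rng = np.random.default_rng(0)
kv = rng.normal(size=(128, 8)).astype(np.float32)    # 128 cached tokens, 8 channels
signs, scale = quantize_kv_1bit(kv)
approx = dequantize_kv(signs, scale)
# Memory: an fp32 cache spends 32 bits per value; the sign tensor carries
# 1 bit per value (packable via np.packbits), a 32x reduction before the
# small per-channel scale overhead.
```

The open question any sketch like this dodges is accuracy: how much attention quality survives when the cache keeps only signs is exactly what the paper's evaluation, and the community reimplementation, had to establish.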

Permission is where the time and money go now
Maine’s 18-month data-center moratorium is not an outlier. Eleven states have legislation drafted. Festus, Missouri voters fired four city council members on election day after they approved a $6 billion data center on 360 acres. Between March and June of 2025, $98 billion of data center projects got blocked or delayed. The pattern is accelerating, and the projects that get through are the ones that pre-staged the social licence when they bought the land.
Which means you do not buy the land first anymore. You buy the elected officials’ silence first, then the land. New Hampshire passed an AI right to compute and just won the next decade of U.S. data-center capacity. Maine cancelled its position. That is a capital allocation variable, not a regulatory one. If your team treats Maine and New Hampshire as identical addresses on a build-out spreadsheet, your build-out is over before it starts.
Alex’s perverse “thank-you” to the data-center-ban states is operational, not paradoxical. If the regulatory regime keeps pushing AI compute to low Earth orbit, the U.S. wins by default on the SSO timeline. Whoever ships the orbital build-out first locks in the next two decades of compute geography. That is a launch licence problem, not a chip problem.
The youth jobs collapse is your next hiring market
The U.S. 22-to-25 software developer cohort is down nearly 20% in employment since 2024 with no layoff event. Companies are not firing them. They are not hiring them. Standard labour market monitoring picks up zero signal because there is no termination data. Parents do not know. Recruiters do not know. Voters do not know.
This is your next hiring market. Three Princeton chip-design seniors had offers in hand: NVIDIA, a bank, a grad school slot. The advice from Dave’s MIT pitch was to take none of them, stick together, and start a company. Elite cognitive work becomes a commodity within two years post-ASI. The Princeton seniors who start a company now own the thing they would have built for someone else’s equity. The ones who took the offers do not.
If your firm is hiring for senior AI roles only, you are paying market-clearing prices for the most expensive AI talent on the planet while ignoring the cohort that is starving for technical work. Pay attention to where the talent is. The next Demis Hassabis is currently 23 and unemployed.

Coase’s law dies, your org chart with it
Jack Dorsey’s 6,000 direct reports at Block is a serious operational claim, not a stunt. Three roles only: IC/operator, manager, leader. The middle layer goes to AI. If your firm has more than three role categories in its HR system right now, your org chart is already about a year behind.
Alex’s read is the right one. 6,000 direct reports means zero direct reports, which means the AI is the actual CEO. Jack Welch’s 2000 line (“the minute the metabolism of your company is slower than the outside world, you’re dead”) is now an operating manual. Coase’s law dies because transaction costs leave the firm faster than they come in. The mid-market construction firm running EXO’s profit-sharing model with AI on the operations layer is going to outpace the same firm running a five-tier hierarchy. Macrohard is going to absorb the second one.
You do not get to wait for proof of this. The firms that ran the experiment six months ago are running the productivity gap right now.

Personalized medicine gets unblocked next
Sijbrandij’s relapse-free outcome is the proof of concept. 25 terabytes into ChatGPT, an off-label drug from the AI, 19 custom DNA vaccines from his team, no recurrence since 2025. The science is finished. The remaining bottleneck is FDA structure, and the FDA has already started moving (two trials to one, Bayesian statistics in some cases, repurposing approvals via the Fajgenbaum project at Penn).
The next two years are the n=1 medicine pipeline being built behind the regulatory front line. The team that ships an AI-designed personalized vaccine pipeline at insurance-reimbursable scale wins the next decade of biotech. The team waiting for the FDA to collapse two trials to zero is going to be late. Salim’s “How dare you” framing aimed at Pause AI is not rhetorical. Every n=1 outcome that does not ship is a person who dies.
The strategic moves to copy
Amazon paid $11.57 billion for Globalstar’s 25 satellites. The actual purchase was 25.225 MHz of ITU-cleared 2.4 GHz spectrum in 120 countries. The spectrum is gone from the open market. Apple now has a second satellite vendor against Starlink. Whichever team treats spectrum as the new oil is right.
Starlink V3 ships 1 terabit per second per satellite, 60 terabits per Starship launch, three launches per week to deploy 40,000 satellites over three years. The 120,000-satellite V4 plan after that needs the FCC and the ITU to keep approving allocations at an unprecedented cadence. That is a permission stack from ground floor to orbit, and it has to be lobbied as aggressively as a chip pipeline.

The closing call
Capability is over as the binding constraint. Permission is where the moats are built. Spectrum, FDA queue position, social licence, organizational shape, and elite labour market access are all moving fast and most teams are not pricing them. The teams that are pricing them have already taken the next two years off the table.
If you are still optimising your AI strategy around model selection and benchmark performance, you have already lost the round. The competitors who treated permission as the actual scarce resource have booked the spectrum, lobbied the moratorium states, sketched the flat-org playbook, and started the n=1 trial pipeline. The next two years are not coming back.