sdlcnext.com
Tags: moonshots · ai-policy · ai-infrastructure · future-of-work · personalized-medicine

Moonshots Ep. 248: Permission Is the Bottleneck Now

Capability shipped on time. What slipped is licence. Episode 248's actual throughline is that the AI race is now an institutions race.


Viewpoint

A 20-year-old Texan threw a Molotov cocktail at Sam Altman’s house on April 10. Three days later someone fired shots at his Russian Hill property. The same week Maine passed the first statewide data center moratorium in U.S. history, and Festus, Missouri voters fired four city council members for approving a $6 billion data center. None of it had anything to do with capability.

That is the throughline of Moonshots Episode 248. Capability shipped on time. Anthropic dropped Opus 4.7 the morning of the recording, with the knobs deprecated in favour of natural-language prompts and a 3x bump in input image size. Google’s TurboQuant landed a 6x memory reduction and 8x context boost, then got reverse-engineered into open source within a week by developers pointing Claude Code at the paper. Sid Sijbrandij dumped 25 terabytes of his own body data into ChatGPT, found an off-label drug that 19 oncologists had missed, and has been relapse-free since 2025. Capability is fine. What slipped is permission.

Capability is no longer the constraint

Opus 4.7 is, in Alex Wissner-Gross’s verdict, “a solid point release of Opus.” The interesting part is what got removed. Temperature is gone. Reasoning-token budgets are gone. The migration notes between 4.6 and 4.7 read like a deletion log: dials replaced with prompts, parameters replaced with natural language. If you want determinism, the documentation tells you to forget about it.
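The shape of that migration can be sketched with illustrative request payloads. The model identifiers and the steering phrasing below are assumptions for illustration, not Anthropic's documented values:

```python
# Old-style request: sampling behaviour controlled by numeric dials
legacy_request = {
    "model": "claude-opus-4-6",  # hypothetical identifier, for illustration
    "max_tokens": 1024,
    "temperature": 0.2,          # dial: suppress randomness
    "messages": [{"role": "user", "content": "Summarise the incident report."}],
}

# 4.7-style request as the episode describes it: dials deleted,
# the same intent expressed as natural language in a system prompt
migrated_request = {
    "model": "claude-opus-4-7",  # hypothetical identifier, for illustration
    "max_tokens": 1024,
    "system": "Answer conservatively and consistently; avoid creative variation.",
    "messages": legacy_request["messages"],
}

print("temperature" in migrated_request)  # False: the dial no longer exists
```

The point of the deletion log is visible in the diff: the parameter disappears and its intent moves into prose the model interprets, which is exactly why determinism stops being something you can request.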

The capability curve stopped being a chip story. Anthropic’s new paper on weaker models supervising stronger ones works as a tower-of-alignment proxy for human oversight of superintelligence. TurboQuant takes the KV cache to one bit per parameter, and Dave Blundin and Alex are arguing about whether ternary at 1.58 bits is the optimal architecture or just the next stop on the way to a sub-binary numerical paradigm. Eight times the effective brain memory thinking about a single problem in a single moment. None of that hit a chip wall.
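A back-of-envelope sketch shows why quantizing the KV cache is the lever here. The model configuration below is a hypothetical 70B-class shape chosen for illustration, not TurboQuant's published setup; the paper's end-to-end 6x figure is smaller than the raw per-entry ratio, presumably because other state stays at higher precision:

```python
import math

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bits):
    # K and V tensors: one entry per layer, per KV head, per position
    return 2 * layers * kv_heads * head_dim * seq_len * bits / 8

# Hypothetical 70B-class config (illustrative, not a real model card)
cfg = dict(layers=80, kv_heads=8, head_dim=128, seq_len=128_000)

fp16_cache = kv_cache_bytes(**cfg, bits=16)
one_bit_cache = kv_cache_bytes(**cfg, bits=1)

print(f"fp16 KV cache:  {fp16_cache / 2**30:.1f} GiB")
print(f"1-bit KV cache: {one_bit_cache / 2**30:.1f} GiB")
print(f"per-entry ratio: {fp16_cache / one_bit_cache:.0f}x")  # 16x

# Ternary weights carry log2(3) bits of information per parameter
print(f"ternary bits/param: {math.log2(3):.3f}")  # ~1.585
```

The same arithmetic explains the 1.58-bit number in the ternary debate: three states per parameter is log2(3) bits, which is why "sub-binary" is a real question rather than a slogan.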

[Figure: capability curve continuing while permission frictions stack up]

The actual rate-limiter is permission

Stanford’s 2026 AI Index pulled the public data into one place: SWE-bench scores from 60% to 97%, generative AI at 53% global adoption within three years, model transparency scores down from 58 to 40, recorded AI incidents up from 233 to 362, and public optimism stuck at 23% against expert optimism at 73%. Alex’s hot take was “too little, too late, too frequent,” because annual reports cannot capture a daily-cadence revolution. The numbers pointing at a permission problem are the only ones still moving.

Maine moved first: an 18-month statewide moratorium on new data centers, time for “the task force to study impact,” which in practice is time for everyone else to lock in their queue position. Between March and June of 2025, $98 billion of data center projects got blocked or delayed by community opposition. Eleven states have active legislation drafted for moratoriums. New Hampshire went the other way and passed an AI right to compute, which is how you can tell this is now a 50-state policy fight.

Alex’s perverse read: thank the data-center-ban states. They are accelerating the orbital Dyson swarm. If the regulatory regime keeps pushing AI compute out of suburban Festus and into low Earth orbit, the U.S. wins by default on the sun-synchronous-orbit timeline. That is only true if the orbital build-out gets through. If it doesn’t, the bans are exactly what they look like, and the win goes to whoever holds the spectrum.

The youth jobs freeze nobody can vote against

U.S. software developer employment in the 22-to-25 age bracket has dropped nearly 20% since 2024. Older developers, age 30 and up, gained headcount over the same period. Customer service, legal support, administrative roles all show the same shape on Stanford’s chart: a jagged downward line for early career, a gentle upward line for everyone else.

The dangerous part is the mechanism. Companies are not firing young workers. They are not hiring them in the first place. There is no unemployment spike, no layoff event, no policy trigger. Standard labour market monitoring picks up zero signal. As Peter put it, the drop is politically invisible.

Salim’s framing was Arab Spring: young men with no job, no house, no family, an angry cohort with nothing to lose. Dave’s MIT story landed harder. Three Princeton chip-design seniors, all with offers in hand (NVIDIA, a bank, a grad-school slot), and every one of those offers the worst possible choice right now, because that brainpower becomes a commodity within two years of ASI. The advice: stick together, start a company, surf the closing window before it shuts.

[Figure: 22-25 developer employment dropping ~20% with no layoff signal]

Coase’s law dies on Jack Dorsey’s whiteboard

Jack Dorsey wants 6,000 direct reports at Block. Three roles only: IC/operator, manager, leader. The org chart goes from a hierarchy of supervision to what Salim calls a network of intent, with AI as the translation layer.

Alex’s read: 6,000 direct reports means zero direct reports. The Dunbar limit is around 150; anything past that, and the AI is the actual CEO, with Jack as a figurehead training it with every interaction he oversees. Salim’s in-progress organizational-singularity paper is documenting the shape: machine mediation collapses management bandwidth, and transaction costs leave the firm faster than they enter it. Jack Welch’s line from GE’s 2000 annual report (“the minute the metabolism of your company is slower than the outside world, you’re dead”) gets cited as if it were 2026 commentary.

The Macrohard side of the same argument is Tesla and xAI building a system that observes employee keyboard and mouse input across a target company and trains agents to replace the operations wholesale. You install Macrohard, it watches, it absorbs. The permission problem is whether the legal and labour structures that built mid-market firms can survive a buyer that prices the firm by the scale of its tacit knowledge.

[Figure: old org pyramid vs Dorsey's flat 6,000-report structure with AI as middle layer]

Personalized medicine is a permission problem now

Sid Sijbrandij founded GitLab. He got told stage-four cancer would kill him. He stopped being a patient and became a founder. He built a private team of oncologists, fed 25 terabytes of his own body data into ChatGPT, and the model surfaced an off-label drug approved for a different cancer that no one had tried on his type. His team built 19 custom DNA vaccines, each one tuned to attack only his cancer cells. Relapse-free since 2025.

The science is solved. The bottleneck is FDA structure. The agency is engineered for population medicine, not for n=1. Under current leadership it has moved from requiring two pivotal clinical trials to one, started accepting Bayesian statistics in some cases, and David Fajgenbaum’s repurposing project at Penn is finding cures by running already-approved drugs against new diseases. None of that scales the way Sijbrandij’s case scales: his n=1 was made possible by a $14 billion personal exit. The permission stack, from FDA approval down to insurance reimbursement, is what determines whether the next 10 billion get the same shot.

Salim’s “How dare you?” Greta moment was aimed at Pause AI. If AI is the new bottleneck on cures, then advocating for slowing it down is a position that has to defend itself against every individual outcome it would have prevented. Whether that calculus holds at population scale is a separate argument. At n=1 it is not close.

The strategic moves are about licence, not silicon

Amazon paid $11.57 billion for Globalstar. The headline was 25 satellites against Starlink’s 10,000. The actual asset was 25.225 megahertz of ITU-cleared 2.4 gigahertz spectrum across 120 countries. Phone-direct connectivity that does not get blocked by your fingers, because the wavelength is short enough to pass through a hand.
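Why a sliver of cleared spectrum commands that kind of premium falls out of the Shannon-Hartley bound: bandwidth is the hard multiplier on capacity, and ITU clearance is what makes the bandwidth usable. The SNR values below are illustrative assumptions, not Globalstar link budgets:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound for one channel: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

band_hz = 25.225e6  # Globalstar's ITU-cleared allocation at 2.4 GHz

for snr_db in (0, 10, 20):  # assumed link SNRs, purely illustrative
    cap = shannon_capacity_bps(band_hz, snr_db)
    print(f"SNR {snr_db:>2} dB -> ~{cap / 1e6:.0f} Mbps ceiling per reuse of the band")
```

Every satellite beam reuses the same 25.225 MHz, so the ceiling multiplies across the constellation, which is why the allocation, not the 25 satellites, was the asset.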

The reason Amazon paid premium was that the spectrum is no longer available for anyone else. Apple wanted a second satellite vendor for the same Verizon-versus-T-Mobile reason it wants a second supplier for everything. Verizon and T-Mobile are at the terrestrial layer. The new fight is Starlink versus Amazon-via-Globalstar at the orbital layer, and Apple’s iPhone and Apple Watch are the wedge into both.

Starlink’s V3 deploys at 1 terabit per second downlink per satellite, 60 terabits added per Starship launch. The 40,000-satellite plan needs three Starship launches per week over three years. That is a logistics problem, not a chip problem. The 120,000-satellite V4 plan after that is still a logistics problem. The permission stack here is FCC, ITU, and the launch licence cadence at Boca Chica.

[Figure: the permission stack, silicon abundant at the base, social licence scarce at the top]

The honest counterargument

The capability case is still real. Opus 4.7’s bio benchmark progress is non-trivial. TurboQuant’s open-source reverse-engineering means the hardware moat shrinks every quarter. The alignment-supervision loop is pushing on the most important capability problem there is. None of that goes away because Maine passed a moratorium.

The harder argument is that permission frictions are mostly protective. The Pause AI Discord server is downstream of a real public concern. The 23% optimism number is a leading indicator, not noise. Stanford is right to track AI incidents climbing, even if the cadence is wrong. If the FDA collapsed from two trials to zero overnight, the upside is fewer Sijbrandij-grade individual outcomes lost, and the downside is industrial-scale unsafe drugs reaching market faster than the surveillance system can detect harm. The panel’s “go entrepreneurial, surf the singularity” framing under-weights what failure mode looks like at scale.

Both can be true. Capability is not the constraint anymore, and permission frictions are mostly protective. The optimization problem is which permissions to relax and in what order. That problem is not technical and is not getting solved by a frontier lab.

What 248 actually argued is that the AI race is now an institutions race. Spectrum, regulatory regime, organizational shape, and social licence are the variables that matter. The team that wins the next two years is the team that gets a working permission stack first.


