sdlcnext.com

Blog

25 posts

Opus 4.7: New Model, or Old Model Rebranded?

Evidence that Anthropic degraded Opus 4.6 before shipping 4.7, and why the opacity matters more than the benchmarks.

claude anthropic opus-4-7 model-behavior ai-transparency
Read more →

Agent Teams: The 4x Token Bet That Ends Rubber-Stamp Reviews

Claude Code's Agent Teams spawn independent AI sessions that argue with each other. Four reviewers with adversarial debate clauses find what one reviewer misses, at 4x the token cost.

claude-code agent-teams code-review ai-agents adversarial-review
Read more →

Intelligence Is Not a Moat. Your Data Is.

AI hyperscalers can't defend on model quality or developer tools. The real play is data gravity, and it looks a lot like becoming a cloud provider.

ai cloud lock-in anthropic commoditization data-gravity
Read more →

Uber's 20-Million-Worker Bet Against the AI Transition

At Abundance360, Dara Khosrowshahi put a number on Uber's labor hedge: 10 million workers today, 20 million by 2035, even as automation accelerates. It is the most specific corporate plan for the AI transition from any major CEO on record.

moonshots uber ai automation labor dara-khosrowshahi gig-economy platforms
Read more →

Uber's Bet: Humans Are Harder to Platform Than Robots

Dara Khosrowshahi has 20+ AV partners, 15 cities by year-end, and a target of running more robotaxi rides than anyone by 2029. The deepest line of the episode is the one that inverts the whole robotaxi race: humans are harder to platform than robots.

moonshots uber robotaxi autonomous-vehicles dara-khosrowshahi waymo joby platforms
Read more →

Your Best AI Agents Should Fight Each Other

Sixty years of team research and a decade of multi-agent AI literature converge on the same answer: harmony kills decision quality. The economics of adversarial AI agents now make structured friction the default for serious multi-agent architecture.

ai-agents multi-agent-systems agentic-coding team-architecture adversarial-ai
Read more →

Moonshots Ep. 242: Elon's TerraFab, the End of Human Driving, and Chamath's Terminal Value Warning

Elon announced a 1-terawatt chip factory across Tesla, xAI, and SpaceX. Waymo crossed 170 million autonomous miles. And Chamath warned the S&P's 22x free cash flow multiple is on track to compress to 7x or worse as AI dissolves every moat in sight.

moonshots terrafab elon-musk robotaxi terminal-value ai-infrastructure chamath
Read more →

Moonshots Ep. 241: Eric Schmidt on the 92-Gigawatt Wall and the San Francisco Consensus

Eric Schmidt at Abundance360 says we're 10 to 15 percent into AI's impact, recursive self-improvement is still an open scientific problem, and America's binding constraint is 60 nuclear plants worth of electricity it does not have.

ai-agents moonshots eric-schmidt recursive-self-improvement ai-infrastructure robotics
Read more →

Frontier AI Is Profitable Only If You Don't Show Up

Every frontier AI lab except Google loses money serving active users. The $20/month subscription works only because most subscribers barely use it. Here's what the numbers actually say.

ai economics anthropic openai google llm unit-economics
Read more →

Personal AI Agents Are the Most Privileged Software You've Ever Run

OpenClaw's 288 security advisories and a 1-click RCE show what happens when personal AI agents get broad tool access without matching security hygiene. The controls exist. Anthropic's own computer-use tool mandates most of them. Deployment is the gap.

ai-agents security openclaw prompt-injection agentic-systems
Read more →

Moonshots Ep. 240: Jensen's Trillion-Dollar Bet, Anthropic's Enterprise Win, and the CS Job Market in Free Fall

NVIDIA sold a trillion dollars of future compute. Anthropic captured 73% of first-time enterprise customers. CS graduate placement sits at 19%. The pace is no longer a prediction. It's the ledger.

nvidia anthropic moonshots future-of-work openclaw
Read more →

Moonshots Ep. 239: The Hard Takeoff Has Already Begun

Elon Musk confirms recursive self-improvement is already underway, Optimus 3 starts production this summer, and a 10x global economy within a decade is, in his words, a 'fairly comfortable prediction.'

moonshots recursive-self-improvement robotics singularity future-of-work elon-musk
Read more →

Your AI Code Factory Is a Funnel. Start Treating It Like One.

Most engineering teams adopt AI coding patterns by opinion. The alternative: instrument your code generation pipeline before you commit to any framework, model, or toolchain.

ai-engineering agentic-coding developer-productivity measurement
Read more →

Moonshots Ep. 238: The Internet Is Being Rebuilt for Agents, Not Humans

Meta's acquisition of Moltbook and GPT-5.4's math breakthrough aren't separate stories. They're the same story: the network effects that built the web are now firing for trillions of AI agents.

ai-agents future-of-work moonshots gpt-5 recursive-self-improvement
Read more →

Moonshots Ep. 237: OpenClaw and the Personal AI Agent Revolution

Personal AI agents running locally on commodity hardware are collapsing the cost of building autonomous organizations to almost zero. Here's what that actually means for how work gets done.

ai-agents local-ai automation future-of-work moonshots
Read more →

AI Won't Replace Engineers. It Will Unleash Them on Everything Else.

The 'AI is coming for engineering jobs' narrative misreads the history of the profession. Software engineers have always automated themselves out of repetitive work. If AI frees up their time, they won't disappear — they'll start automating every other knowledge function in the organisation.

AI software-engineering automation knowledge-work strategy
Read more →

How Agile Must Evolve When Implementation Is Cheap

Agile was designed around the assumption that writing code was the bottleneck. AI broke that assumption. Here is what needs to change.

agile AI methodology software-delivery
Read more →

Spec-Driven Development: The Missing Link in AI Coding?

If unstructured prompting has plateaued at 10% productivity gains, the obvious next question is: what happens when you give AI better instructions? Spec-driven development is the most serious answer to that question so far.

spec-driven-development AI methodology software-engineering
Read more →

The Research Gap: Why AI Coding Fails Before a Single Line Is Written

The biggest source of AI-generated code failures isn't the model, the prompt, or the tool. It's what happens — or doesn't happen — before the implementation stage begins. Context preparation is the missing discipline.

AI research context software-engineering methodology
Read more →

Don't Use AI to Build Faster. Use It to Learn Faster.

Every productivity study in this series points to the same ceiling: modest gains on existing codebases, instability risks, plateauing returns. The real opportunity isn't optimisation. It's experimentation.

AI strategy product software-engineering
Read more →

Slides

AI & Developer Productivity: The Real Numbers

93% of developers use AI coding tools — but productivity gains have plateaued at ~10%. A data-driven analysis of 121,000 developers across 450+ companies: what the research actually shows, what's working, and what the strategic opportunity really is.

AI developer-productivity DORA spec-driven-development data
Read more →

Where AI Actually Delivers ROI: A Practical Guide

Not all AI investment is equal. The data on who benefits most, which use cases have the best returns, and what organisational foundations have to be in place before AI delivers at all.

AI developer-productivity ROI engineering-leadership devex
Read more →

Same Tools, Wildly Different Outcomes

Data from 67,000 developers shows that AI acts as an amplifier — it makes good engineering organisations better and struggling ones worse. This is a management problem, not a tooling problem.

AI engineering-leadership developer-productivity devex
Read more →

What the Research Actually Shows: DORA, METR, and GitClear

Three independent research programmes have now published rigorous data on AI's impact on software delivery. The results are more nuanced — and more concerning in places — than the vendor claims suggest.

AI DORA research data code-quality
Read more →

The AI Productivity Paradox: 93% Adoption, 10% Gains

The data is in from 121,000 developers across 450+ companies. AI adoption is near-universal — but productivity gains have been stuck at around 10% for over a year. Here's why that gap exists and what it actually means.

AI developer-productivity data research
Read more →