The gap between AI-native software teams and everyone else — and what it takes to cross it.
Boris Cherny, Claude Code lead at Anthropic, confirmed in March 2026 that Claude Code is 100% written by Claude Code. StrongDM ships production software with 3 engineers, zero humans writing code, and $1,000 per engineer per day in AI compute.
A 2025 METR randomised controlled trial found that experienced developers using frontier AI tools completed tasks 19% slower than without them. They believed they were 24% faster. Wrong on both direction and magnitude.
This talk is about the gap between those two realities — what's causing it, what the transition actually looks like, and why most organisations are misreading the signals.
This session is built for the people responsible for engineering capability and delivery — not developers in general, but the leaders deciding how AI fits into the way their teams work.
For those making investment decisions on AI tooling, team structure, and capability: this session gives them a framework to evaluate where they are and a clear-eyed view of what the transition requires.
For those running teams day-to-day and feeling the gap between the AI productivity narrative and what they actually see: this session names the J-curve and gives them the language and evidence to work with.
For those building the internal infrastructure that AI-native development depends on: this session connects their work directly to the org-level transition — and explains why CI/CD and toolchain choices are now strategic decisions.
Dan Shapiro's Five Levels of Vibe Coding — from spicy autocomplete to the dark factory — gives leaders a concrete framework to assess their current state. Most discover they're two levels behind where they thought they were.
The DORA 2024 data on the J-curve — why a 25% increase in AI adoption correlated with a short-term drop in both delivery throughput and delivery stability, and why organisations that push through the dip come out the other side ahead.
A detailed breakdown of StrongDM's architecture — external scenarios as holdout sets, digital twin universe, no humans writing or reviewing code — so leaders can distinguish genuine AI-native practice from marketing noise.
You cannot dark factory your way through a legacy system. A four-stage migration path for organisations with real codebases, real teams, and real constraints — starting where they are, not where they wish they were.
The bottleneck has moved from implementation speed to specification quality and AI-native execution. Concrete guidance on where to direct engineering investment, upskilling, and org design in 2026.
Junior developer employment down 67% in the US. AI-native startups generating 5–6× the revenue per employee of traditional SaaS. The structural shifts that make this transition urgent — not optional.
Every claim in this session is sourced. The talk draws on peer-reviewed research, industry data, and first-hand accounts from the teams operating at the frontier.
19% slower. Experienced developers using frontier AI tools on their own codebases.
100% of Claude Code written by Claude Code. Confirmed by Boris Cherny, March 2026.
67% decline in US junior developer job postings since peak.
5–6× the revenue per employee at top AI-native startups vs the $610K SaaS average.
Open with the two realities — the dark factory frontier and the METR slowdown. Sets up the core tension and establishes that this isn't a binary "AI works / AI doesn't work" debate.
Dan Shapiro's framework. Where the audience sits, where the ceiling is, and what distinguishes each level. The psychological barrier at Level 3 — letting go of the code — is where most teams stall.
Inside StrongDM's architecture. External scenarios, digital twins, $1k/engineer/day compute. The frontier labs — Anthropic and OpenAI — building their own tools with their own tools.
The J-curve. The Copilot trap. Org structures designed for a world where humans write code. The talent cliff. The economics of AI-native companies and what they imply for everyone else.
The brownfield migration. Where to invest. Spec quality and AI-native execution as the new bottleneck. Practical guidance for organisations starting where they are, not where they wish they were.
15 minutes of structured Q&A. Kevin has a comprehensive briefing document covering the most commonly challenged claims — METR study design, data sourcing, confidence levels — and handles hard questions directly.
Kevin Ryan & Associates — AI-Native · Platform Engineering · Author
"I used to direct teams of software engineers. Now I coordinate AI agents."
30 years in enterprise technology. 14 professional certifications including GitLab ×9 and GitHub ×4. 40+ enterprise clients and £20m+ in programme budgets delivered. Currently writing Spec Driven Development (sddbook.com) — a book directly addressing the spec quality bottleneck this talk describes. Published author of AI Immigrants. Remote-first. Budapest · Dublin · London.
If this sounds right for your audience, get in touch. We'll talk through the fit, the format, and what works for your context.