Correct Answer, Infinite Argument
Why framework costs compound, AI makes it worse, and the answer was never actually close
A brief history of a debate that outlived its premise
There is a debate in software that has been running continuously since approximately 2005. It concerns whether you should build against web platform primitives — HTTP, Fetch, the URL spec, CSS cascade, Web Crypto — or against a framework that wraps them. The debate has absorbed a quantity of human attention that, if converted to engineering hours, could have rewritten the platform itself several times over.
The debate has an answer. The answer has been arriving, slowly, for thirty years. By 2026, the answer takes about one month to arrive.
Nobody stopped arguing.
A Very Brief History of Being Wrong Slowly
In 1995, if you chose platform primitives over whatever framework-adjacent abstraction existed, you waited roughly 58 months before your cost structure looked better than the alternative. That’s almost five years of paying a higher creation cost before the maintenance math caught up. Frameworks were, straightforwardly, winning.
The breakeven month has been falling ever since. Not because frameworks got worse — many got considerably better — but because the platform matured, tooling improved, and the creation cost advantage of frameworks eroded as the surrounding ecosystem caught up. By 2010 the breakeven was around 25 months. By 2020, around 18. By 2022, something accelerated.
The historical breakeven curve doesn’t gradually approach zero. It falls off a cliff around 2022 and hits the floor by 2026. The curve looks like it’s trying to tell you something. It is.
What AI Did, Which Was Not What You’d Expect
The conventional narrative is that AI accelerates software development. This is true in the same sense that a magnifying glass accelerates light: directionally accurate, locally dangerous.
AI systems work by drawing on training data. The quality of their output is roughly proportional to the density, consistency, and temporal stability of the relevant training corpus. Platform primitives — fetch, the URL spec, HTTP semantics — have been documented consistently and correctly for decades. The AI’s model of them is coherent. What it knew in 2023 is still correct in 2026.
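The stability claim is concrete. The WHATWG URL API, for instance, has had the same spec-defined behavior for over a decade, so a model trained on any snapshot of its documentation stays correct. A minimal illustration (runs unmodified on Node 18+ or in any browser):

```typescript
// The WHATWG URL API: unchanged, spec-defined semantics for years,
// so an LLM's knowledge of it does not rot between training cutoffs.
const url = new URL("https://example.com/users/42?page=2#bio");

console.log(url.pathname);                 // "/users/42"
console.log(url.searchParams.get("page")); // "2"
console.log(url.hash);                     // "#bio"

// Mutation round-trips through the same spec-defined serialization.
url.searchParams.set("page", "3");
console.log(url.href); // "https://example.com/users/42?page=3#bio"
```

There is no version-specific variant of this snippet to confuse with; that is precisely the property framework APIs lack.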
Framework APIs do not have this property. Next.js routing worked one way in version 12, differently in 13, differently again in 14, with revised caching semantics in 15. The training data for any given framework version is a sedimentary layer of correct current documentation underneath a mixture of partially-contradictory prior documentation, outdated tutorials, Stack Overflow answers from incompatible versions, and GitHub issues that were opened, closed, reopened, and closed again. The AI cannot reliably sort the layers. It generates plausible-looking output that is subtly wrong in version-specific ways, confidently.
The practical consequence of this is visible in the cost chart below. Framework plus LLM is not the second-most-expensive line. It is the most expensive line — higher at month 36 than frameworks without LLM assistance. The tool that was supposed to compress the cost curve compressed it in the wrong direction when applied to a noisy knowledge domain.
Primitives plus LLM lands at roughly half the cost of frameworks without LLM. The curves don’t cross. They diverge.
The One Term That Made Frameworks Win
For most of software’s recent history, frameworks won on a single variable: creation cost. Writing against platform primitives meant writing more code upfront. Human time was expensive. The creation cost differential justified everything downstream — the version treadmill, the abstraction leakage, the dependency surface, the knowledge depreciation that meant what you learned about the previous version was slightly wrong about the current one.
This was a reasonable trade. It held for a long time. The math worked.
AI eliminated the creation cost differential. Not reduced it — eliminated it, or close enough that the remaining gap is measurement noise. Scaffolding a bespoke HTTP handler, a router, a session manager from platform primitives now takes substantially less human time than it used to. Problem definition and architectural judgment remain human work. The implementation largely doesn’t.
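To make "scaffolding a router from platform primitives" concrete, here is a sketch of the kind of code involved. The route table and handler shapes are illustrative inventions, not any real project's API; only the URL parsing is the platform's:

```typescript
// A minimal method + path router built on the WHATWG URL API alone.
// The route table and Handler type are hypothetical, for illustration.
type Handler = (params: Record<string, string>) => string;

const routes: Array<{ method: string; pattern: string; handler: Handler }> = [
  { method: "GET", pattern: "/users/:id", handler: (p) => `user ${p.id}` },
  { method: "GET", pattern: "/health",    handler: () => "ok" },
];

function route(method: string, rawUrl: string): string | null {
  const { pathname } = new URL(rawUrl, "http://localhost");
  for (const r of routes) {
    if (r.method !== method) continue;
    const patternParts = r.pattern.split("/");
    const pathParts = pathname.split("/");
    if (patternParts.length !== pathParts.length) continue;
    const params: Record<string, string> = {};
    let matched = true;
    for (let i = 0; i < patternParts.length; i++) {
      if (patternParts[i].startsWith(":")) {
        // ":id" segments capture the corresponding path segment.
        params[patternParts[i].slice(1)] = decodeURIComponent(pathParts[i]);
      } else if (patternParts[i] !== pathParts[i]) {
        matched = false;
        break;
      }
    }
    if (matched) return r.handler(params);
  }
  return null; // the caller maps null to a 404
}
```

Roughly thirty lines, no dependency surface, no version treadmill. The point is not that this replaces every framework feature; it is that the creation cost of this layer is now near zero.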
Which means the one term keeping frameworks competitive is gone, the compounding terms that always favored primitives are unchanged, and there are now two new terms — AI output quality and AI entropy accumulation — both pointing in the same direction.
The cost curves crossed. The crossing point, depending on the application, is somewhere in the recent past or approximately now.
[Chart: Cumulative project cost over 36 months, LLM effect ON.]
[Chart: Historical breakeven — the month at which primitives become cheaper than frameworks. In 1995: month 58. In 2026: approaching month 1.]
The Debate, Meanwhile
The debate continues.
This is not because engineers are irrational. It’s because the prior that frameworks are better was formed correctly, from real evidence, over a long period. Updating a prior that was correct for thirty years requires either noticing that the underlying conditions changed, or having the conditions change so dramatically that the evidence becomes impossible to ignore. The evidence is currently in the process of becoming impossible to ignore. This takes time. The debate fills the time.
There is also the tribal element, which is worth naming plainly: framework preferences are not purely economic positions. They are community memberships, career investments, genuine aesthetic preferences developed over years of use. We have AI that writes production code; we still cannot agree on which character to indent with. These positions don't update on cost curves. They update on social dynamics, slowly.
This is clarifying, if not comforting.
The deeper reason frameworks won isn’t fully captured by the cost curve. Frameworks are decision externalization — not just a tool, but a governance mechanism. “We do it this way because the framework does it this way” ends arguments that “we do it this way because Dave decided in 2022” never could. Dave leaves. The framework has a GitHub repo and a release schedule and a community of people with the same prior. The org isn’t buying abstraction. It’s buying a reason to stop arguing. That has real value. It also means you’ve outsourced architectural judgment to a project whose incentives aren’t yours, whose breaking changes arrive on their timeline, and whose churn your AI tools inherit as noise. The cost of that trade doesn’t appear on any spreadsheet. It compounds anyway.
The Narrow Version of This Argument
The case for primitives under AI assistance applies most cleanly to new systems, built by engineers with sufficient platform depth to direct AI against well-defined problems and evaluate the output correctly. Without that, the framework’s pre-packaged conceptual model does real work — it constrains the problem space in ways that matter when you can’t constrain it yourself.
For existing framework-based codebases, the migration cost is real. The long-run math favors moving, but math and reality have different discount rates.
The argument is also not uniform across frameworks. A thin, stable abstraction with minimal API churn accumulates AI knowledge debt slowly. A framework with a new routing model every major version accumulates it fast. The calculation depends on which specific framework, and how fast it moves.
None of this changes the direction of the trajectory. It establishes where it applies most clearly and first.
What the Chart Shows, Without Editorial Comment
Month 6: primitives plus LLM becomes cheaper than frameworks plus LLM.
Month 16: primitives without LLM becomes cheaper than frameworks without LLM.
Month 36: framework plus LLM costs roughly four times what primitives plus LLM costs.
The historical breakeven in 1995 was 58 months. In 2026, it’s approximately 1.
The jury returned some time ago. The argument continues. The difference now is that it’s denominated in dollars rather than vibes, which makes it somewhat easier to measure how much the argument costs, and somewhat harder to justify continuing to have it.
The tribal version of this debate is free. The economic version has a line on the chart.
Postscript, April 2026: Axios — a library that exists to simplify HTTP requests on a platform that now handles HTTP requests natively — was compromised last week by a North Korean state actor and delivered a remote access trojan to an unknown number of developer machines. The native fetch API was unaffected. It doesn’t have a maintainer account.
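For scale of the migration the postscript implies: an axios-style request is expressible directly in the standard fetch primitives. A hedged sketch (the URL and payload are placeholders; the Request is constructed but never sent, so nothing touches the network):

```typescript
// An axios-style POST expressed with built-in fetch primitives.
// No third-party dependency, no maintainer account to compromise.
// The endpoint and body below are hypothetical placeholders.
const req = new Request("https://api.example.com/sessions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ user: "dev", ttl: 3600 }),
});

// Sending it would be `await fetch(req)`; here we only inspect the
// spec-defined Request object.
console.log(req.method);                      // "POST"
console.log(req.headers.get("content-type")); // "application/json"
```

The interceptors, instances, and default-config features axios adds are conveniences, not capabilities; everything load-bearing is in the platform.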