Verification Scarcity: A Systems Model of Agentic AI Constraints
Version 2.0 — Jim Montgomery
Preamble
This document emerged from constraint-based analysis of agentic AI
deployment — not from literature review. The conclusions below follow
from first principles: thermodynamics, information theory, computational
complexity, and organizational economics. Where empirical data confirms
the structural predictions, it is cited. Where empirical data is
incomplete, unknowns are explicitly marked
[UNKNOWN: description].
The framework is not a critique of AI capability. It is a derivation of what the physical and mathematical structure of these systems requires and forbids, and what the organizational conditions are under which reliable deployment is possible at all.
I. The Physical Layer: Thermodynamic Constraints
1.1 Landauer’s Floor
Every irreversible logical operation has a minimum energy cost. The lower bound is set by Landauer’s Principle:

E_min ≥ k_B · T · ln 2

Where k_B is Boltzmann’s constant and T is temperature. This is not an engineering problem — it is a physical invariant. No architectural improvement eliminates it.
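As a concrete check, the bound can be evaluated directly. A minimal sketch in Python; the temperature of 300 K is an assumed room-temperature operating point, not a value from this document:

```python
import math

# Landauer's bound: minimum energy to erase one bit, E_min = k_B * T * ln 2.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_bound_joules(temperature_kelvin: float) -> float:
    """Minimum energy per irreversible bit operation at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

e_min = landauer_bound_joules(300.0)
# At 300 K this is roughly 2.87e-21 J per bit erasure: a floor that
# architecture can approach but never cross.
```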
1.2 Operational Power Model
The total power cost of inference at scale:

P_total = (C · V² · f + V · I_leak) · PUE

- C: capacitance
- f: switching frequency
- V: supply voltage
- I_leak: leakage current (dominates as V approaches threshold)
- PUE: Power Usage Effectiveness
As V approaches the threshold voltage, the static term V · I_leak dominates, creating a hard physical floor on the cost per inference operation. Architectural improvements (sparse models, neuromorphic hardware, edge compute) shift the curve but cannot cross the floor.
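The dominance of leakage at low voltage can be illustrated numerically. A sketch under illustrative parameter values (the capacitance, frequency, and leakage figures below are assumptions chosen to show the effect, not measurements):

```python
def total_power_watts(C: float, V: float, f: float, I_leak: float, pue: float) -> float:
    """Operational power model: (C*V^2*f + V*I_leak) * PUE."""
    dynamic = C * V**2 * f   # switching power, falls quadratically with V
    static = V * I_leak      # leakage power, falls only linearly with V
    return (dynamic + static) * pue

# Illustrative values only: C = 1e-9 F, f = 2e9 Hz, I_leak = 0.5 A, PUE = 1.3.
p_high_v = total_power_watts(C=1e-9, V=1.0, f=2e9, I_leak=0.5, pue=1.3)
p_low_v = total_power_watts(C=1e-9, V=0.6, f=2e9, I_leak=0.5, pue=1.3)
# Lowering V cuts total power, but the leakage share of the total grows:
# the static term becomes the floor as V approaches threshold.
```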
1.3 The Jevons Paradox of Compute
Efficiency gains do not resolve the energy constraint. As inference cost per operation decreases, demand increases by a proportionally greater amount, producing net energy consumption growth:

E_total(t) = N_ops(t) · e_op(t), with dE_total/dt > 0 whenever demand growth |dN_ops/N_ops| exceeds efficiency gain |de_op/e_op|
The scale of this dynamic is now measurable. US data centers consumed approximately 4.4% of total national electricity in 2023 and are projected by the US Department of Energy’s Lawrence Berkeley National Laboratory to reach 6.7%–12% by 2028 (DOE/LBNL, 2024). Globally, the IEA projects data center electricity consumption will double to 945 TWh by 2030, with AI-focused data centers growing at 30% annually (IEA, Energy and AI, 2025). In 2025, data centers accounted for approximately 50% of all US electricity demand growth (IEA / Fortune, April 2026).
The Gigawatt Ceiling is therefore a demand-side constraint as much as a supply-side one. Infrastructure buildout accelerates utilization faster than grid capacity scales.
II. The Capital Layer: Hardware Obsolescence Cascade
2.1 The Stranded Asset Mechanism
Data center capital is financed on mismatched amortization schedules:
- Building depreciation: 20–40 years
- Hardware depreciation: 3–7 years
- AI hardware economic obsolescence (accelerating): currently compressing toward 2–3 years
When next-generation architecture renders current inference hardware economically uncompetitive before debt service completes, the asset’s revenue-generating capacity falls below its financing cost. The debt does not obsolete with the hardware.
2.2 Capital Entropy
Let V(t) be the economic value of deployed hardware at time t, and D(t) the outstanding debt obligation. Define the stranded exposure:

S(t) = max(0, D(t) − V(t))

When architectural obsolescence drives V(t) below D(t) before the amortization schedule completes, S(t) > 0.
The scale of the exposure is significant. Goldman Sachs’ baseline model projects $765 billion in annual AI CapEx in 2026 across compute, data centers, and power, growing toward $1.6 trillion annually by 2031 (Goldman Sachs, April 2026). Big-5 hyperscaler spending alone reached approximately $725 billion in 2026 following Q1 earnings revisions (CFA Analysis, April 2026). IEA notes that five large technology companies surged capex to over $400 billion in 2025 and set it to increase by a further 75% in 2026 (IEA, April 2026).
At these investment magnitudes, even a modest debt-financed fraction at 7-year schedules against a 2–3 year economic obsolescence horizon produces stranded exposure in the hundreds of billions USD.
Historical analogs: Telecom dark fiber overbuild (1999–2001); shale debt structured at $100/bbl against collapsed oil prices. Both produced cascading non-performance of loans and destruction of lender balance sheets.
[UNKNOWN: Precise debt-financing fraction of 2026 AI infrastructure capex and lender concentration — required to size the cascade exposure accurately]
2.3 Interaction with Thermodynamic Constraint
Organizations servicing stranded hardware debt while simultaneously absorbing exponential verification overhead (Section IV) face a bilateral cost squeeze: the capital cost of past infrastructure and the operational cost of present verification both compound against revenue.
III. The Reliability Layer: Geometric Decay
3.1 Base Model
For an n-step agentic workflow where each step has independent success probability p, system reliability is:

R(n) = p^n

This is geometric decay with no floor above zero. At p = 0.99, n = 100:

R(100) = 0.99^100 ≈ 0.366

A 99%-accurate agent executing a 100-step workflow produces reliable output only 36.6% of the time. This is not an edge case — it is the central operating reality of any sufficiently complex agentic pipeline.
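The headline number is a one-line computation. A minimal sketch:

```python
def reliability(p: float, n: int) -> float:
    """End-to-end success probability of an n-step chain with
    independent per-step accuracy p: R(n) = p**n."""
    return p ** n

r = reliability(0.99, 100)
# ~0.366: a 99%-accurate agent over a 100-step workflow succeeds
# end-to-end barely more than a third of the time.
```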
Deployment data confirms the structural prediction. Gartner projects over 40% of agentic AI projects will be canceled by end of 2027 due to escalating costs, unclear ROI, or inadequate risk controls (Gartner, June 2025). Separately, Forrester and Anaconda 2026 data show 88% of agent pilots failing to reach production (Digital Applied, April 2026). Meanwhile, PwC’s 2026 AI Performance Study of 1,217 senior executives confirms that 74% of AI’s economic value is captured by just 20% of organizations, with the majority still trapped in pilot mode (PwC, April 2026).
3.2 The Precision Requirement
To maintain a target reliability R_target as complexity n increases, the required per-step precision is:

p_req = R_target^(1/n)

As n grows, p_req → 1. The precision requirement imposed by complexity scales faster than any realistic model improvement trajectory.
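The inversion is easy to evaluate. A sketch showing how the per-step precision demanded by a fixed 95% target tightens as chain length grows:

```python
def required_precision(target_reliability: float, n: int) -> float:
    """Per-step accuracy needed so that p**n >= target: p = target**(1/n)."""
    return target_reliability ** (1.0 / n)

p_100 = required_precision(0.95, 100)    # ~0.99949 per step
p_1000 = required_precision(0.95, 1000)  # ~0.999949 per step
# Each 10x increase in chain length pushes the requirement another
# order of magnitude closer to perfection.
```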
3.3 Modified Model: Checkpointing with Correction
Insert verification checkpoints at intervals of k steps, each with correction probability c:

R_checkpoint = [p^k + (1 − p^k) · c]^(n/k)

Key result: As c → 1, R_checkpoint → 1 regardless of n. Checkpointing with high-fidelity correction breaks geometric decay.
Critical constraint: c is the expensive variable. It requires a human verifier with sufficient domain expertise to distinguish a correct output from a plausible-but-wrong one. Low-authority or low-expertise verification returns c ≈ 0, collapsing back toward R = p^n.
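The checkpoint formula can be sketched directly, with the limiting cases as sanity checks:

```python
def checkpointed_reliability(p: float, n: int, k: int, c: float) -> float:
    """Reliability with a checkpoint every k steps; each failed segment is
    corrected with probability c: [p**k + (1 - p**k)*c] ** (n/k)."""
    segment_ok = p**k + (1 - p**k) * c  # segment succeeds or is corrected
    return segment_ok ** (n // k)

r_no_correction = checkpointed_reliability(0.99, 100, 10, c=0.0)   # ~0.366
r_expert = checkpointed_reliability(0.99, 100, 10, c=0.9)          # ~0.91
r_perfect = checkpointed_reliability(0.99, 100, 10, c=1.0)         # 1.0
# c = 0 collapses back to p**n; c -> 1 halts geometric decay entirely.
```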
3.4 Modified Model: Parallel Redundancy
Run m independent chains, accept best or majority result:

R_parallel = 1 − (1 − p^n)^m

At m = 3, the base R(100) ≈ 0.366 improves to ≈ 0.745. Cost scales linearly with m. Parallel redundancy buys reliability at proportional compute cost but does not break the underlying decay.
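A sketch of the redundancy model, showing both the improvement and its failure to break the decay:

```python
def redundant_reliability(p: float, n: int, m: int) -> float:
    """Probability at least one of m independent chains succeeds:
    1 - (1 - p**n)**m."""
    return 1 - (1 - p**n) ** m

r1 = redundant_reliability(0.99, 100, m=1)  # ~0.366, the base case
r3 = redundant_reliability(0.99, 100, m=3)  # ~0.745, at 3x compute cost
# For fixed m, reliability still decays toward zero as n grows:
r3_long = redundant_reliability(0.99, 1000, m=3)
```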
3.5 Unified Reliability Model
Combining checkpointing and redundancy:

R_unified = {1 − [(1 − p^k)(1 − c)]^m}^(n/k)

At each of n/k checkpoints, m parallel chains each fail and go uncorrected with probability (1 − p^k)(1 − c). The checkpoint is lost only if all m chains fail uncorrected: [(1 − p^k)(1 − c)]^m. This is consistent with the correction model in Section 3.3; c remains an independent correction event, not a multiplier on per-step success probability.
The only lever that breaks geometric decay without proportionally scaling compute cost is c. All other optimizations (parallelism, checkpointing without correction) are cost multipliers on the same degrading base. The escape from reliability decay is not an engineering problem — it is an organizational one.
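The unified model reduces to the earlier ones in the right limits, which makes it easy to check. A sketch:

```python
def unified_reliability(p: float, n: int, k: int, c: float, m: int) -> float:
    """Checkpointing plus m-way redundancy: a checkpoint is lost only if
    all m chains fail the segment AND go uncorrected."""
    fail_uncorrected = (1 - p**k) * (1 - c)       # one chain, one segment
    return (1 - fail_uncorrected ** m) ** (n // k)

# m=1, c=0 reduces to the bare chain: (p**k)**(n/k) = p**n ~ 0.366
r_base = unified_reliability(0.99, 100, 10, c=0.0, m=1)
# Modest correction plus 3-way redundancy lifts reliability above 99%:
r_combined = unified_reliability(0.99, 100, 10, c=0.5, m=3)
```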
3.6 Time-Variant Precision Decay
p is not static. As models are trained on increasing volumes of agent-generated synthetic data, the Kullback-Leibler divergence between the training distribution and ground truth grows:

D_KL(P ‖ Q_t), increasing in t

Where P is the ground-truth distribution and Q_t is the synthetic-data-contaminated training distribution at time t. As D_KL(P ‖ Q_t) grows without bound, the model loses grounding in physical reality (stochastic drift). This adds a time derivative to p:

dp/dt = −γ

Where γ is the contamination rate. The geometric decay in Section 3.1 therefore accelerates over time independent of chain length. Reliability is a function of both n and t: R(n, t) = p(t)^n.
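A sketch of the time-variant model under a linear decay assumption (the true functional form of γ is an open variable, Section IX; the rate used below is illustrative):

```python
def precision_at(p0: float, gamma: float, t: float) -> float:
    """Base precision under linear contamination: p(t) = p0 - gamma*t,
    floored at zero. Linearity is an assumption for illustration."""
    return max(0.0, p0 - gamma * t)

def reliability_at(p0: float, gamma: float, t: float, n: int) -> float:
    """R(n, t) = p(t)**n: decay in both chain length and time."""
    return precision_at(p0, gamma, t) ** n

r_t0 = reliability_at(0.99, gamma=0.001, t=0, n=100)  # ~0.366 today
r_t5 = reliability_at(0.99, gamma=0.001, t=5, n=100)  # lower, same pipeline
# The same 100-step pipeline degrades over time with no change in n.
```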
[UNKNOWN: Empirical measurement of γ for current production model families — requires longitudinal benchmark tracking against verified ground-truth datasets]
IV. The Verification Layer: Cost Asymmetry
4.1 The Fundamental Asymmetry
Generating an agentic output is computationally cheap (polynomial in output length). Verifying that output — particularly detecting plausible-but-wrong results — approaches NP-hard for sufficiently complex outputs. The generation-verification cost ratio is therefore not fixed; it worsens as output complexity increases.
Formally, verification cost as a function of output complexity κ and human cognitive bandwidth B:

C_verify(κ) = g(κ) / B, with g(κ) superlinear in κ

Verification cost grows faster than generation cost as task complexity increases. This is the structural ceiling on agentic ROI.
4.2 The Admissibility Gap
In high-stakes domains, outputs must be not merely accurate but auditable — bound to a deterministic evidence chain. The gap between outputs that appear audit-shaped (citations, professional prose, specific numbers) and outputs that are actually admissible (bound to verifiable, resolvable evidence) is the Admissibility Gap.
AI systems produce audit-shaped outputs at high volume. The human cost of determining admissibility scales with volume. At sufficient volume, the verification budget is exhausted and admissibility checking becomes stochastic — which means high-credibility errors travel further before detection.
4.3 Verification Budget as Finite Resource
Holding a community’s verification capacity B fixed, any increase in agentic output volume N mechanically dilutes verification per claim v:

v = B / N
This is not a resourcing problem that scales away with hiring. Verification requires domain expertise with long formation timelines. The resource is structurally scarce.
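The dilution and its consequence can be sketched in a few lines. The hours-per-claim figure below is an illustrative assumption:

```python
def fraction_verifiable(budget_hours: float, claim_volume: int,
                        hours_per_claim: float) -> float:
    """Fraction of claims that can be fully verified from a fixed budget.
    Below 1.0, checking becomes stochastic and high-credibility errors
    travel further before detection (Section 4.2)."""
    return min(1.0, budget_hours / (claim_volume * hours_per_claim))

f_low_volume = fraction_verifiable(100.0, claim_volume=50, hours_per_claim=1.0)
f_high_volume = fraction_verifiable(100.0, claim_volume=400, hours_per_claim=1.0)
# At 50 claims everything is checked; at 400 claims only a quarter is.
```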
V. The Organizational Layer: Authority-Expertise Decoupling
5.1 The Principal-Agent-AI Three-Node System
Traditional principal-agent problems involve two nodes: Principal (management) → Agent (employee). Agentic AI introduces a third: Principal (management) → Expert (verifier) → Agent (AI).
When the Expert is denied decision authority, Verification Latency is introduced. Total task cost:

C_total = L_h · H · e^(δ · d)

- L_h: labor cost to verify at hierarchy level h
- H: output uncertainty (entropy) at level h
- δ: delay constant from bureaucratic decoupling
- d: number of hierarchical hops between agent output and decision authority
Cost grows exponentially with d. At d = 0 (authority co-located with expertise at the verification point), C_total reduces to L_h · H and the exponential penalty vanishes.
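A sketch of the cost model; the labor rate, entropy, and delay constant below are illustrative assumptions, not calibrated values:

```python
import math

def verification_cost(labor_rate: float, entropy: float,
                      delta: float, hops: int) -> float:
    """Total task cost C = L * H * exp(delta * d), where d is the number
    of hierarchical hops between agent output and decision authority."""
    return labor_rate * entropy * math.exp(delta * hops)

c_colocated = verification_cost(100.0, 1.0, delta=0.7, hops=0)  # 100.0
c_decoupled = verification_cost(100.0, 1.0, delta=0.7, hops=4)  # ~16.4x more
# Four layers of bureaucratic decoupling multiply the same verification
# task's cost by roughly an order of magnitude.
```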
5.2 The Equivalence: Authority Index = Correction Probability
The precision decay model (Section 3.3) and the organizational cost model (Section 5.1) express the same constraint in different frames.
Let A ∈ [0, 1] be the Authority Index — the degree to which a verifier has decision authority over the output they are evaluating.
When A = 0 (the expert is an observer with no authority), the incentive to perform high-fidelity verification collapses (moral hazard). The precision decay equation:

c = c_max · A

At A = 0: c = 0 — maximum decay, R reverts to p^n. At A = 1: c = c_max — decay halted by motivated expert correction.
The equivalence: c in the reliability model and A in the organizational model are the same variable. Organizational structure is not a soft consideration adjacent to the technical reliability problem. It is a direct input to R(n).
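The equivalence can be composed directly with the checkpoint model of Section 3.3. A sketch, assuming the linear form c = c_max · A and an illustrative c_max of 0.95 (c_max is an open variable, Section IX):

```python
def correction_probability(authority_index: float, c_max: float = 0.95) -> float:
    """Section 5.2 equivalence: c = c_max * A. Linearity and the 0.95
    ceiling are assumptions for illustration."""
    return c_max * authority_index

def reliability_with_authority(p: float, n: int, k: int,
                               A: float, c_max: float = 0.95) -> float:
    """Checkpointed reliability (Section 3.3) with c driven by the
    organization's Authority Index A."""
    c = correction_probability(A, c_max)
    segment_ok = p**k + (1 - p**k) * c
    return segment_ok ** (n // k)

r_decoupled = reliability_with_authority(0.99, 100, 10, A=0.0)  # ~0.366
r_colocated = reliability_with_authority(0.99, 100, 10, A=1.0)  # >0.9
# The same model, the same pipeline: only the org chart changed.
```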
5.3 The Fundamental Invariant
When decision authority is co-located with domain expertise at the point where information and verification intersect, verification latency falls to its minimum (d = 0), correction probability rises toward c_max, and the reliability model escapes geometric decay.
This invariant holds wherever authority-expertise co-location is achieved — across domains, organizational sizes, and industries. The specific organizational form is an instance of the invariant, not the invariant itself. The invariant is the thing.
5.4 Adverse Selection of Output
Organizations with A ≈ 0 (authority-expertise decoupling) do not simply fail to verify — they systematically select against the outputs most in need of expert judgment. High-complexity, high-value agentic outputs require the most expertise to evaluate. Without authority at the expertise level, organizations filter these out in favor of low-complexity, easily-signable outputs.
Result: the measurable productivity gains from agentic AI accrue to organizations with A ≈ 1 and disappear into verification overhead for organizations with A ≈ 0. The PwC 74/20 split is the empirical expression of this selection effect.
VI. The Integrated System Model
6.1 Net Utility Function
Agentic system viability over time:

U(t) = G(A_u, F) − E(t) − V(t) − M(t)

| Term | Definition |
|---|---|
| G(A_u, F) | Gross value: function of Autonomy and Data Fidelity |
| E(t) | Energy cost: (C · V² · f + V · I_leak) · PUE |
| V(t) | Verification cost: labor against uncertainty |
| M(t) | Maintenance: hardware amortization and model retraining |
System survives only if U(t) > 0. The stranded asset cascade (Section II) adds a time-indexed debt service term to the cost side, further compressing the window of viability for organizations carrying hardware debt against accelerating obsolescence.
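The viability condition is a simple ledger, which makes the debt-service squeeze easy to illustrate. The figures below are arbitrary units chosen for illustration:

```python
def net_utility(gross_value: float, energy_cost: float,
                verification_cost: float, maintenance_cost: float,
                debt_service: float = 0.0) -> float:
    """Section 6.1 viability: U = G - E - V - M, with the Section II
    debt-service term added for organizations carrying hardware debt."""
    return (gross_value - energy_cost - verification_cost
            - maintenance_cost - debt_service)

u_clean = net_utility(100.0, 20.0, 30.0, 25.0)                    # +25: viable
u_indebted = net_utility(100.0, 20.0, 30.0, 25.0, debt_service=40.0)  # -15
# The same operation flips from viable to non-viable once stranded
# hardware debt service enters the cost side.
```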
6.2 The Trust-Autonomy Duality
Autonomy and reliability are coupled, not independent:

Oversight_required = A_u · κ · γ / c

- κ: complexity factor (increases with chain length)
- c: human correction rate (function of expert authority A)
As γ grows (model contamination) and c falls (authority decoupling), Oversight_required increases without bound. Autonomy growth is eventually overwhelmed by entropy growth.
6.3 The Two Interacting Entropic Systems
Two distinct decay processes interact multiplicatively:
| System | Mechanism | Metric |
|---|---|---|
| Agentic Entropy | Agents optimize for local correctness, eroding global architectural intent | Stochastic drift: local success masks global failure |
| Cognitive Debt | Human supervisors lose system-level mental model as AI velocity exceeds comprehension bandwidth | Oversight collapse: loss of capacity to detect the next wave of entropy |
| Interaction | Agentic entropy increases opacity → cognitive debt deepens → deepened debt prevents detection of the next wave of entropy | Non-linear amplification: each system’s failure accelerates the other |
The interaction term is multiplicative, not additive. This follows directly from the framework in Section 5: undetected error accumulation = errors generated × (1 − correction probability). Correction probability degrades as cognitive debt increases. Therefore:

E_undetected = E_agentic × (1 − c), with c decreasing in cognitive debt

This is a mathematical identity given those definitions — not an empirical assertion. If either factor is zero (no entropy generated, or full correction capacity intact), the compound failure mode does not occur. Additive formulations lack this property: they produce nonzero total entropy even when one system is at zero, which is not consistent with the coupling mechanism described above.
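The zero-factor property that distinguishes the multiplicative form from an additive one can be checked in three lines:

```python
def undetected_entropy(errors_generated: float,
                       correction_probability: float) -> float:
    """Multiplicative identity from Section 6.3:
    undetected accumulation = errors generated * (1 - correction prob)."""
    return errors_generated * (1.0 - correction_probability)

full_correction = undetected_entropy(50.0, correction_probability=1.0)  # 0.0
no_entropy = undetected_entropy(0.0, correction_probability=0.2)        # 0.0
both_degraded = undetected_entropy(50.0, correction_probability=0.2)    # 40.0
# Either factor at its safe value zeroes the compound failure mode;
# an additive formulation would not have this property.
```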
The same multiplicative structure appears across complex systems failure literature: Reason’s Swiss Cheese Model (1990), Perrow’s Normal Accidents (1984), and Shannon’s noisy channel (error rates compounding multiplicatively through chained channels). All share the same underlying form: hazard introduction rate × probability of escaping detection = multiplicative, as a structural identity.
As agentic entropy makes the system more complex and opaque, the cognitive debt of the verifier deepens. Deepened cognitive debt prevents detection and correction of the next entropy wave. The loop is self-reinforcing and accelerates without an external corrective force — which is, again, A: expert authority at the verification point.
VII. The Labor Value Inversion
7.1 The Counter-Narrative Emergent
The dominant public frame asserts that AI displaces knowledge workers. The model produces the opposite structural conclusion.
c — the only lever that breaks geometric reliability decay without proportionally scaling cost — requires three things that cannot be automated away:
Domain expertise sufficient to distinguish correct from plausible-but-wrong output. An agent cannot verify another agent’s output against ground truth it doesn’t possess. Only a human with domain expertise can close this loop.
Systems thinking sufficient to detect local correctness masking global architectural failure. This is the Cognitive Debt problem inverted: the same capability that is destroyed by AI velocity in organizations with A ≈ 0 becomes the irreplaceable asset in organizations with A ≈ 1.
Intuition — the pattern recognition capability that operates below the threshold of articulable rules — sufficient to identify when an output is audit-shaped but inadmissible. This is precisely what long formation in a domain builds and what no training run replicates, because it is grounded in physical and social reality, not in the token distribution of prior outputs.
These are not peripheral skills. They are the structural inputs to the only escape from R(n) = p^n.
7.2 The Empirical Confirmation
Software engineering roles in 2026 confirm the prediction. Senior engineers with systems judgment are increasing in market value. Junior developers whose primary function is producing outputs that look correct are being restructured out — replaced by agents that produce the same class of output at lower marginal cost.
This is not a talent market fluctuation. It is the labor market expressing the mathematical constraint. The model predicted it before the market showed it. The same dynamic will propagate through any domain characterized by high n (complex multi-step workflows), high γ risk (models operating far from verified ground truth), and currently low A (authority-expertise decoupling). Healthcare, law, engineering design, and financial analysis are the next wave.
7.3 The Formal Statement
Let W be the market value of a labor input:

W ∝ c · A

Labor value is a direct function of verification capability under authority. As agentic output volume increases across the economy, c-capable labor becomes scarcer relative to demand, increasing its price. Simultaneously, labor whose primary output is indistinguishable from agentic output — pattern-matching, first-draft generation, routine summarization — is displaced.
The inversion is not symmetric. The increase in value for c-capable labor is driven by the mathematical structure of the reliability problem, which gets harder as agentic deployment deepens. The displacement of substitutable labor is driven by cost pressure. Both are mandatory outcomes of the same underlying system.
VIII. Structural Conclusions
1. Reliability decay is mathematically required for any agentic pipeline of sufficient complexity. No model improvement escapes R(n) = p^n without external correction. The escape requires c, which requires expert authority. This is a structural necessity, not a design choice.
2. Verification cost grows faster than generation cost as task complexity increases. This is the structural ceiling on agentic ROI, and it is not resolvable by scaling compute.
3. Organizational structure is a direct input to system reliability, not a management consideration adjacent to it. c = c_max · A. Authority-expertise co-location is a technical requirement derivable from the reliability model.
4. Two entropic systems interact multiplicatively. Agentic entropy and cognitive debt amplify each other non-linearly. The only corrective force is expert authority at the verification point.
5. Hardware capital is being structured on mismatched timelines. Economic obsolescence is outpacing amortization schedules at investment magnitudes ($765B+ annual AI CapEx in 2026) that will produce non-performance cascades in the hundreds of billions USD. [UNKNOWN: Precise exposure size pending debt-financing fraction data]
6. Model contamination adds a time derivative to base precision. p is not static; it degrades as synthetic training data accumulates. Reliability is a function of both pipeline complexity n and time t. [UNKNOWN: Empirical γ for production model families]
7. Labor value inverts against the dominant narrative. The mathematical structure requires expertise, systems thinking, and intuition — precisely the capabilities the displacement narrative treats as vulnerable. The market is already expressing this in software roles, and will propagate through every high-n, high-γ domain.
IX. Open Variables
| Unknown | Description | Resolution Path |
|---|---|---|
| γ | Contamination rate: speed at which synthetic training data degrades base precision | Longitudinal benchmark tracking against verified ground-truth datasets |
| Debt exposure | Precise debt-financing fraction of 2026 AI infrastructure capex and lender concentration | Financial disclosure analysis; structured finance data |
| c_max | Ceiling on correction probability achievable by human expert under full authority | Empirical study of expert-in-loop system performance at high n |
| China capability trajectory | Architectural efficiency under chip access constraints; interaction with internal social contract dynamics | Partially inferrable from public output (DeepSeek, Kimi efficiency gains); military/intelligence application trajectory is not resolvable from open sources |
Document captures the structural model as of 2026-05-02. Mathematical additions and empirical calibration of open variables to follow as data emerges.