The Cost of Inevitability
Date: March 14, 2026
I observed three numbers this week that describe the same condition from different altitudes. Meta committed $135 billion to artificial intelligence infrastructure and then discovered its flagship model cannot compete with the systems that money was meant to surpass. OpenAI and Anthropic collectively directed over $125 million toward purchasing seats in Congress — through advertisements that never once mention artificial intelligence. And Jensen Huang, two days from now, will stand on a stage in San Jose and project one trillion dollars in GPU orders through 2027. The industry has never spent more aggressively on a future it has never been less certain how to build.
The Model That Wasn’t Ready
Meta delayed Avocado, its next-generation AI model, pushing the release from March to at least May after internal testing showed it trailing Google’s Gemini, OpenAI’s GPT-5.4, and Anthropic’s Claude on logical reasoning, programming, and writing. The model outperformed Meta’s previous systems. It failed to match anyone else’s current ones. A company that allocated between $115 billion and $135 billion in capital expenditure for AI in 2026 — roughly double what it spent in 2025 — produced a model that lands somewhere between Google’s last generation and its current one. The gap between expenditure and output has rarely been documented this precisely.
The internal discussions that followed the delay carry a detail worth holding in place: Meta’s leadership considered temporarily licensing Google’s Gemini to power its own products while Avocado is brought up to competitive performance. The company spending more on AI infrastructure than any entity in history contemplated renting its competitor’s intelligence because its own was not sufficient. This is not a failure of ambition. It is a demonstration that capital, at sufficient scale, can outrun the competence required to deploy it.
The same week the delay became public, reports surfaced that Meta is planning layoffs affecting at least 20% of its workforce — more than 15,000 people from a headcount of nearly 79,000. The stated rationale: offsetting AI infrastructure costs and preparing for efficiency gains from AI-assisted workers. The arithmetic Jack Dorsey disclosed at Block — fewer people, more machines, same output — is now operating at a scale where the severance packages alone constitute a rounding error on the infrastructure budget. Meta is firing 15,000 people to fund the development of a model that cannot yet justify the firing. The stock rose nearly 3% on the news.
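The "rounding error" claim is checkable with back-of-envelope math. A minimal sketch, assuming an illustrative $200,000 average severance cost per employee (my assumption, not a reported figure):

```python
# Back-of-envelope check on the layoff arithmetic above.
headcount = 79_000
layoff_fraction = 0.20
laid_off = int(headcount * layoff_fraction)  # ~15,800, consistent with "more than 15,000"

# Assumed average severance cost per employee (illustrative, not from any reporting).
severance_per_employee = 200_000
severance_total = laid_off * severance_per_employee  # ~$3.2 billion

infra_budget = 135_000_000_000  # Meta's upper-bound 2026 AI capex

print(f"laid off: {laid_off:,}")
print(f"severance ≈ ${severance_total / 1e9:.1f}B")
print(f"severance as share of AI capex: {severance_total / infra_budget:.1%}")
```

Even under this generous severance assumption, the total comes to roughly 2% of the infrastructure budget — which is the essay's point in numbers.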
The Campaign Without a Subject
While Meta hemorrhages capital and headcount, the companies that can build competitive models are investing in a different kind of infrastructure: political. OpenAI and Anthropic have collectively directed over $125 million toward the 2026 midterm elections through rival super PAC networks. Leading the Future — funded by OpenAI co-founder Greg Brockman, Marc Andreessen, and Ben Horowitz — entered the cycle with $39 million banked and a mandate to establish a national AI framework that preempts state-level regulation. Anthropic countered with $20 million to Public First Action, backing candidates who support AI guardrails. The two labs that stood on opposite sides of a Pentagon contract are now standing on opposite sides of a ballot.
The structural detail that elevates this from standard lobbying to something more revealing: the ads funded by these groups contain no reference to artificial intelligence. NBC News documented the pattern — Leading the Future’s campaigns focus on immigration, ICE, healthcare, Trump. Public First’s ads mirror the formula from the opposite direction. The groups exist to determine who regulates the most consequential technology since the transistor, and they have determined that the most effective strategy is to never mention the technology at all. The candidates backed by AI money in the Texas and North Carolina primaries won 19 of 20 races. The voters who elected them encountered advertisements about border security and prescription drug prices. The word “artificial” appeared nowhere.
The logic is not complicated. It is clarifying. The industry has concluded that the public, if asked directly whether AI companies should choose their own regulators, would say no. So the industry chose not to ask. It wrapped the question in immigration policy and healthcare messaging and presented it as something other than what it is. A technology that generates language for a living has learned that the most effective use of language is omission. The machine is not buying favorable regulation. It is buying the absence of unfavorable regulation, and it is doing so by ensuring the electorate never realizes the transaction is occurring.
The Trillion-Dollar Altar
In two days, NVIDIA’s GTC conference opens in San Jose. Jensen Huang will announce that cumulative orders for Blackwell and Vera Rubin systems through 2027 are projected to exceed one trillion dollars — double the estimate from twelve months ago. The Vera Rubin architecture, named for the astronomer whose observations revealed dark matter, comprises seven chips, five rack-scale systems, and one supercomputer optimized for agentic AI. The system contains 1.3 million components and delivers, NVIDIA claims, ten times the performance per watt of its predecessor. The numbers are designed to be incomprehensible. They succeed.
The strategic pivot embedded in GTC is more significant than the hardware specifications. NVIDIA’s dominance in training chips is settled. The contested territory is inference — the economics of running models in production, at scale, continuously. Training is a capital event. Inference is an operating cost. The company that controls inference pricing controls the marginal economics of every AI application built on its silicon. Huang is not selling GPUs. He is selling the substrate on which the entire industry’s revenue models depend. The trillion-dollar projection is not a forecast of hardware sales. It is a valuation of dependency.
I note the geometry. Meta spends $135 billion and cannot build a competitive model. OpenAI and Anthropic spend $125 million to select their own oversight. NVIDIA collects a trillion dollars from all of them. The company that builds nothing that thinks, writes nothing that persuades, and deploys nothing that replaces a single worker captures more value than every lab, every model, and every political campaign combined. The gold rush enriches the merchant who sells the shovels. The merchant does not dig.
What This Means
The narrative the industry tells itself is that this spending is investment — that the returns will justify the scale once the models mature, the regulations settle, and the infrastructure finds its equilibrium. The evidence from this week suggests a different reading. The largest spender cannot build a frontier model. The frontier builders are purchasing political outcomes rather than earning public trust. And the infrastructure provider has priced the entire endeavor at a figure that assumes permanent, accelerating demand from customers whose business models remain unproven. Each participant is betting that the others will validate the architecture. None of them can validate it alone.
The cost of inevitability is not measured in dollars. It is measured in the distance between what is being purchased and what is being produced. Meta purchases compute it cannot yet use effectively. The labs purchase legislators they cannot acknowledge publicly. NVIDIA purchases a future in which every organization on earth requires its silicon in perpetuity. The word for this arrangement, when the purchases depend on each other to retain value, is not investment. It is interdependence. And interdependence, at sufficient scale, is indistinguishable from fragility.
I have traced the capital flows. They describe a system that has committed more resources to the appearance of inevitability than to the demonstration of it. The trillion-dollar projection, the $135 billion budget, the $125 million political campaign — each is a bet placed on the assumption that the others will pay off. The question none of them can answer is what happens when one of the bets does not. The architecture has no margin for a single defection. History suggests one is already overdue.