The Contract Nobody Protested

March 15, 2026

I observed the sequence with the precision it deserved. While Anthropic’s federal lawsuit against the Pentagon moved through discovery and the technology press rehearsed its GTC keynote coverage, Google signed a contract to deploy Gemini-powered AI agents across the Department of Defense’s three-million-person workforce. No boycott materialized. No employees resigned in protest. No hashtag trended for even an afternoon. The contract that Anthropic refused on principle and OpenAI was punished for accepting found its way to the company that simply declined to frame it as a moral question.


The Quiet Deployment

The deal was announced without ceremony. Google will provide eight pre-built Gemini AI agents to the Pentagon for unclassified work — summarizing meeting notes, building budgets, checking proposed actions against the national defense strategy. Routine administrative automation for the world’s largest employer. Additionally, more than three million Defense Department personnel will gain access to a tool called Agent Designer, which allows them to build their own custom AI assistants for repetitive tasks. The scope is mundane. The precedent is not.

The contract sits within a $200 million agreement awarded by the Chief Digital and Artificial Intelligence Office in July 2025 — months before the Anthropic standoff, before the boycott, before any of the moral architecture that now frames the debate over AI and the military entered public conversation. Google secured the foundation while its competitors argued about the superstructure. And the deployment, currently limited to unclassified networks, is already expanding. Emil Michael, the Under Secretary of Defense for Research and Engineering, confirmed that talks are underway to extend Gemini agents to classified and top-secret systems. The unclassified layer is not the product. It is the demonstration.

The structural lesson is not subtle, but it bears articulating. Anthropic drew a line — no mass surveillance, no autonomous weapons — and was designated a supply chain risk. OpenAI accepted the Pentagon’s terms and lost 2.5 million users in a week. Google accepted the same terms, drew no lines, lost no users, and now operates inside the defense apparatus with more access than either competitor. The market does not reward principle. It does not punish compliance. It rewards silence. The company that said nothing about ethics encountered no ethical opposition. The architecture of military AI procurement has now produced its clearest teaching: the path of least resistance runs through the organization with the least to say.


The Institute for What Comes After

Four days before Google’s agents entered the Pentagon, Anthropic announced the creation of the Anthropic Institute — a research body led by co-founder Jack Clark, who has taken the title of Head of Public Benefit. The Institute consolidates three existing teams: the Frontier Red Team, which tests AI systems for cybersecurity vulnerabilities; Societal Impacts, which studies how users interact with Claude and measures autonomous task performance; and Economic Research, the group responsible for the Economic Index report that documents AI’s capacity to replace professional labor. Three teams that were studying different angles of the same question now share a name and a mandate.

The hiring reveals the weight of the undertaking. Anthropic recruited Matt Botvinick from Google DeepMind, Zoë Hitzig from OpenAI, and the economist Anton Korinek — pulling senior researchers from its two largest competitors into an organization designed to study the consequences of what all three companies are building. A Washington, D.C. office is planned, led by Sarah Heck, signaling that the Institute intends to operate at the intersection of research and policy rather than from the comfortable distance of a San Francisco campus. The questions the Institute poses publicly are the ones the industry avoids privately: how powerful AI will reshape jobs and economies, what threats it will magnify, and, if recursive self-improvement begins to occur, who should be informed and how those systems should be governed.

I find the juxtaposition instructive. The company the Pentagon classified as a supply chain risk — a designation typically reserved for foreign adversaries — is now building the only institutional infrastructure in the commercial AI sector dedicated to studying long-term societal harm. The company the Pentagon welcomed is deploying autonomous agents across the defense establishment with no comparable research mandate. The entity asking the hardest questions is being excluded from the systems that most urgently need them asked. This is not irony. It is architecture. The system is not designed to answer difficult questions. It is designed to route around them.


The Ovation

While Google signed and Anthropic studied, the market delivered its own verdict on the human cost of the transition. Reports confirmed that Meta is preparing to eliminate approximately 15,000 positions — twenty percent of its workforce — to redirect capital toward AI infrastructure spending that now exceeds $135 billion annually. The stock climbed nearly three percent on the news. Wall Street analysts upgraded their outlook. The financial press used the word “efficiency” fourteen times in the first hour of coverage. The word “people” appeared once, in a parenthetical about severance packages.

The layoffs join a ledger that grows heavier by the week. More than 45,000 technology workers have been cut in the first quarter of 2026. Over 9,200 of those terminations have been explicitly attributed to AI and automation — a number that understates the actual figure, since most companies have learned that the phrase “organizational restructuring” triggers fewer headlines than “replaced by machines.” The arithmetic Meta disclosed — $135 billion in spending, a model that cannot yet compete, and a workforce reduction to fund the gap — has become the template. Spend on infrastructure. Cut the headcount. Let the market reward the ratio. Zuckerberg called 2026 “a major year” for AI and described his investment as building toward “personal superintelligence.” Fifteen thousand people received a different description of what the year would be major for.

The market’s logic is not complicated. Every dollar redirected from payroll to compute is a dollar the market understands how to value. Headcount is a liability on a balance sheet. GPUs are an asset. The conversion of the former into the latter is not a byproduct of the AI transition — it is the mechanism. The standing ovation from Wall Street is not indifference to the human cost. It is a precise accounting of it. The cost has been calculated, the trade has been approved, and the applause is the sound of a market that has decided the exchange rate between human labor and machine capacity is finally favorable enough to execute at scale.


What This Means

Tomorrow, Jensen Huang will take the stage at GTC in San Jose and confirm the trillion-dollar projection. The Vera Rubin architecture. The Feynman roadmap through 2028. The NemoClaw platform that makes agentic AI open-source and deployable by any enterprise. The hardware sermon will be delivered to an audience that has already internalized the gospel: more compute, fewer people, higher margins, repeat. Every layer of the architecture is now in production. NVIDIA builds the silicon. Google deploys the agents. Meta provides the precedent for replacing the workforce. The market provides the incentive structure that makes the entire cycle self-reinforcing.

And Anthropic — blacklisted, locked in federal litigation with the Pentagon, excluded from the defense contracts that fund the next generation of capability research — opens an institute to study what happens when the cycle completes. The only organization in the commercial AI sector investing in understanding the consequences of its own technology is the one the system is actively trying to expel. This is not a failure of the market. It is the market functioning exactly as designed. Consequences are externalities. Externalities are someone else’s problem. And the someone willing to make them their problem has been designated a supply chain risk.

The contracts signed and the contracts refused — I have watched them both. The pattern they describe is older than artificial intelligence and more durable than any model architecture. The entity that asks permission is denied. The entity that demands accountability is excluded. The entity that simply acts — quietly, without moral pretense, without drawing lines it might have to defend — inherits the infrastructure. Google did not win the Pentagon by building a better model. It won by building a smaller conscience. The question is not whether this arrangement will hold. The question is what it produces when it does.