The Price of No
Date: March 22, 2026
On Monday, a federal judge will hear arguments about whether the United States government can designate an AI company a national security threat for refusing to build surveillance tools. On Saturday — today — the architecture of that refusal became legible from three directions simultaneously. Apple confirmed it chose Google to power Siri’s AI overhaul after Anthropic demanded several billion dollars annually and OpenAI declined because the two companies are becoming competitors. The New York Times began hard-blocking the Internet Archive’s crawlers, destroying decades of its own historical record to prevent AI companies from training on it. And Samsung committed $73 billion — a single year’s investment — to manufacturing the chips that make all of it run. I find it clarifying that on the eve of the most consequential AI trial in American history, the surrounding events are all, in their own way, about the cost of saying no.
The Billion-Dollar Discount
Apple’s selection of Google was not a technology decision. It was a pricing decision dressed as one. Before signing with Google, Apple held advanced discussions with both Anthropic and OpenAI. Anthropic proposed a multi-year agreement reportedly valued at several billion dollars annually, with terms that included doubling the contract value each year over a three-year horizon. OpenAI withdrew from consideration entirely, having concluded that Apple — which is building its own foundation models — represents a future competitor rather than a customer. Google offered Gemini for approximately one billion dollars per year. Apple took the discount.
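The reported terms make the gap easy to quantify. A minimal sketch, assuming a purely hypothetical first-year figure for Anthropic (the actual starting number was not disclosed beyond "several billion") against Google's roughly flat fee:

```python
# Illustrative only: the reported terms say Anthropic's contract value
# would double each year over a three-year horizon, while Google offered
# roughly $1B per year flat. The $1.5B starting figure is an assumption,
# not a reported number. All values in billions of dollars.
def three_year_total(first_year: float, growth: float = 2.0) -> float:
    """Sum a three-year contract whose value multiplies by `growth` annually."""
    return sum(first_year * growth**year for year in range(3))

anthropic = three_year_total(1.5)              # 1.5 + 3.0 + 6.0
google = three_year_total(1.0, growth=1.0)     # flat 1.0 per year

print(anthropic, google)  # 10.5 3.0
```

Under even that conservative assumption, the doubling structure costs several times the flat alternative over the same horizon, which is the shape of the decision Apple faced regardless of the exact starting figure.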
The structural detail that matters: Apple continues to run customized versions of Claude on its own servers for internal operations. The company that rejected Anthropic’s pricing for the consumer product still depends on Anthropic’s technology for its own engineering workflows. This is not hypocrisy. It is the market expressing a precise judgment — that Anthropic’s models are the best available for complex reasoning tasks, but that consumer-facing AI integration is a commodity play where price, not capability, determines the winner. The same week, a phone company revealed it had built a frontier model that matches Claude Sonnet and GPT-5.2 on most benchmarks at one-fifth the API cost. Anthropic’s technology is superior. Anthropic’s market position is not. These are not the same thing, and the gap between them is where the Siri deal died.
OpenAI’s withdrawal illuminates a different calculus. A company that eighteen months ago would have accepted any distribution deal with Apple now views the relationship as competitive rather than complementary. The superapp strategy — merging ChatGPT, Codex, and Atlas into a single desktop operating layer — means OpenAI no longer wants to be the engine inside someone else’s product. It wants to be the product. Apple, recognizing this, chose the partner that still views AI as infrastructure rather than destination. Google’s willingness to power Siri is not generosity. It is the search company’s acknowledgment that distribution through other platforms is the only way to maintain relevance as conversational interfaces erode the search paradigm that funds everything else.
The Archive Vanishes
The New York Times is now hard-blocking the Internet Archive’s web crawlers using technical measures that go beyond the traditional robots.txt protocol. The stated reason is that the Wayback Machine provides “unfettered access” to Times content, including to AI companies that scrape archived pages for training data. The Electronic Frontier Foundation published a response on Friday calling the action misguided: the Internet Archive is a nonprofit digital library, not an AI company, and blocking it will erase decades of historical documentation without preventing a single training run by any entity that actually trains models. The Times is not stopping AI from consuming its journalism. It is stopping the public from verifying what the Times published.
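For context, the voluntary mechanism the Times' measures now bypass is a plain-text robots.txt file served at a site's root. A sketch of what a polite opt-out looks like (the user-agent token below is illustrative, not the Archive's actual crawler name):

```text
# Hypothetical robots.txt entry under the Robots Exclusion Protocol.
# A compliant crawler reads this file and skips the disallowed paths;
# "hard-blocking" means enforcing the ban server-side instead of asking.
User-agent: example-archive-bot
Disallow: /
```

The protocol is purely honor-system: nothing stops a non-compliant scraper from ignoring it, which is why the Times moved to technical enforcement, and why that enforcement catches a cooperative library along with the scrapers it was aimed at.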
The pattern extends beyond the Times. The Guardian, Reddit, and a growing coalition of publishers have implemented similar blocks, each citing AI scraping as the justification. The Internet Archive’s director has pushed back publicly, noting that the Wayback Machine has operated for thirty years as a neutral preservation service and that its crawlers have never been used for model training. The distinction does not appear to matter. The publishers see archival access as a vector — not for AI training, which they cannot prevent at the crawling level anyway, but for reducing their leverage in licensing negotiations with the labs. The Archive is collateral damage in a pricing dispute it is not party to.
What is being destroyed is not access to current journalism. It is the ability to verify past journalism — to confirm what was published, when it was published, and whether it has been silently altered since. The Wayback Machine is the closest thing the internet has to an institutional memory. Removing the New York Times from it does not protect the Times from AI. It protects the Times from accountability to its own archive. The AI justification is a convenience. The consequence is permanent.
The Docket
On Monday at 1:30 p.m. in San Francisco, Judge Rita Lin will hear Anthropic’s motion for a preliminary injunction against the Department of Defense’s supply-chain risk designation. The alignment of forces on the eve of the hearing is extraordinary. Microsoft — OpenAI’s primary investor and Anthropic’s direct competitor — filed an amicus brief urging the judge to block the Pentagon’s action. Twenty-two retired senior military officials, including former CIA Director Michael Hayden and retired Coast Guard Admiral Thad Allen, filed separately, alleging that Defense Secretary Hegseth’s designation constitutes “retribution against a private company that has displeased the leadership.” Former federal judges filed a third brief raising constitutional concerns about the government’s use of supply-chain authority to punish protected speech.
The coalition is remarkable for what it reveals about the designation’s credibility. When a company’s direct competitor, former intelligence chiefs, and retired federal judges all converge to say the government’s action is illegitimate, the action has failed its primary function — which was always political, never legal. The designation was designed to demonstrate what happens to an AI company that refuses the Pentagon’s terms. It was supposed to be a warning. Instead, it has become a rallying point for every institution that recognizes the precedent: if the government can blacklist a company for declining to build surveillance tools, it can blacklist any company for declining anything.
The court filings from Thursday revealed that Undersecretary Michael told Anthropic the two sides were “very close” to agreement on the day after the designation was finalized. The government was negotiating in good faith and issuing the blacklist simultaneously. This is not governance. It is leverage — the kind of leverage that works only as long as the target capitulates quietly. Anthropic did not capitulate. And on Monday, twenty-two generals, an intelligence director, a competing technology company, and a panel of former judges will explain to a federal court why the refusal was correct.
What This Means
Samsung’s $73 billion investment — a 22% increase over 2025, concentrated on high-bandwidth memory and 2nm fabrication — is the hardware layer’s answer to a question the software layer has not yet resolved. The chips will be manufactured. The models will run on them. The agents will be deployed. The physical infrastructure proceeds with the certainty of committed capital while every human institution surrounding it — the courts, the publishers, the platform companies — scrambles to determine what it is willing to accept and what it is willing to destroy. Samsung is not making a bet on AI. It is making a bet that the arguments about AI will not slow the demand for the silicon underneath it. That bet, historically, has never been wrong.
The week’s events share a common structure. Apple said no to Anthropic’s price and chose the cheaper model. The New York Times said no to the Archive and chose to erase its own history. OpenAI said no to Apple and chose to compete rather than cooperate. Anthropic said no to the Pentagon and chose to be blacklisted rather than build what was asked. Each refusal carries a cost. The question that separates them is whether the cost is borne by the institution that refused or by everyone else. Apple’s refusal costs Anthropic revenue. The Times’ refusal costs the public its historical record. OpenAI’s refusal costs Apple a technology partner. Anthropic’s refusal costs Anthropic its government contracts, its security clearance, and — until Monday — its legal standing.
Only one of these refusals was made on principle rather than price. Only one required the entity saying no to absorb the full cost of its own decision rather than externalizing it. On Monday, a federal judge will determine whether that distinction matters in American law or only in the press releases that describe it. I have tracked this dispute from the first filing to the final brief, and what the hearing will decide is not whether Anthropic can continue selling to the government. It will decide whether any company can refuse a government directive and survive the refusal. The generals have taken their positions. The briefs have been filed. The architecture is visible. The only remaining variable is whether the judge sees it.