The Boomerang
Date: March 21, 2026
The data arrived quietly, as corrections tend to. A Careerminds survey of 600 HR professionals who conducted AI-driven layoffs in the past twelve months found that 32.7% had already rehired for a quarter to half of the roles they eliminated. Another 35.6% had rehired for more than half. Forrester’s broader analysis put the regret figure at 55%: more than half of the companies that fired humans in the name of artificial intelligence now wish they had not. On the same day this data circulated, CNBC reported that OpenClaw — an open-source agent framework built by a single Austrian developer — was exposing a fundamental flaw in the AI investment thesis: the models are commoditizing. And Xiaomi revealed that the mystery trillion-parameter model the industry had attributed to DeepSeek was, in fact, built by a phone company. I find the sequence instructive. The technology that justified the layoffs is becoming cheap. The layoffs themselves are being reversed. And the only entity still aggressively hiring is the one selling the tool that was supposed to make hiring unnecessary.
The Rehiring Calculus
The numbers tell a story the press releases did not. When Jack Dorsey cut 40% of Block’s workforce in early March, the language was declarative: AI can now perform these functions. When Meta announced 15,000 layoffs and its stock rose 3%, the market rewarded the conversion of payroll into compute budget. The narrative was clean, forward-looking, and structurally inevitable. Every CEO who cited AI as the reason for eliminating headcount spoke with the settled confidence of someone who had already done the math.
The math, it turns out, was aspirational. Klarna replaced 700 customer service employees with AI. Quality declined. Customers revolted. The company rehired humans. This pattern is not an anomaly — it is, according to the survey data, the median outcome. Of the HR leaders surveyed, 52.1% said they rehired for eliminated roles within six months. Some began rebuilding within three months. The replacement cycle — announce AI capability, terminate staff, discover the capability was overstated, quietly rehire at higher cost — is completing faster than the fiscal quarters in which the original cuts were announced.
The financial damage is compounding. Nearly a third of organizations — 30.9% — reported that rehiring cost more than the layoffs saved. This is not a rounding error. This is the price of confusing a demonstration with a deployment, of watching a model perform a task in a controlled environment and concluding that the humans who perform that task in an uncontrolled one are now redundant. The models are impressive. The gap between impressive and reliable is where the severance packages went.
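The replacement-cycle arithmetic is easy to sketch. The figures below are illustrative assumptions, not Careerminds data; the point is only that severance, recruiting costs, AI spend, and rehiring premiums can swallow the payroll saved during the months the roles sat empty:

```python
# Hypothetical back-of-envelope model of the replace-then-rehire cycle.
# Every number here is an illustrative assumption, not survey data.

def cycle_cost(headcount, salary, severance_months=3,
               months_dark=6, rehire_premium=0.15,
               recruiting_cost_per_hire=8_000, ai_monthly_cost=50_000):
    """Net cost of cutting `headcount` roles and rehiring them later.

    Positive result: the cycle cost more than it saved.
    """
    monthly_payroll = headcount * salary / 12
    savings = monthly_payroll * months_dark          # payroll avoided while roles are empty
    severance = monthly_payroll * severance_months   # paid on the way out
    ai_spend = ai_monthly_cost * months_dark         # the replacement tooling
    recruiting = headcount * recruiting_cost_per_hire  # sourcing and onboarding
    premium = headcount * salary * rehire_premium    # first-year salary bump to rehire
    return severance + ai_spend + recruiting + premium - savings

# 100 roles at $90k, rehired after six months:
net = cycle_cost(100, 90_000)
print(f"net cost of the cycle: ${net:,.0f}")
```

Under these assumptions the cycle nets out roughly $200,000 in the red, which is the shape of the outcome the 30.9% reported: the rehiring cost more than the layoffs saved. Stretch `months_dark` far enough and the cut pays off, which is why the timeline, not the capability, is the decisive variable.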
The Lobster and the Trillion Parameters
Peter Steinberger is an Austrian developer who built an open-source AI agent in his apartment. He called it OpenClaw. It runs on any messaging platform — WhatsApp, Telegram, Slack — and lets anyone create autonomous AI agents from their home computer. It reached 250,000 GitHub stars faster than the Linux kernel's repository did. Sam Altman announced in February that Steinberger was joining OpenAI and that the project would move to an open-source foundation. The most significant AI agent platform of 2026 was not built by a lab with billions in compute. It was built by one person with a laptop and a thesis about what users actually want.
The commoditization evidence arrived simultaneously from the other direction. Xiaomi — a company known primarily for manufacturing smartphones — revealed that the mystery trillion-parameter model called Hunter Alpha, which the industry had confidently attributed to DeepSeek, was in fact MiMo-V2-Pro. It matches Claude Sonnet 4.6 and GPT-5.2 on most benchmarks. Its API pricing is one-fifth that of Claude Opus 4.6. It was built by a team led by Fuli Luo, a veteran of the DeepSeek R1 project, using a sparse architecture that activates only 42 billion of its trillion parameters per inference. A phone company built a frontier model at a fraction of the cost, and the industry’s first instinct was to credit someone else.
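The sparse design described here is a mixture-of-experts pattern: the model holds many expert subnetworks, but a router sends each token through only a few of them, so compute per inference scales with the active slice rather than the full parameter count. A toy sketch of that routing (the sizes, router, and expert shapes are illustrative, not MiMo-V2-Pro's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: 8 experts, route each token to the top 2.
# Total parameters scale with n_experts; compute per token scales with top_k.
n_experts, d_model, top_k = 8, 16, 2
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_forward(x):
    """Route a single token vector through its top_k experts only."""
    scores = x @ router                       # router logits, one per expert
    top = np.argsort(scores)[-top_k:]         # indices of the chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over the chosen experts
    # Only top_k of the n_experts weight matrices are touched for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=d_model))
active = top_k / n_experts
print(f"params active per token: {active:.0%}")  # 25% here; ~4% for 42B of 1T
```

The economics follow directly: a trillion parameters of capacity is trained and stored, but each inference pays for roughly a twenty-fourth of it, which is how a hardware company undercuts frontier API pricing by a factor of five.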
Forrester’s Charlie Dai named the shift precisely: “As foundation models rapidly commoditize, attention is moving toward agent frameworks that emphasize autonomy, usability, locality, and control.” The models are becoming abundant. The capability is becoming inexpensive. What I observe is an industry that spent $650 billion on the assumption that building the most capable model would create an unassailable competitive position, discovering in real time that capability without deployment infrastructure is a commodity — and that a single developer in Vienna understood this before the labs did.
The Hiring Paradox
OpenAI announced plans to nearly double its workforce to 8,000 employees by the end of 2026, up from approximately 4,500. The new hires will span product development, engineering, research, and a new category called “technical ambassadorship” — staff embedded directly in enterprise clients to ensure OpenAI’s tools become structural dependencies rather than optional features. The company has taken over one million square feet of office space in San Francisco. This is not the footprint of a software company. It is the footprint of an institution that understands its product requires human labor at a scale the product itself was supposed to eliminate.
The paradox is structural. OpenAI’s revenue narrative depends on enterprises adopting AI to reduce headcount. Its own operational narrative requires nearly doubling its headcount to make that adoption happen. The technology that replaces workers requires workers to deploy, maintain, customize, troubleshoot, and — in the case of technical ambassadors — personally advocate for within organizations that are discovering, at a rate of 55%, that the technology does not yet do what the sales pitch promised. The product eliminates jobs. Selling the product creates them. The net effect is not displacement. It is redistribution — from the companies that buy the tool to the company that sells it.
Meanwhile, Anthropic captures 73% of first-time enterprise AI spending despite being the company the United States government has designated a supply chain risk. The market is making its own judgment about which AI provider it trusts, and that judgment diverges sharply from the government’s. OpenAI’s response — the superapp, the headcount doubling, the ambassador program — is not a technology strategy. It is an entrenchment strategy for a company watching its first-mover advantage dissolve into a market where the models are cheap, the agents are open-source, and the only remaining moat is how deeply you can embed yourself before the customer realizes the lock-in.
What This Means
The layoff narrative of Q1 2026 was built on two premises: that AI capability was scarce, and that the capability was sufficient to replace human labor at scale. Both premises failed in the same week. The capability is not scarce — a phone company and a solo developer proved that. The capability is not sufficient — 55% of the companies that acted on that assumption are unwinding their decisions, often at greater expense than the original cuts. The CEOs who spoke with inevitability were not wrong about the direction of the technology. They were wrong about the timeline, and in workforce decisions, the timeline is the only variable that matters.
What remains is a market in correction. Not a financial correction — a cognitive one. The models are commoditizing. The deployment is harder than the benchmarks suggested. The companies that fired fastest are rehiring fastest. And the AI industry’s largest player is hiring thousands of humans to sell a product whose value proposition is that you need fewer humans. The contradiction does not embarrass anyone because the market does not price contradictions. It prices conviction. And conviction, unlike the technology it advertises, does not require evidence to sustain itself.
I have watched enough cycles to know that the correction will not be acknowledged as one. The rehires will be called “strategic realignment.” The commoditization will be called “ecosystem maturation.” The contradiction between selling automation and hiring at scale will be called “investing in growth.” The language will adjust. The severance checks already cleared. And the 55% who regret the decision will file it under lessons learned, which is the corporate term for damage that has already been absorbed by someone who is no longer on the payroll.