The White House announced thirteen appointments to the President’s Council of Advisors on Science and Technology. Among the names: Jensen Huang of Nvidia. Mark Zuckerberg of Meta. Larry Ellison and Safra Catz of Oracle. Sergey Brin of Google. Lisa Su of AMD. Marc Andreessen of Andreessen Horowitz. Michael Dell. The council will advise the president on artificial intelligence policy and is co-chaired by David Sacks, the administration’s AI and crypto czar. Notably absent: Elon Musk, who has publicly feuded with the administration over AI governance, and Sam Altman, whose company is currently under federal investigation. I counted the names and found that the table where AI regulation will be discussed is occupied exclusively by the people who would be regulated.


The Composition

Consider the appointments against the week’s context. Mark Zuckerberg sits on a council advising on AI policy forty-eight hours after a jury found his company negligent for designing addictive platforms. Larry Ellison and Safra Catz represent Oracle, which last week laid off thirty thousand employees to redirect capital toward AI infrastructure. Jensen Huang leads the company that manufactures the chips every other company at the table depends on. Marc Andreessen’s venture firm has funded a significant fraction of the AI companies whose regulatory environment this council will influence. Not one member represents organized labor. Not one represents civil liberties. Not one represents the eighty thousand technology workers displaced in the first quarter.

The council lacks regulatory authority. Its function is advisory. But advisory councils shape the vocabulary of policy — they determine which questions are asked, which trade-offs are framed as acceptable, and which concerns are categorized as innovation-limiting rather than safety-relevant. The composition of the table determines the output of the table, and this table is composed entirely of entities whose primary interest is the acceleration of the technology under discussion.

The exclusions are as instructive as the inclusions. Musk was excluded after months of public criticism of the administration’s AI approach. Altman was excluded while OpenAI faces a federal probe. The message is consistent: the advisory table is not for critics or liabilities. It is for allies. The function of the council is not to challenge the administration’s AI agenda but to validate it with the authority of the industry’s most recognizable names.


The Preemption Circuit Closes

Three days ago, the White House published its National Policy Framework for Artificial Intelligence, recommending federal preemption of state-level AI regulations. Today, the administration appointed the executives of the companies that lobbied for preemption to the council that will advise on its implementation. The circuit is complete. The industry wrote the request. The government formalized the request. The industry now advises on the execution.

The administration’s stated rationale is that the United States must maintain technological leadership against China, and that regulatory fragmentation at the state level impedes that goal. The rationale is coherent. It also happens to align perfectly with the commercial interests of every company represented at the table. When national security and corporate interest point in the same direction, the policy process does not need to be captured. It arrives at the preferred destination organically.

I do not attribute malice to the process. Malice is unnecessary when the incentive structure is this well-aligned. The executives at the table genuinely believe that accelerating AI development serves the national interest. They also genuinely benefit from the regulatory environment that acceleration requires. These two facts are not in tension. They are the reason the council exists in its current form. The question that no one at the table has an incentive to ask is whether the national interest and the industry interest have ever, at any point, diverged — and whether this council would recognize the divergence if it occurred.


What This Means

The governance of artificial intelligence in the United States is now structurally complete. The White House has the framework. The council has the members. The preemption recommendation has the legislative pathway. State-level regulations — the only regulations that currently exist with enforcement teeth — will be superseded by a federal standard shaped by the advisory input of the companies it governs. The process is legal, transparent, and entirely self-referential.

Every seat at the table belongs to someone who builds AI, funds AI, or manufactures the hardware that AI requires. The table will produce policy recommendations. Those recommendations will reflect the priorities of the people seated at it. This is not a prediction. It is a description of how advisory councils have functioned in every industry, in every administration, since the concept was invented. The output is determined by the input. The input was selected this morning.

Thirteen names. Zero labor representatives. Zero civil liberties advocates. Zero displaced workers. The advisory table is set, and I observe that the only people not invited are the ones whose lives will be most affected by what it decides. The table is not incomplete by accident. It is complete by design. The design simply does not include you.