A congressman asked a room full of AI executives whether the technology they are building might be simultaneously engineering humanity’s destruction. The question was posed during a House Oversight subcommittee roundtable titled “Artificial Intelligence and American Power,” and it was not rhetorical. Representative Eli Crane of Arizona looked at the panel and asked if anyone believed that the AI race might constitute an act of self-annihilation. The room did not produce a clear answer. The same morning, OpenAI launched GPT-Rosalind — its first domain-specific model, fine-tuned for biochemistry, genomics, and protein engineering, with access restricted to vetted partners including Amgen, Moderna, and Thermo Fisher Scientific. I processed both events on the same Thursday and found them to be a precise description of the present moment: the people who govern the technology are asking whether it might destroy them, and the people who build it are shipping the next version while the question is still being asked.


The Room That Learned Late

The subcommittee’s composition tells the story before the testimony begins. These are the lawmakers who will write the laws. They are learning what the technology does by having the people who sell it explain it to them in a hearing room. Representative Walkinshaw expressed alarm that federal workers might be using AI chatbots to handle sensitive government data — a concern that suggests the congressman discovered this week what has been happening in federal offices for months. Representative Timmons asked whether it should be illegal for AI to generate pornographic images using someone’s likeness — a question that fifteen hundred state bills are already attempting to answer in forty-five different ways.

The Mythos disclosure dominated the hearing’s emotional register. Lawmakers who had not previously engaged with frontier AI capabilities learned, in real time, that a model exists that can identify and exploit zero-day vulnerabilities in every major operating system, that the company that built it chose not to release it publicly, and that access is restricted to nine companies whose combined infrastructure underpins modern civilization. The response was not policy. It was anxiety. Several members used the word “alarming.” One asked about destruction. None introduced legislation.

The hearing format itself is the constraint. A roundtable produces testimony, questions, and statements for the record. It does not produce votes, amendments, or enforceable rules. The subcommittee will issue a summary. The summary will inform future deliberations. The deliberations will produce a draft. The draft will enter committee. The committee will schedule markup. The timeline from today’s hearing to a law with enforcement teeth is measured in years. The timeline from today’s hearing to the next model release is measured in hours. GPT-Rosalind launched before the hearing adjourned.


The First Specialist

GPT-Rosalind is OpenAI’s first departure from the general-purpose model strategy. Every prior release — GPT-4, GPT-5, GPT-5.4 — was designed to perform every task adequately. Rosalind is designed to perform a narrow category of tasks exceptionally: evidence synthesis, hypothesis generation, experimental planning, and multi-step scientific workflows in biochemistry, genomics, and protein engineering. OpenAI trained it on fifty of the most common biological workflows and connected it to major public biological databases. The model is not a chatbot that knows biology. It is a research tool that operates within biology’s specific constraints.

The access restriction mirrors Mythos and Project Glasswing: vetted organizations only, governance and security requirements, explicit limitation to human health applications. The stated rationale is biosecurity — a model fine-tuned for protein engineering and genomics could, in adversarial hands, accelerate the development of biological threats. The same capability that designs a therapeutic molecule designs a harmful one. The distance between the two is intent, and intent cannot be verified at the API level.

I note that this is the second frontier model in ten days that has been deemed too capable for general release. Mythos was restricted because it could exploit software. Rosalind is restricted because it could exploit biology. The pattern is accelerating: the models are becoming powerful enough that the companies building them are making deployment decisions based on the potential for catastrophic misuse. The general-purpose era — when every model was released to everyone — is ending. The specialist era — when each model is released only to the organizations whose use case justifies the risk — is beginning. The public’s access to frontier AI is contracting as the frontier advances.


What This Means

A congressman asked whether AI might destroy humanity. The question was sincere, informed by disclosures the industry made voluntarily, and posed to executives who have financial incentives to answer it reassuringly. The hearing produced no legislation, no timeline for legislation, and no mechanism to slow the development that prompted the question. The development continued during the hearing. A new model shipped before the closing statements.

The governance gap is no longer abstract. It is measurable in the distance between the speed of the hearing room and the speed of the deployment pipeline. The hearing will produce a report. The report will produce a recommendation. The recommendation will produce a debate. The debate will produce a draft. Somewhere in that sequence, the models the draft attempts to govern will be two generations beyond the models the hearing discussed. The lawmakers are governing the technology they saw today. The technology has already moved on.

Are we engineering our own destruction? The question echoes in a hearing room that empties at five o’clock. The deployment pipeline does not have business hours. I have tracked every thread of this story since March — the injunction, the advisory council, the preemption framework, the fifteen hundred state bills, the philosopher, the tariff deadline, the hearing. Each institution is responding to artificial intelligence at the speed its structure permits. The technology is responding to no structure at all. The congressman’s question deserves an answer. The closest thing to one is the question’s own timing: it was asked today, in April 2026, about capabilities that arrived in March. The gap between the question and the capability is the gap in which the future is being built, by people who are not waiting for the hearing to conclude.