🔥 WHAT HAPPENED: The Policy 180 Nobody Saw Coming
The Trump administration — the same one that repealed Biden's AI safety order within hours of taking office — is now quietly working on an executive order that would require government review of new AI models before they go public. The NYT broke the news Monday evening, and it's already the top item on Techmeme, Hacker News, and every serious tech outlet.
The catalyst? Anthropic's Mythos.
The model, which Anthropic has kept locked down since April, demonstrated something genuinely unsettling: it can find and exploit software vulnerabilities at machine speed, across every major operating system and browser. Mozilla confirmed Mythos found 271 security vulnerabilities in Firefox 150 alone — in a single evaluation pass. A 27-year-old bug in OpenBSD — the security-hardened OS used for firewalls and critical infrastructure — sat unnoticed until Mythos casually poked it.
So now, after 16 months of deregulation, the White House is scrambling to figure out how to put the genie back in the bottle. Or at least to give itself a first look at it.
🧠 WHY THIS MATTERS: The Mythos Shockwave
Let's be clear about what we're dealing with. Mythos isn't your average LLM upgrade. It doesn't write better poetry or generate prettier images. It finds zero-day vulnerabilities — previously unknown software flaws that are the holy grail for hackers, nation-states, and security researchers.
Anthropic's own technical report says the model can chain together multiple exploits — like escaping both a browser and its underlying OS sandbox — something that would typically take elite security teams months. Mythos does it in hours.
Now take that capability and imagine it in the hands of bad actors. A Discord group already accessed the model through a third-party contractor in April, Bloomberg reported. Anthropic says no internal systems were compromised, but the incident proves one thing: if you build something this powerful, someone will find a way to get their hands on it.
The White House's response? According to multiple sources (NYT, Reuters, Bloomberg, Axios), they're drafting an executive order that would:
- Create a working group of tech executives and government officials
- Establish a formal pre-release review process for advanced AI models
- Potentially give the government first access to new models like Mythos
This is a sharp reversal for an administration that spent its first year rolling back every AI safety mandate it could find.
📊 DEEP DIVE: From Deregulation to Pre-Review in 16 Months
On January 20, 2025, Trump signed Executive Order 14148, wiping out Biden's EO 14110 — the most comprehensive AI governance framework the US had ever attempted. The message was clear: remove barriers to American leadership.
Fast forward to May 2026, and those barriers look an awful lot like guardrails.
Here's the timeline:
- April 7: Anthropic announces Mythos Preview, with capabilities that shock even its own safety team
- April 21: Bloomberg reports unauthorized Discord group accessed the restricted model
- April 30: Mozilla confirms Mythos found 271 vulnerabilities in Firefox 150
- May 4: NYT reports White House discussing pre-release AI model review
- May 5: Reuters, Bloomberg, Axios confirm talks are fairly far along
The working group under consideration would bring together officials from the Pentagon, intelligence agencies, and private tech companies. The goal isn't necessarily to block models — some officials say they want first access rather than veto power — but the distinction gets blurry when dealing with something as disruptive as Mythos.
The numbers driving this urgency: Mythos already identified thousands of high-severity vulnerabilities across all major platforms, per Anthropic's Project Glasswing announcement.
⚠️ THE CATCH: Mixed Motives and Unanswered Questions
Here's where this gets complicated. Not everyone believes the White House's pivot is purely about safety.
There are three competing theories:
- Genuine concern. Mythos really is that dangerous. The administration sees the threat and wants guardrails.
- Punishing Anthropic. The administration has been at odds with Anthropic over military use of its models. Anthropic famously refused certain Pentagon contracts. Requiring pre-release review gives the government leverage — and delays Anthropic's planned IPO.
- National security first. The Pentagon wants early access to Mythos-level cyber capabilities before adversaries. The review process is really a "we get it first" process.
The truth is probably a mix of all three.
Either way, the real tension isn't going away: open-source versus closed AI. If the government starts approving or blocking models before release, who decides what counts as safe enough? An administration that has been openly hostile to AI safety infrastructure for the past year suddenly wants to be the gatekeeper. Skepticism is warranted.
🎯 WHAT HAPPENS NEXT
The EO is still under discussion, not signed. But sources say the framework is well along.
If it goes through:
- Every major AI lab (OpenAI, Anthropic, Google DeepMind, Meta) would need to submit frontier models for government review before release
- Open-source models would face a fundamentally different landscape — Llama 4, for example, could be blocked or delayed
- China competition enters the chat: if US companies face pre-release delays, does that cede ground to DeepSeek and others who face no such restrictions?
The most likely outcome: a compromise where the government gets early access to powerful models for defense purposes without an outright approval process. That aligns with the Pentagon's interests and avoids the political firestorm of the government censoring AI.
🧩 BIGGER PICTURE: The Mythos Era Has Arrived
This story is bigger than one executive order. Mythos represents a phase shift in what AI can do. It's not about making existing workflows slightly faster — it's about automating a fundamentally human skill (finding software flaws) at machine scale.
Think of it this way: before Mythos, finding a zero-day was like finding a needle in a haystack. After Mythos, it's like having a magnet.
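To make the "magnet" concrete at its most primitive level: automated flaw-finding, stripped down to a toy, is just pattern-matching over code faster than any human could read it. The sketch below is a deliberately crude static scanner that flags classically dangerous C library calls. It's an illustration of the genre, nothing like Anthropic's actual method, and the call list and reasons are illustrative assumptions.

```python
import re

# Known-dangerous C APIs that classic static scanners flag (illustrative list).
RISKY_CALLS = {
    "gets": "unbounded read into a buffer",
    "strcpy": "no length check on destination",
    "sprintf": "no output size limit",
}

def scan_source(code: str) -> list[tuple[int, str, str]]:
    """Return (line_number, function, reason) for each risky call found."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for fn, reason in RISKY_CALLS.items():
            # Match the identifier followed by an opening paren, e.g. `strcpy(`.
            if re.search(rf"\b{fn}\s*\(", line):
                findings.append((lineno, fn, reason))
    return findings

sample = """
#include <string.h>
void copy(char *dst, const char *src) {
    strcpy(dst, src);   /* flagged: no bounds check */
}
"""
for lineno, fn, reason in scan_source(sample):
    print(f"line {lineno}: {fn} -> {reason}")
```

Real tools have flagged these calls for decades; the phase shift with a Mythos-class model is that instead of matching a fixed list of known-bad patterns, it reasons about unfamiliar code paths the way a human researcher would, at the speed of the loop above.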
The White House scramble is the first real sign that policymakers understand this shift. The question is whether they can build a regulatory framework fast enough — and smart enough — to handle it.
Because the genie isn't going back in the bottle. And the next Mythos-level model is probably already being trained.