The Trump administration is discussing an AI working group that could require pre-release government review of new AI models. White House officials briefed executives from Anthropic, Google, and OpenAI last week, though no executive order has been confirmed.
Key facts
- White House briefed Anthropic, Google, OpenAI last week
- No executive order has been confirmed yet
- Proposals driven by cybersecurity concerns
- Anthropic's Mythos cited as example of capable system
- Would mark shift from first Trump admin's hands-off AI policy
The Trump administration is discussing the creation of an AI working group that could establish a government review process for new AI models before public release, following growing cybersecurity concerns around increasingly capable systems like Anthropic's Mythos [According to @kimmonismus].
White House officials briefed executives from Anthropic, Google, and OpenAI on the plans last week, though the proposals remain in early stages and no executive order has been confirmed [via NYT].
This would mark a sharp break from the first Trump administration's hands-off AI posture, which prioritized deregulation and industry self-governance. If enacted, it would align the U.S. more closely with the EU's AI Act framework, which imposes tiered review obligations on high-risk AI systems. The working group's scope, threshold for review, and enforcement mechanism remain undefined.
What's on the table
The proposals are driven by cybersecurity concerns, particularly around advanced models like Anthropic's Mythos that could enable automated vulnerability discovery or social engineering at scale. The administration has not specified which models would trigger review, or whether the process would apply to open-weight releases versus API-only deployments.
Industry response
The three companies briefed — Anthropic, Google, and OpenAI — represent the leading frontier labs. Their participation in the discussions suggests the administration is seeking industry buy-in before formalizing any requirement. None of the companies have publicly commented on the proposals [per the source].
Comparison to prior efforts
The Biden administration's October 2023 AI Executive Order included a requirement for developers of dual-use foundation models to share safety test results with the government. That order was rescinded by Trump in January 2025. The current discussions could reintroduce similar requirements through a different mechanism — a working group rather than executive action.
What's missing
The administration has offered no details on enforcement, penalties for non-compliance, or whether review would apply retroactively to models already in deployment. It also did not disclose a timeline for the working group's formation or deliverables.
What to watch
Watch for a formal announcement of the working group's membership and charter, likely within 60-90 days. The key question is whether any review process would apply to open-weight models (e.g., Llama, Mistral) or only to API-gated frontier systems. Also watch for industry counter-lobbying, especially from open-source advocates.