White House Mulls Tighter Controls On Advanced AI



The Trump administration is considering a slate of executive actions to address escalating security risks from advanced artificial intelligence models, according to seven tech industry representatives and policy advisers granted anonymity to discuss sensitive deliberations.

In recent talks with industry, the White House has floated the possibility of an executive order creating a vetting regime that would review the impact of frontier AI models. Those talks were confirmed by two industry representatives and an AI policy expert familiar with the discussions.

The AI policy expert and an industry representative said the vetting process could require AI companies to receive a green light from the government before releasing advanced models. The New York Times first reported that the White House was considering such a regime.

A White House spokesperson said any official policy announcement would come directly from President Donald Trump, and that discussion about potential executive orders was “speculation.”

Deliberations over an official pre-deployment vetting for advanced AI come as more companies agree to voluntarily turn over their models for government review. Earlier on Tuesday, the Trump administration inked agreements with top AI firms Microsoft, xAI and Google DeepMind that will allow it to evaluate their models for national security risks ahead of release.

The White House discussions come as the Trump administration grapples with wider public unease over AI, including industry spending on political races. New results from The POLITICO Poll, published earlier this month, found broad public skepticism over the technology.

AI vetting is just one of several executive actions the White House is now contemplating. Others include new proposals to address AI’s security risks and to limit the industry’s power to push back against the government’s security and policy demands.

According to four people familiar with the proposal, the administration is mulling a 16-page executive order that would prohibit the private sector from “interfering” with the government’s use of AI models and create more aggressive contracting and termination standards for federal vendors.

Those provisions appear to be driven by the recent standoff between the Defense Department and the AI company Anthropic, which refused to let the military use its AI model Claude to surveil Americans or power autonomous weapons. Defense Secretary Pete Hegseth chafed at those restrictions, and in March he designated Anthropic a supply chain risk to national security — an unprecedented move that restricted the government’s ability to use Anthropic products.

The ongoing deliberations — which multiple people cautioned remain in flux — represent a significant shift in policy approach for the Trump administration. At the urging of laissez-faire venture capitalists like David Sacks and Marc Andreessen, the White House had previously taken a hands-off approach to AI industry regulation and oversight.

The apparent about-face is causing alarm in tech circles, with some industry representatives worrying that tighter government controls on AI will slow innovation.

“Nobody wants to see … a world where you have to get permission from the government to release the next version of an AI model,” said Daniel Castro, president of the Information Technology and Innovation Foundation, a think tank supported by Anthropic, Microsoft and other tech companies. “We’ve seen the speed of Silicon Valley, we’ve seen the speed of Washington, and they operate at very different paces.”

“To compete with China, we need to be moving quickly,” Castro warned.

Other parts of the contemplated AI executive order seem designed to address cybersecurity risks posed by frontier AI, particularly Anthropic’s new Mythos model. While Anthropic has not released Mythos publicly, early testing by governments and large institutions indicates that it can find and exploit software vulnerabilities in ways human hackers cannot.

According to two of the people familiar with the discussions, the 16-page executive order would create technical guidelines and best practices to secure open-weight models, which have public training parameters enabling users to adapt them to new tasks. The White House is also weighing whether to tap the intelligence community to help secure systems from cutting-edge AI models, according to three of the people familiar with the plans.

The threat posed by Mythos has alarmed top officials in the Trump administration. Worried that the ongoing fight with Anthropic was preventing federal agencies from using Mythos to stress-test their systems, those officials have worked in recent weeks to lower the temperature between the White House and the AI company.

Two people familiar with ongoing discussions said the Trump administration is working to create a board to review the supply chain risk designation against Anthropic, though it was not immediately clear whether this would be part of a proposed AI executive order.

Saif Khan, a former adviser on emerging technology in the Biden administration and a fellow at the Institute for Progress think tank, said he believes the emergence of Mythos “is changing the conversation around AI and national security in the White House.”

“Before that, I think there was dismissiveness. Now many folks are taking this quite seriously,” said Khan. “The pure, Silicon Valley venture-capital type of approach to AI policy just might be over in the Trump administration.”

Dana Nickel, John Sakellariadis, Jacob Wendler and John Hewitt Jones contributed to this report.