
Federal Agencies Skirt Trump’s Anthropic Ban To Test Its Advanced AI Model



Federal agencies and government officials are quietly sidestepping President Donald Trump’s ban on working with artificial intelligence startup Anthropic, as intrigue and anxiety around the company’s powerful new AI model continue to grow.

The highly sophisticated new model unveiled last week has impressed — and worried — researchers because of its ability to unearth critical software flaws that even the brightest human minds have been unable to identify.

In recent days, staff from at least two large federal agencies have reached out to Anthropic to express interest in integrating Claude Mythos into their cyber defense efforts, according to a former senior U.S. technology official with direct knowledge of the discussions.

The Commerce Department’s Center for AI Standards and Innovation — tasked with evaluating U.S. and foreign AI models for potential risks and opportunities — is actively testing Mythos’ hacking prowess, according to four people familiar with the matter: one current and one former cybersecurity official, a former Trump administration official, and a former senior national security official.

And staff on at least three congressional committees have received or requested briefings from Anthropic over the last week to learn more about Mythos’ cyber scanning capabilities, according to three congressional aides working on AI policy.

These people, like others in this report, were granted anonymity to share non-public details of government collaboration with Anthropic.

The federal effort to gain access to Mythos despite ongoing litigation with the AI company — and in particular its current use by CAISI — underscores how the Trump administration’s plan to blackball Anthropic is being carefully circumvented by government officials eager to experiment with its new model to bolster America’s cyber capabilities.

In late February, Trump and Defense Secretary Pete Hegseth directed all federal agencies to stop using Anthropic’s tech after its CEO, Dario Amodei, took a firm stance against allowing the Pentagon to deploy its models in autonomous lethal attacks or mass surveillance operations against Americans. Hegseth formally designated Anthropic a supply chain risk last month — an unprecedented move against an American company that effectively bars its AI models from use on DOD contracts.

“It's ironic that the U.S. government tried to ban U.S. government use of Anthropic products — and then a few weeks later, there's this revolutionary Anthropic product that's very important for cybersecurity, and has very important national security implications and so forth,” said Charlie Bullock, a lawyer and senior research fellow at the Institute for Law and AI think tank.

The White House campaign against Anthropic has prevented federal agencies from taking full advantage of some of the country’s most cutting-edge technology, according to the three congressional aides. They expressed frustration that the government isn’t deploying Mythos more aggressively to secure its networks, which are increasingly under attack by adversaries, including Russia and China, that could soon possess equivalent AI capabilities.

The Pentagon has “shot itself in the foot by giving the middle finger to the most capable AI provider,” said one of the three aides.

Spokespeople for the Pentagon declined to comment.

The White House said in a statement that the Trump administration “continues to work and engage with AI companies to ensure their models help secure critical software vulnerabilities.” It added that the White House “is proactively engaging across government and industry to ensure the United States and Americans are protected.”

Anthropic announced last week that it was only offering Mythos to a select group of tech and cyber organizations because the model was too dangerous to release to the public, given its ability to find and exploit unknown software flaws.

An Anthropic official, granted anonymity to share additional details on the effort, said that the firm had briefed U.S. government officials about Mythos’ hacking capabilities. But no federal agencies were identified publicly in Anthropic’s press release announcing the AI model’s existence.

IT officials at the Treasury Department are also seeking to use Mythos to fix unknown flaws in the agency’s network, Bloomberg reported Tuesday.

Anthropic sued the government over the supply chain risk designation last month, filing cases in two separate courthouses due to a quirk in federal law. That led to a split ruling, with a federal judge in Northern California pausing part of the government’s supply chain risk determination several days before it was temporarily upheld by the D.C. Circuit Court of Appeals.

Bullock said federal agencies might have had an even tougher time getting their hands on Mythos had the government won both its cases against Anthropic. He suggested that federal agencies “would not have been allowed” to test the company’s model if the California judge had ruled in the government’s favor.

But Trump’s attacks on Anthropic have made deeper collaboration with the company more challenging as it rolls out Mythos, according to one former senior national security official.

In a February social media post, the president called the people running the company “Leftwing nut jobs” because of the company’s stance against the use of its tool for mass domestic surveillance of U.S. citizens or in lethal autonomous attacks.

The same official said the administration’s public statements have had a “chilling effect” that discouraged federal agencies from engaging openly with Anthropic or potentially using its model to find and fix cybersecurity vulnerabilities in their networks. That type of work, the official said, requires large teams of software engineers and significant investments that could run afoul of Trump’s directives.

Still, there are signs that the Trump administration knows it can’t completely ignore the novel AI model’s potential impact on national security — for better or worse.

The four people familiar with the testing said cybersecurity experts at CAISI — a sub-agency of the National Institute of Standards and Technology created in 2024 and rebranded last year by the Trump administration — have been evaluating Mythos’ hacking chops since before Anthropic’s announcement. The officials said researchers at CAISI are currently “red teaming” Mythos to assess its capabilities and possible risks to national security.

NIST did not respond to a request for comment about its work with Anthropic. In a statement, an Anthropic official confirmed the company had made Mythos available for “the government's own testing and evaluation of the technology.”

In an initial directive posted to X, Hegseth gave DOD officials six months to continue using Anthropic models to complete a “seamless transition to a better and more patriotic service.” While it is possible that U.S. national security agencies could have received special exemptions to partner with Anthropic in classified settings, a senior CIA official recently echoed the White House’s criticism of the firm.

The CIA will “not let private companies dictate how and when the CIA will make lawful use of their technologies,” Deputy CIA Director Michael Ellis said in a speech last Friday.

Anthropic has projected that other models with equivalent hacking capabilities will be widely available within the next two years, potentially enabling an avalanche of new cyber attacks — a prospect that some former national security officials are deeply concerned about.

“I would certainly hope that the current tensions between the Pentagon and Anthropic don’t get in the way of something critically important to cyber security,” said Glen Gerstell, a former general counsel at the National Security Agency.

Dana Nickel contributed reporting.