The Pentagon and Anthropic, a leading artificial intelligence company, are headed for a confrontation after Anthropic's CEO rejected the Defense Department's ultimatum: loosen safety restrictions on the company's AI model or be blacklisted from military work.
The dispute could affect hundreds of millions of dollars in contracts and access to some of the most advanced AI tools. Here's what to know.
Why they’re at odds
Anthropic CEO Dario Amodei has repeatedly said the company’s AI model, Claude, must not be used for domestic mass surveillance or to power fully autonomous weapons that can kill without human approval. He has described those applications as “bright red lines” and “entirely illegitimate.”
The Pentagon says it does not plan to use Anthropic’s tools for mass surveillance or autonomous weapons. But officials insist contractors shouldn’t unilaterally decide permissible uses and argue that companies must allow the U.S. government to employ their technology “for all lawful purposes.” A senior Pentagon official told NPR that assessing legality is the Pentagon’s responsibility as the end user.
Amodei’s response and recent exchanges
On Thursday, Amodei said Anthropic could not accept the Pentagon's latest contract changes. He emphasized that the company supports using AI to defend democracies and defeat autocratic adversaries, and said Anthropic has not tried to block specific military operations. But he argued that a narrow set of uses, namely domestic mass surveillance and fully autonomous weapons, is beyond what today's technology can safely handle and should not be permitted.
Relations have soured. At a recent meeting between Defense Secretary Pete Hegseth and Amodei, Hegseth reportedly threatened consequences if Anthropic did not comply. One person familiar with the talks said Hegseth suggested canceling Anthropic’s roughly $200 million Pentagon contract. A Pentagon official said potential repercussions could include forcing Anthropic to let the government use its model against the company’s wishes and effectively blacklisting Anthropic from military contracts.
Anthropic replied that it could not in good conscience accept the Pentagon’s demand but expressed hope the Department would reconsider given the value Anthropic’s technology provides to the armed forces.
A hard deadline
Pentagon spokesman Sean Parnell posted on X that Anthropic had until 5:01 p.m. ET on Friday to comply before the Pentagon would act. Parnell framed the demand as necessary to head off a narrative that the Department was seeking to use AI for surveillance or autonomous weapons, applications he said the Department does not support.
Anthropic said the Pentagon sent new contract language overnight that, in Anthropic's view, "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." The company added that the supposed compromises were paired with legal terms that would allow those safeguards to be disregarded, and it said those narrow protections have been central to months of negotiations. Anthropic said it remains willing to continue discussions and is committed to operational continuity for the Department and U.S. warfighters.
Possible Pentagon actions and legal questions
The Pentagon has warned it could designate Anthropic a "supply chain risk," a label more commonly applied to foreign adversary technology such as Huawei. Geoffrey Gertz, a senior fellow at the Center for a New American Security, said the consequences of that designation are unclear: it could bar other Pentagon contractors from using Anthropic's tools in defense work, or it could prohibit use of Anthropic's tools more broadly, which would be particularly damaging.
Officials have also threatened to invoke the Defense Production Act (DPA) to compel Anthropic to remove guardrails. Gertz called that an extraordinary and rare step; the DPA is typically reserved for genuine emergencies, when the government needs control over certain commercial sectors. Using it to force a private company to change product safeguards would be an unusual application.
Gertz noted the two threats are somewhat contradictory: the Department is simultaneously suggesting Anthropic is too risky to remain in defense systems while also implying the company is so essential it might need to be compelled to stay. If the Pentagon merely cancels the contract, the dispute might end there. But if the Department tries to force guardrail removal or imposes a sweeping supply-chain-risk designation, Anthropic would likely fight back in court.
Wider implications
The disputed contract is worth as much as $200 million, a relatively small slice of what the company reports as $14 billion in revenue. The Pentagon has similar arrangements with other AI companies, including Google, OpenAI and xAI, but Anthropic was the first whose model was cleared for classified use, after being judged secure and capable for sensitive applications.
How the standoff is resolved could shape how private AI companies set and enforce boundaries on military uses of their technology, how much leverage the U.S. government can exert over safety features, and whether legal and policy battles follow. If the Department escalates, experts expect litigation and a broader debate over industry limits on government use of AI.
NPR’s Bobby Allyn contributed reporting to the original story.