President Trump ordered every federal agency to stop using Anthropic’s AI products and the Pentagon moved to designate the company a supply-chain risk to national security, sharply escalating a dispute over military use of artificial intelligence. Hours after the president’s announcement, rival OpenAI said it had reached a deal with the Defense Department to provide its AI technology for classified networks.
The fight centers on whether Anthropic could attach conditions to military use of its model, Claude, prohibiting domestic mass surveillance of Americans and fully autonomous weapon systems, as part of a contract worth up to $200 million. Trump, in a Truth Social post, said Anthropic had made a “DISASTROUS MISTAKE” and directed an immediate cessation of Anthropic technology across the federal government, with a six-month phaseout.
Defense Secretary Pete Hegseth — using the department’s “Department of War” rebranding — said he was labeling Anthropic a supply-chain risk and blacklisting it from working with the U.S. military or its contractors. He posted that Anthropic would be allowed to provide services for no more than six months to permit a transition, and that no contractor doing business with the U.S. military may conduct commercial activity with Anthropic. Anthropic said it would challenge the designation in court, arguing the secretary lacks authority to bar contractors from using Claude outside Department of War contracts.
Anthropic said it tried in good faith to negotiate with the Pentagon and supports lawful uses of AI for national security, with two narrow exceptions: mass domestic surveillance and fully autonomous weapons. The company argued frontier AI models are not reliable enough for fully autonomous weapons and said mass domestic surveillance would violate fundamental rights.
OpenAI CEO Sam Altman, who previously raised similar concerns, said his agreement with the Defense Department includes safeguards like those Anthropic sought. Altman said prohibitions on domestic mass surveillance and a requirement for human responsibility for the use of force are reflected in law and policy and were incorporated into OpenAI’s deal.
The Pentagon had set a deadline for Anthropic to drop restrictions by 5:01 p.m. ET, warning it could invoke the Korean War–era Defense Production Act to compel compliance and threatening to label Anthropic a supply-chain risk. Hegseth accused Anthropic of trying to seize veto power over military operations and vowed the department must have “full, unrestricted access” to models for every lawful purpose in defense of the republic.
Anthropic CEO Dario Amodei said the company understands that the military, not private companies, makes operational decisions and that it has not tried to block specific operations. The company’s stance, he said, is aimed at uses that are “outside the bounds of what today’s technology can safely and reliably do.”
Pentagon undersecretary for research and engineering Emil Michael publicly attacked Amodei on X, accusing him of lying and having a “God-complex,” and said federal law and Pentagon policies already bar AI for domestic mass surveillance and autonomous weapons. Michael said the department will adhere to the law and not bend to tech companies’ whims.
OpenAI, Google and Elon Musk’s xAI also have Defense Department contracts and have agreed to permit their tools’ use for any “lawful” purposes; xAI was recently approved for use in classified settings, following Anthropic. Altman told staff he was negotiating to deploy OpenAI’s models in classified systems with explicit exclusions preventing domestic surveillance and autonomous weapons use without human approval.
Independent experts called the standoff unusual for Pentagon contracting, noting contractors typically do not negotiate use cases for government systems. Jerry McGinn of the Center for Strategic and International Studies said the public fight reflects the novel and untested nature of AI.
The ban comes as Anthropic, valued at roughly $380 billion, plans an initial public offering this year. While the Pentagon contract would be a small share of Anthropic’s reported $14 billion in revenue, the dispute could influence investor sentiment and other licensing deals. Anthropic maintains its valuation and revenue have grown despite taking a stand with the administration over battlefield AI use.
NPR’s Bobby Allyn contributed to this report.