President Trump ordered all federal agencies to stop using Anthropic’s AI products and the Pentagon moved to label the company a supply‑chain risk, sharply escalating a dispute over military use of artificial intelligence. Hours after the announcement, OpenAI said it had reached an agreement with the Defense Department to make its technology available for classified networks.
The clash centers on Anthropic’s effort to limit how its model, Claude, could be used under a potential military contract worth up to $200 million. Anthropic sought to prohibit domestic mass surveillance of Americans and fully autonomous weapons — restrictions it said were necessary because frontier AI is not yet reliable enough for those uses. Trump, posting on Truth Social, called the company’s stance a “DISASTROUS MISTAKE” and ordered federal agencies to stop using Anthropic technology, with a six‑month phaseout period.
Defense Secretary Pete Hegseth, using the department’s rebranded name, the “Department of War,” said he was designating Anthropic a supply‑chain risk and blacklisting the company from working with the U.S. military or its contractors. Hegseth said Anthropic could provide services for no more than six months to allow a transition and barred any contractor doing business with the military from conducting commercial activity with Anthropic. Anthropic said it would challenge that designation in court, arguing the secretary lacks the authority to prevent contractors from using Claude outside Department of War contracts.
Anthropic says it negotiated in good faith and supports lawful national security uses of AI, but it draws two narrow lines: no domestic mass surveillance and no fully autonomous weapons without human control. CEO Dario Amodei has said the company does not seek to block specific military operations and that its limits are aimed at uses that exceed what today’s technology can safely and reliably do.
OpenAI CEO Sam Altman, who has voiced related concerns in the past, said OpenAI’s deal with the Defense Department includes safeguards similar to those Anthropic sought. Altman said prohibitions on domestic mass surveillance and a requirement that humans remain responsible for use of force are reflected in law and policy and were incorporated into OpenAI’s agreement.
The Pentagon gave Anthropic a deadline to remove the restrictions by 5:01 p.m. ET and warned it could invoke the Korean War–era Defense Production Act to compel compliance, while also threatening the supply‑chain risk designation. Hegseth accused Anthropic of attempting to exert veto power over military operations and said the department must have “full, unrestricted access” to models for every lawful defense purpose.
Pentagon undersecretary for research and engineering Emil Michael attacked Amodei on X, accusing him of dishonesty and a “God‑complex,” and said federal law and Pentagon policy already bar AI use for domestic mass surveillance and autonomous weapons. Michael said the department will follow the law and not yield to tech companies’ preferences.
Other major AI firms, including OpenAI, Google and Elon Musk’s xAI, also hold Defense Department contracts and have agreed to permit their tools’ use for any “lawful” purposes; xAI was recently approved for classified settings after Anthropic. Altman told OpenAI staff he was negotiating deployment of the company’s models in classified systems with explicit exclusions preventing domestic surveillance and autonomous weapons without human authorization.
Independent experts called the public standoff unusual for Pentagon contracting, noting that vendors typically do not negotiate specific use cases for government systems. Jerry McGinn of the Center for Strategic and International Studies said the dispute highlights the novel and untested nature of AI in military contexts.
The move comes as Anthropic, which is reportedly valued at roughly $380 billion, plans an initial public offering this year. While the Pentagon contract would represent a small portion of Anthropic’s reported $14 billion in revenue, the dispute could affect investor sentiment and other licensing arrangements. Anthropic maintains its valuation and revenue have continued to grow even as it took a public stance on battlefield AI use.
NPR’s Bobby Allyn contributed to this report.