A federal judge in San Francisco on Thursday issued a preliminary injunction blocking the Pentagon’s designation of Anthropic as a “supply chain risk” and temporarily halted President Trump’s directive ordering federal agencies to stop using the company’s AI model, Claude. The order pauses the government’s actions while the court decides the merits of Anthropic’s legal challenge.
Judge Rita F. Lin of the U.S. District Court for the Northern District of California said the supply chain risk label is typically reserved for foreign intelligence actors and terrorists, not U.S. companies. “These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote, adding that if the concern were operational integrity, the Department of War “could just stop using Claude.” She concluded, “Instead … these measures appear designed to punish Anthropic.”
The dispute began after Anthropic CEO Dario Amodei publicly said the company would not permit Claude to be used for autonomous weapons or for surveilling American citizens. The Pentagon has argued that it is the military’s prerogative to decide how to use the tools it purchases and that Anthropic’s restrictions rendered the company untrustworthy. In response, the administration designated Anthropic a supply chain risk, and President Trump directed all federal agencies to cease use of Claude.
Anthropic filed two federal lawsuits alleging that the designation is illegal retaliation for the company’s stance on AI safety and that it will cause economic harm by barring Pentagon contractors from doing business with Anthropic. The suits also claim the government violated Anthropic’s First Amendment rights.
A range of organizations filed amicus briefs supporting Anthropic, including Microsoft, the ACLU, and groups of retired military leaders. At a recent hearing, Judge Lin signaled skepticism of the government’s position and indicated she was inclined to grant a preliminary injunction, viewing the ban as punishment for Anthropic’s public disagreement with the government.
The Pentagon told the court the designation reflected security concerns, including the theoretical possibility that Anthropic could update Claude in ways that might threaten national security. In her order, however, Lin described the designation as “likely both contrary to law and arbitrary and capricious,” and rejected the idea that the statute supports branding an American company “a potential adversary and saboteur of the U.S.” for disagreeing with the government.
Lin also noted that the Pentagon had previously praised Anthropic and subjected it to rigorous national security vetting. She wrote that only after the company raised public concerns about potential military uses did the Pentagon “announce a plan to cripple Anthropic: to blacklist it from doing business with any company that services the U.S. military, to permanently cut off its ability to work with the federal government, and to brand it an adversary” — conduct she characterized as “classic First Amendment retaliation.”
Anthropic said it was grateful the court moved swiftly and pleased that the judge found the company likely to succeed on the merits. A Pentagon spokesperson declined to comment on ongoing litigation.
Legal and policy observers called the decision significant. Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, said the injunction suggests the judge believes Anthropic is likely to prevail on its legal claims. She added that the case raises broader questions about protecting companies from retaliation for exercising First Amendment rights and ensuring adequate due process when government actions could cripple a business.
