Pages from the Anthropic website and the company’s logo are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. Patrick Sison/AP
A federal judge in San Francisco said Tuesday that the government’s ban on Anthropic appeared punitive after the AI company went public with its dispute with the Pentagon over potential military uses of its Claude model.
U.S. District Judge Rita F. Lin made the comment at the start of a hearing on Anthropic’s request for a preliminary injunction in one of its suits challenging the Pentagon’s designation of the company as a “supply chain risk,” a label that has effectively blacklisted it from government contracting.
“It looks like an attempt to cripple Anthropic,” Lin said, expressing concern the government might be punishing the company for criticizing its policy. She said she expected to rule in the coming days on whether to temporarily block the ban while the court resolves the case.
The hearing in the Northern District of California is the latest chapter in a dispute with implications for how the government can regulate and use AI. Anthropic CEO Dario Amodei announced in late February that he would prohibit Claude from being used for autonomous weapons or to surveil U.S. citizens. President Trump then ordered all federal agencies to stop using Anthropic’s products.
Earlier this month the Pentagon labeled Anthropic a supply chain risk, citing national security concerns. That designation is typically reserved for entities viewed as foreign adversaries or otherwise capable of undermining U.S. interests.
Anthropic has filed two federal lawsuits — in the Northern District of California and the D.C. federal appeals court — arguing the designation amounts to illegal retaliation for its public stance on AI safety. The company says the label will cost customers and revenue by barring Pentagon contractors from doing business with it and that the move oversteps the law and violates its First Amendment rights.
At Tuesday’s hearing, Anthropic’s lawyers said this appears to be the first time such a designation has been applied to a U.S. company. Judge Lin acknowledged the Pentagon’s authority to choose the AI products it uses but questioned whether the government violated the law when it banned Anthropic across agencies and when Defense Secretary Pete Hegseth said contractors should sever ties with the company.
Lin called the government’s actions “troubling” because they were not narrowly tailored to address the stated national security concerns — concerns that, she suggested, could be addressed simply by the Pentagon ceasing to use Claude — and instead resembled punishment of Anthropic.
The government’s lawyer countered that the actions were based on a substantive disagreement over acceptable uses of the model, not retaliation for Anthropic’s speech. The government also argued that Anthropic could theoretically update Claude in ways that pose future security risks.
Anthropic did not immediately respond to an email request for comment. A Pentagon spokesperson said the agency does not comment on ongoing litigation.
