A federal judge in San Francisco said Tuesday that the government’s ban on Anthropic looks like punishment for the AI company’s public clash with the Pentagon over possible military uses of its Claude model. U.S. District Judge Rita F. Lin made the remark at the start of a hearing on Anthropic’s request for a preliminary injunction in a lawsuit challenging the Pentagon’s designation of the company as a supply chain risk — a label that has effectively shut Anthropic out of government contracting.
Judge Lin said the move “looks like an attempt to cripple Anthropic,” expressing concern the government might be retaliating against the company for speaking out about its safety policies. She said she expected to rule in the coming days on whether to temporarily block the designation while the litigation proceeds.
The hearing in the Northern District of California is the latest development in a dispute with broad implications for how the government regulates and uses AI. In late February, Anthropic CEO Dario Amodei announced a policy forbidding Claude’s use for autonomous weapons and for surveillance of U.S. citizens. President Trump then directed federal agencies to stop using Anthropic’s products.
Earlier this month the Pentagon labeled Anthropic a supply chain risk, citing national security concerns. That designation is typically applied to entities seen as foreign adversaries or otherwise capable of harming U.S. interests. Anthropic has filed two federal suits — one in the Northern District of California and another in the federal appeals court in Washington, D.C. — arguing the designation is illegal retaliation for its public stance on AI safety. The company says the label will cost it customers and revenue by barring Pentagon contractors from working with it, and that the government exceeded its legal authority and violated Anthropic’s First Amendment rights.
At Tuesday’s hearing, Anthropic’s lawyers noted this appears to be the first time the supply chain risk label has been used against a U.S. company. Judge Lin acknowledged the Pentagon’s authority to choose which AI tools it uses but questioned whether banning Anthropic across agencies, and Defense Secretary Pete Hegseth’s public urging that contractors sever ties, crossed a legal line.
Lin described the government’s actions as troubling because they were not narrowly tailored to the stated security concerns — concerns she suggested could be addressed simply by the Pentagon stopping its own use of Claude — and instead resembled punishment.
The government’s attorney countered that the moves were not retaliatory but reflected a substantive disagreement about acceptable uses of the model, and noted Anthropic could theoretically modify Claude in ways that pose future risks.
Anthropic did not immediately respond to a request for comment. A Pentagon spokesperson declined to comment on ongoing litigation.