A U.S. judge has issued a temporary injunction blocking the Pentagon’s blacklisting of Anthropic, the latest development in the company’s contentious battle with the military over AI safety in combat settings. Anthropic filed a lawsuit in a California federal court, alleging that U.S. Secretary of War Pete Hegseth exceeded his authority by classifying Anthropic as a national security supply-chain risk, a designation applied to companies that could expose military systems to infiltration or sabotage by adversaries.
Anthropic argued that the government retaliated against its public stance on AI safety, violating its First Amendment right to free speech, and that the designation was imposed without giving the company an opportunity to challenge it, infringing its Fifth Amendment right to due process. U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a 43-page ruling. The injunction will not take effect immediately, however, leaving the administration a seven-day window to file an appeal.
Hegseth’s unprecedented action, triggered by Anthropic’s opposition to the military’s use of its AI chatbot Claude for surveillance or autonomous weapons, has barred Anthropic from certain military contracts. Anthropic executives have warned that the decision could cause substantial financial losses and damage the company’s reputation.
Anthropic argues that current AI models lack the reliability needed for safe deployment in autonomous weapons and opposes domestic surveillance as a violation of rights. The Pentagon, while asserting that private companies should not impede military operations, says it has no interest in using AI for those purposes and would deploy the technology only within legal boundaries.
In her ruling, Judge Lin questioned the government’s motives, suggesting that the actions taken against Anthropic appeared punitive rather than driven by national security concerns. Anthropic’s spokesperson, Danielle Cohen, welcomed the court’s decision, emphasizing the company’s commitment to collaborating with the government to ensure the safe and beneficial use of AI technology for all Americans.
Anthropic is the first U.S. company to be designated a supply-chain risk under the government-procurement statute, which is aimed at safeguarding military systems from foreign sabotage. The company’s lawsuit, filed on March 9, challenges the legality of the decision, arguing that it lacks factual support and contradicts the military’s previous commendations of Claude.
The Justice Department contends that Anthropic’s refusal to lift the restrictions could introduce uncertainty into Pentagon operations involving Claude, potentially jeopardizing military systems during missions. The government maintains that the designation resulted from Anthropic’s reluctance to accept contractual terms, not from its stance on AI safety.
Anthropic also faces a separate legal battle in Washington over another Pentagon supply-chain risk classification, one that could exclude it from civilian government contracts.
