A United States court has temporarily halted the Pentagon’s blacklisting of Anthropic, a significant development in the company’s dispute with the military over the safety of artificial intelligence (AI) in combat. In the lawsuit, filed in a California federal court, Anthropic accuses U.S. Secretary of War Pete Hegseth of exceeding his authority by designating the company a national security supply-chain risk without due process. Anthropic alleges violations of its First Amendment right to free speech and its Fifth Amendment right to due process.
District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a 43-page ruling. The judgment will not take immediate effect, however, as the administration has a seven-day window to pursue an appeal. Hegseth blacklisted Anthropic after the company resisted the military’s request to use its AI chatbot, Claude, for surveillance or autonomous weaponry; the designation bars Anthropic from certain military contracts and could expose the company to significant financial losses and reputational damage.
Anthropic argues that AI models are not yet reliable enough for use in autonomous weapons and opposes domestic surveillance as an infringement on individual rights. The Pentagon counters that private entities should not dictate military operations, saying it seeks to use AI technology only within legal boundaries. Judge Lin found that the government’s actions appeared motivated by a desire to penalize Anthropic rather than by genuine national security concerns.
In response to the ruling, Anthropic spokesperson Danielle Cohen welcomed the outcome, emphasizing the company’s commitment to working with the government for the benefit of all Americans. Anthropic’s designation as a supply-chain risk under a government-procurement statute is unprecedented for a U.S. company and has drawn legal challenges over the decision’s validity and its impact on military operations. The Justice Department argues that Anthropic’s stance could introduce uncertainty into Pentagon operations involving Claude, potentially jeopardizing military systems during critical missions.
Anthropic also faces a separate legal battle in Washington over another supply-chain risk designation, one that could exclude it from civilian government contracts. Together, the disputes underscore the unsettled questions surrounding AI technology and its integration into military and government operations.
