![Anthropic alleged the ban violates its freedom of speech and due process rights [File: Dado Ruvic/Reuters]](https://i0.wp.com/www.aljazeera.com/wp-content/uploads/2026/03/2026-03-19T104842Z_2052602485_RC2EWJAU69SB_RTRMADP_3_USA-PENTAGON-ANTHROPIC-1774365861.jpg?ssl=1)
San Francisco, United States: A California judge has set the stage for a potential victory for Anthropic in its push for regulation of weapons powered by artificial intelligence, a setback for the administration of United States President Donald Trump that brings the company a step closer to preserving billions of dollars in government contracts.
The Trump administration had designated Anthropic a “supply chain risk” for its stance on increased regulation, a move that would block the company from certain military contracts.
The United States Department of Defense may be illegally trying to punish Anthropic for attempting to restrict the use of its artificial intelligence (AI) models for weapons without human supervision or for mass surveillance, a district judge has said.
“It looks like an attempt to cripple Anthropic,” Judge Rita Lin of the US District Court for the Northern District of California said on Tuesday.
Legal analysts say this could set the stage for Anthropic to be granted a preliminary injunction against its designation as a supply chain risk by the Defense Department.
“Their stated objectives are not completely backed by the Department of War,” Charlie Bullock, senior research fellow at the Institute for Law and AI, a Boston-based think tank, said about the Defense Department’s designation of Anthropic as a supply chain risk.
This is the first time a US company has received such a designation, which would entail the cancellation of its government contracts as well as its contracts with government contractors.
On March 17, the Defense Department told the court that Anthropic’s stance that its products not be used for AI-powered weapons without human oversight or for domestic surveillance would undercut its “ability to control its own lawful operations”.
Anthropic’s lawsuit to overturn the designation has come to be about the extent of AI’s capabilities, how they could shape life and whether they will be regulated.
“This case is a kind of moment to reflect on what kind of relations we want between the government and companies and what rights citizens have,” said Robert Trager, co-director of Oxford University’s Oxford Martin AI Governance Initiative.
Alison Taylor, clinical associate professor of business and society at New York University’s Stern School of Business, said, “In the US, technology is moving ahead like a freight train and any idea of human oversight is getting harder. But people are concerned about AI-related job losses, data centres, surveillance and weapons. This has meant public opinion is shifting away from AI.”
Over the last two weeks, a range of tech companies, think tanks and legal groups filed court briefs in support of Anthropic’s stance, asking for oversight and regulation of AI for weapons and mass surveillance. That support ranges from Microsoft and employees of Anthropic’s competitors OpenAI and Google, to Catholic moral theologians and ethicists, among others.
In their brief, engineers from OpenAI and Google DeepMind, filing in their personal capacities, said the case is of “seismic importance for our industry” and that regulation is crucial since AI models’ “chain of reasoning is often hidden from their operators, and their internal workings are opaque even to their developers. And the decisions they make in lethal contexts are irreversible.”
Against the backdrop of such concerns, NYU’s Taylor said, “Anthropic is making a risky but good bet that positioning itself as an ethical AI company will give it a hand in shaping regulation when it does happen.”