A U.S. judge has temporarily blocked the Pentagon’s decision to blacklist AI company Anthropic, marking a significant development in an ongoing dispute over the use of artificial intelligence in military operations.
The ruling, issued on March 26, 2026, halts the move by Defense Secretary Pete Hegseth, who had labeled Anthropic a national security supply-chain risk. The designation followed the company’s refusal to allow its AI chatbot Claude to be used for U.S. surveillance or autonomous weapons programs, effectively cutting it off from certain military contracts.
Anthropic challenged the decision in a California federal court, arguing that the designation exceeded the Secretary’s authority. The company also claimed the government violated its First Amendment rights by retaliating against its stance on AI safety and its Fifth Amendment rights by not allowing it to contest the designation.
U.S. District Judge Rita Lin sided with Anthropic in a 43-page ruling, stating that the government’s actions appeared punitive rather than tied to national security concerns. However, Lin stayed the order for seven days to give the administration time to appeal.
“The record supports an inference that Anthropic is being punished for criticising the government’s contracting position in the press,” Lin wrote.
“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” the judge added.
Anthropic has maintained that current AI models are not reliable enough for use in autonomous weapons and has opposed domestic surveillance on rights grounds. The Pentagon, however, argued that private companies should not restrict military capabilities.
The blacklisting could have cost Anthropic billions in lost business and reputational damage, according to company executives. The case also marks the first time a U.S. company has been publicly labeled a supply-chain risk under a government procurement law designed to prevent foreign interference in military systems.
An Anthropic spokesperson said the company welcomed the decision.
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the statement said.
The Justice Department argued that Anthropic’s refusal to adjust its terms could create uncertainty in military use of Claude and risk operational disruptions. The government said the designation was based on contractual disagreements, not the company’s views.
Anthropic has also filed a second lawsuit in Washington, D.C., challenging a separate designation that could impact its eligibility for civilian government contracts.