A U.S. federal court is closely examining the government’s decision to classify AI company Anthropic as a national security risk, raising concerns over whether the move was justified.
U.S. District Judge Rita Lin questioned the Pentagon’s rationale during a 90-minute hearing in San Francisco. The case stems from the Trump administration’s decision to label Anthropic a supply chain risk after the company resisted the use of its AI in fully autonomous weapons and domestic surveillance.
“What is troubling to me about these actions is they don’t seem to be tailored to the national security concerns,” Lin said during the hearing.
Anthropic has challenged the designation, calling it part of an “unlawful campaign of retaliation.” The company has filed a lawsuit seeking an emergency order to remove the label, along with a separate case in a federal appeals court in Washington, D.C.
While the judge expressed concerns about how the administration handled the matter, she issued no immediate ruling. Both sides have been asked to submit additional evidence, with a decision expected before the end of the week.
At the center of the dispute is a broader debate over how AI technologies should be used in military and surveillance contexts. “It’s a fascinating public policy debate, but it’s not my role to decide who is right in that debate,” Lin noted, clarifying that her focus is on whether the government acted improperly.
Anthropic also pointed to reputational damage following a February 27 statement by Trump, which criticized the company and ordered federal agencies to stop using its technology, including its Claude chatbot. The Pentagon was given six months to phase out Anthropic’s tools, which are already integrated into classified systems, including those linked to the Iran war.
Legal counsel for Anthropic, Michael Mongan, argued that the company has suffered “irreparable and mounting injuries” due to the designation. However, Justice Department lawyer Eric Hamilton maintained that the government should have “substantial deference” in identifying security risks, describing Anthropic as an “untrustworthy and unreliable partner” in recent negotiations.
The Defense Department has also stated it will continue to operate independently of influence from technology companies.
The case highlights growing tensions between governments and AI firms over control, ethics, and national security as the technology rapidly evolves.



