Anthropic and Pentagon head to court as AI firm seeks end to 'stigmatizing' supply chain risk label
Artificial intelligence company Anthropic is asking a federal judge on Tuesday to temporarily halt the Pentagon’s “unprecedented and stigmatizing” designation of the company as a supply chain risk
SAN FRANCISCO (AP) — Artificial intelligence company Anthropic is asking a federal judge on Tuesday to temporarily halt the Pentagon's “unprecedented and stigmatizing” designation of the company as a supply chain risk.
A hearing scheduled for Tuesday in a California federal court marks a critical step in the feud between Anthropic and the Trump administration over how the company's AI technology could be used in war.
Anthropic sued earlier this month to stop the Trump administration from enforcing what the company calls an “unlawful campaign of retaliation” over its refusal to allow unrestricted military use of its technology.
The company is asking U.S. District Judge Rita Lin for an emergency order that would temporarily reverse the Pentagon’s decision to designate the AI company a “supply chain risk." Anthropic also seeks to undo President Donald Trump’s order directing all federal employees, not just those in the military, to stop using its AI chatbot Claude.
Lin is presiding over the case in federal court in San Francisco, where Anthropic is headquartered. The AI firm has also filed a separate and more narrow case in the federal appeals court in Washington, D.C.
Lin sent both sides a number of questions she wants them to answer at Tuesday's hearing, including about discrepancies between Defense Secretary Pete Hegseth's formal directive declaring Anthropic a potential threat to national security, and what he posted about it on social media.
Anthropic has said it sought to restrict its technology from being used for mass surveillance of Americans. Hegseth and other high-ranking officials publicly insisted the company must accept “all lawful” uses of Claude, threatened punishment if Anthropic did not comply and condemned the firm and its CEO Dario Amodei on social media.
When Amodei refused to bend, Trump announced on Feb. 27 he was immediately ordering all federal employees to stop using Anthropic, calling it a “radical left, woke company” that was putting troops at risk. He gave a longer period of six months for the Pentagon to phase out Anthropic's technology, which is already embedded in classified military platforms including those used in the Iran war.
Anthropic's lawsuit alleges the government's actions violated the First Amendment and the company's due process rights.
“Put simply, the Executive Branch is leveraging its powers to punish a major American company for the sin of expressing its views on a matter of profound public significance,” it said in a legal filing last week.
Department of Justice lawyers countered in their own court filing last week that the Trump administration's actions targeted Anthropic's commercial conduct, not its free speech rights.
They argued that Anthropic’s behavior during contract negotiations caused the Pentagon to “question whether Anthropic represented a trusted partner” and if its continued access to war-fighting operations introduced an “unacceptable risk to national security.”
“After all, AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing war-fighting operations, if Anthropic — in its discretion — feels that its corporate ‘red lines’ are being crossed,” said the Trump administration filing.
Included in the Trump administration's court filings is an undated memorandum from U.S. Defense Undersecretary Emil Michael, the Pentagon’s chief technology officer and a former Uber executive.
But it's not clear when Michael wrote the memo that expands on the Pentagon's rationale in labeling Anthropic's products as risky. Lin is asking the Pentagon for more details on its timing.
——
O'Brien reported from Providence, Rhode Island.