March 9, 2026
Anthropic filed two simultaneous lawsuits against the Department of Defense on March 9: one in the U.S. District Court for the Northern District of California and one in the D.C. Circuit Court of Appeals. The dual filing attacks the Pentagon's supply chain risk designation on both constitutional and administrative-law grounds. The company is seeking an emergency injunction to suspend the designation while the litigation proceeds.
The complaint asserts five distinct legal claims. First, First Amendment retaliation: Anthropic argues the government blacklisted the company in response to its public statements about what its models can and cannot safely do, which the complaint frames as protected speech. Second, Fifth Amendment due process: contracts were cancelled and future business barred without prior notice or an opportunity to respond. Third, APA arbitrary and capricious: agency action must rest on reasoned decision-making with a factual basis; the complaint cites Defense Secretary Pete Hegseth's public characterization of Anthropic's CEO as evidence the designation was ideological rather than security-based. Fourth, APA exceeds statutory authority: the supply chain risk statute was written to target foreign adversaries, not American companies. Fifth, APA procedural violations: the government bypassed standard notification and response procedures.
Legal precedent cited in the complaint includes Luokung Technology Corp. v. Department of Defense (2021) and the companion Xiaomi case, in which courts struck down nearly identical Pentagon designations as arbitrary and capricious under the APA for lack of notice, explanation, or opportunity to respond.
The financial exposure is significant on two levels. Anthropic’s direct Pentagon contract was valued at up to $200 million. Beyond that, the designation legally requires defense contractors — including Lockheed Martin, which has indicated it will comply — to remove Anthropic’s models from their supply chains, which Anthropic estimates creates hundreds of millions in additional near-term contract risk.
Since the designation took effect, Claude downloads have increased 55% week over week. The app reached the number one position on the U.S. App Store, surpassing ChatGPT — a position it had not previously held. More than one million new users are signing up daily, paid subscriptions have doubled, daily active users have more than tripled since January, and annualized revenue has climbed from approximately $14 billion to $19 billion.
A parallel OpenAI storyline runs alongside the litigation. On the night Anthropic was blacklisted, OpenAI announced a Pentagon deal permitting its models for all lawful purposes. Subsequent reporting revealed that OpenAI's deal contains guardrails substantively similar to the two limits Anthropic declined to remove: no mass domestic surveillance of Americans, and human oversight required for any use of force involving autonomous weapons systems. The distinction appears to be that OpenAI drew those lines without public confrontation.
At the core of the case is a question without precedent: whether the federal government can compel a private company to remove safety constraints from its own product as a condition of government business. Anthropic's position is that its guardrails are a responsible-by-design architecture, with ethical and legal constraints built into how the technology functions, not toggles that can be switched off.