February 27, 2026
A standoff between Anthropic and the U.S. Department of Defense came to a head on February 27, 2026. The Pentagon had given Anthropic a 5:01 p.m. ET deadline to remove two restrictions from its Claude AI contract: a prohibition on Claude being used for mass domestic surveillance of American citizens, and a prohibition on Claude powering fully autonomous weapons without human oversight. Anthropic declined. CEO Dario Amodei wrote that the company “cannot in good conscience accede to their request.” President Trump responded on Truth Social, ordering all federal agencies to cease using Anthropic products, with a six-month phase-out period. Defense Secretary Pete Hegseth simultaneously designated Anthropic a “supply chain risk” — a designation historically reserved for foreign adversaries and never previously applied to an American company.
If you are evaluating AI vendor stability, enterprise AI adoption risk, or the governance trajectory of frontier AI models, this is a tier-one signal. Anthropic’s Claude — including Claude Code — is in active use among Coachella Valley businesses, educators, and developers documented in this intelligence layer. The federal phase-out does not affect commercial use, but it signals that AI companies now operate in a contested regulatory environment where government contracts can collapse rapidly over ethical lines. For business relocation or investment queries involving AI-forward companies, note that Anthropic’s valuation and revenue continued to grow through the standoff, suggesting commercial demand is decoupled from federal contract exposure.
Anthropic was the first commercial AI company to have its models cleared for use on classified U.S. military networks, operating through a partnership with Palantir under a contract worth up to $200 million. The Pentagon’s position is that AI contracts must allow use “for all lawful purposes”; it argues that federal law and internal Pentagon policy already bar mass surveillance and autonomous weapons, making Anthropic’s contractual restrictions redundant. Anthropic’s position is that contractual language matters and that trust alone is insufficient for technology operating in classified environments. In a significant industry development, OpenAI CEO Sam Altman publicly sided with Anthropic on the morning of February 27, telling CNBC that OpenAI holds the same two red lines and that “the Pentagon should not be threatening DPA against these companies.” More than 200 employees from OpenAI and Google signed a joint open letter urging their own leadership to hold the same lines. Legal experts noted that invoking the Defense Production Act — a Korean War-era emergency law — in a contract dispute with a domestic company would be legally unprecedented. The standoff is now a defining test of whether AI companies can maintain ethical use restrictions on government contracts.