In a federal courtroom in San Francisco, Judge Rita F. Lin asked a question that most court cases manage to avoid: was the US government attempting to destroy an American business? It wasn’t a rhetorical question. It was aimed at attorneys for the Department of Defense in a lawsuit brought by Anthropic, one of the best-funded AI companies in the world, after the Pentagon designated the company a national-security supply-chain risk for refusing to remove safety limitations from its AI models.
It’s worth pausing on that sequence of events, because it is stranger than almost anything else that happened in American technology policy last year. Anthropic’s entire public identity is built on AI safety: the idea that AI systems should have safeguards, and that those safeguards matter. The company was penalized by its own government for upholding exactly those principles. The Pentagon wanted Anthropic’s Claude models cleared for use in domestic surveillance and autonomous weapons systems. Anthropic declined.
In response, Defense Secretary Pete Hegseth gave the company a formal supply-chain risk designation, a label that had previously been applied only to foreign adversaries and compromised foreign-made hardware. President Trump, posting on social media, ordered the entire federal government to stop using Claude. The legal fight that followed is still unfolding.
| Category | Details |
|---|---|
| Company | Anthropic — AI safety company, maker of the Claude family of AI models; structured as a Public Benefit Corporation |
| Founded | 2021, by Dario Amodei, Daniela Amodei, and colleagues departing OpenAI |
| Headquarters | San Francisco, California |
| Valuation | $380 billion (Series G funding round closed shortly before the blacklist announcement) |
| Funding Raised | $30 billion Series G round — demonstrating strong investor confidence entering 2026 |
| Annual Recurring Revenue (ARR) | Approximately $14 billion total; public sector ARR had quadrupled in recent months before the designation |
| Pentagon Contract at Issue | $200 million Department of Defense contract — approximately 1.4% of total ARR |
| Projected 2026 Revenue at Risk | Several billion dollars, per Anthropic CFO warning; up to all of projected public sector ARR (~$500M+) potentially eliminated |
| Pentagon Designation | “Supply-chain risk” label — first time such a designation has been applied to a major U.S. technology company |
| Reason for Designation | Anthropic refused to remove guardrails preventing use of Claude for autonomous weapons systems and domestic surveillance |
| Legal Status | U.S. District Judge Rita F. Lin issued a preliminary injunction temporarily blocking the Pentagon ban; the Ninth Circuit Court of Appeals declined to grant a stay, so the blacklist remains in effect for defense contractors while the case proceeds |
| Next Legal Milestone | Oral arguments scheduled for May 2026 at the Ninth Circuit Court of Appeals |
| Microsoft’s Role | Azure is now the only cloud platform offering both Claude and GPT frontier models; Anthropic embedded in Microsoft Foundry on Azure |
| Key Quote | Federal Judge Lin questioned whether the Pentagon was trying to “cripple” Anthropic — noting supply-chain risk designations are typically reserved for foreign threats |
Even for a company that recently closed a $30 billion funding round at a $380 billion valuation, the financial exposure is serious. Anthropic’s CFO has warned that the blacklist could cost the company billions of dollars in 2026 revenue. The $200 million Pentagon contract directly at issue accounts for only about 1.4% of the company’s roughly $14 billion in annual recurring revenue; the real damage comes from systemic dynamics.
The reputational blast radius of a Department of Defense supply-chain risk designation extends far beyond the Pentagon’s own procurement choices. Federal agencies, defense contractors, and enterprise buyers with government ties all begin reevaluating their exposure. At least one partner has already moved a government-adjacent project from Claude to a rival model. The real financial harm comes when that kind of quiet defection multiplies across enough contracts. Before the designation, Anthropic’s public sector ARR had quadrupled in a matter of months. That growth has now stalled.
The divided legal picture reflects genuine uncertainty about the outcome. A preliminary injunction issued by a federal judge temporarily blocked the Pentagon ban, so federal agencies outside direct defense contracts may continue using Claude while the case is pending.
In the same hearing where she asked whether the government was attempting to cripple Anthropic, Judge Lin hinted that the designation might be retaliation for the company’s public advocacy on AI safety, which would raise First Amendment issues that significantly complicate the government’s position. However, the Ninth Circuit Court of Appeals declined to grant a stay, so defense contractors and Pentagon-adjacent procurement remain subject to the blacklist. With no clear winner yet, the case moves on to oral arguments in May.

The wider ramifications for the AI sector are harder to ignore than the outcome of any particular case. A supply-chain risk designation is a tool built for threats from foreign adversaries, not for domestic firms with safety policies, and this is the first time a major U.S. technology company has received one. If the government’s position is upheld, the precedent is unsettling: AI firms that maintain usage restrictions on their models could be excluded from federal work without the procedural protections that normally accompany suspension or debarment. The implicit message to every other AI company selling to the government is hard to miss, and those companies will adjust their own contract negotiations accordingly.
Microsoft’s deepening partnership with Anthropic complicates the story further. Microsoft has been actively incorporating Claude into its Foundry enterprise stack, and Azure is currently the only cloud platform offering both Claude and GPT frontier models. Anthropic is therefore no longer just an independent AI firm defending a single government contract; it is a company woven into the infrastructure of large-scale enterprise AI deployment. A government attempt to freeze it out ripples into Microsoft’s positioning and the broader cloud AI ecosystem. Trace the supply chain forward and what first looks like a targeted procurement dispute starts to resemble something much larger.
As this unfolds, the case has exposed a tension that has always existed but is rarely this visible: the gap between how AI companies talk about safety as a value and what happens when the world’s most powerful customer asks them to set that value aside. Anthropic declined. The government responded with the most powerful commercial weapon at its disposal. Whether the courts ultimately find that response reasonable, lawful, and not retaliatory will shape Anthropic’s revenue forecasts, and the terms on which every American AI company negotiates with the federal government from now on.
