On 24 March 2026, a US federal judge considered Anthropic's request for a temporary injunction to halt the Pentagon's decision to blacklist the company and remove it from a previously agreed contract with the US Department of War (DOW). The case, Anthropic PBC v. US Department of War, centres on whether the government acted lawfully in designating the company a “supply chain risk” (a label never before applied to an American company) and whether Anthropic's removal from defence systems followed proper procedure.

Reuters reported that US District Judge Rita Lin in San Francisco said the action “looks like an attempt to cripple Anthropic” and that the government appeared to be “punishing Anthropic for trying to bring public scrutiny to this contract dispute.” The DOW defended its decision as a matter of national security, arguing that it has broad discretion over which entities participate in sensitive defence ecosystems.

But behind the legal filings lies a deeper and more consequential disagreement: who gets to decide how the most powerful AI systems are used in warfare? The answer has implications far beyond the Pentagon.


Editor’s note: This article draws on Anthropic’s public statement and court filings, and on a detailed account given by Under Secretary of War Emil Michael on the All-In Podcast (Episode 263, March 2026). Michael’s claims about private meetings and communications with Anthropic have not been independently verified. Anthropic was not contacted for comment on Michael’s specific claims ahead of publication. AI-360 welcomes a response from Anthropic or any other party.


What Happened: The DOW’s Account

The dispute has its roots in a contract negotiation that began in mid-2025. Emil Michael, the Under Secretary of War for Research and Engineering, has given a detailed public account of the DOW's perspective on the All-In Podcast. According to Michael, when he took over the department's AI portfolio in August 2025, he reviewed Anthropic's existing contracts and found extensive use restrictions: prohibitions on using the company's models to plan kinetic strikes, reposition satellites, conduct certain war-gaming scenarios, or support other operational applications.

Michael described a three-month negotiation in which he would present specific military scenarios — such as using AI to help intercept a Chinese hypersonic missile within a 90-second decision window — and Anthropic would grant individual exceptions. But the DOW wanted a blanket standard permitting “all lawful use,” which Anthropic would not accept. “The exceptions approach doesn’t work,” Michael said on the podcast. “I can’t predict for the next 20 years what all the things we might use AI for.”

According to Michael, during a meeting with senior Pentagon officials attended by approximately 20 people, Anthropic CEO Dario Amodei suggested that Michael call him if a new exception was needed. Michael characterised this as untenable: “What if the balloon’s going up at that moment and it’s a decisive action we have to take? I’m not going to call you to do something. It’s not rational.” Anthropic’s account of this meeting has not been made public.

Michael said the situation escalated after the US military operation in Venezuela. He alleged that an Anthropic executive contacted Palantir — the prime contractor through which the DOW accessed Anthropic’s models — to ask whether the company’s software had been used in the raid. Michael characterised this as an attempt to obtain classified information and said it implied that Anthropic might retroactively enforce its terms of service against the department. He said this raised alarm that access to the AI system could be withdrawn or restricted mid-conflict. Anthropic has not publicly commented on this specific allegation.

Michael said he brought his concerns to Secretary of War Pete Hegseth. “That was a moment for the whole leadership at the Pentagon,” he said, “that we’re potentially so dependent on a software provider without another alternative that has the right or ability to shut it off.”

Anthropic’s Position

Anthropic’s account of the dispute, set out in a public statement by Amodei on 26 February 2026 and in subsequent court filings, presents a fundamentally different framing. Amodei began his statement by affirming his commitment to national defence: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” He detailed Anthropic’s history of cooperation with the DOW before drawing a clear line:

“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now.”

The two use cases are mass domestic surveillance and fully autonomous weapons. On autonomous weapons, Amodei argued that the technology is not yet reliable enough for life-and-death decisions without human oversight. On surveillance, Anthropic contended that AI’s capabilities go qualitatively beyond existing monitoring tools and that permitting their use for bulk data collection on citizens would be incompatible with democratic values.

Anthropic's statement alleged that the DOW had threatened to designate the company a supply chain risk and to invoke the Defense Production Act to force the removal of safeguards. In its court filings, Anthropic argues the blacklisting was imposed in “retaliation” for expressing its core values.

The DOW’s Rebuttal

Michael has directly challenged Anthropic’s framing of the dispute as being principally about autonomous weapons and mass surveillance. On the All-In Podcast, he said the DOW is “not even close” to deploying fully autonomous weapons in high-risk scenarios, and that current work focuses on basic drone autonomy and collaborative aircraft systems. He described the autonomous weapons debate as premature, pointing to the Golden Dome missile defence programme as a case where AI-assisted decision-making in a low-risk environment — intercepting threats in space — should be uncontroversial.

On surveillance, Michael said the actual point of contention was far more mundane than Anthropic’s public framing suggested. He described a dispute over whether the DOW could use Anthropic’s tools to aggregate publicly available information — such as looking up someone’s LinkedIn profile — noting that the department is not legally permitted to conduct domestic surveillance and that it also runs schools, hospitals, and recruitment operations alongside combat functions.

Michael accused Anthropic of turning the negotiation into what he called a “PR game,” choosing the two issues (autonomous weapons and mass surveillance) that would be most alarming to the public. “We're the Department of War. We're not the FBI. We're not Homeland Security,” he said. Anthropic's court filings, for their part, frame these same issues as genuine safety concerns central to the dispute, not as a communications strategy.

On the supply chain risk designation itself, Michael said it was protective rather than punitive. His concern extended beyond the DOW’s own use of Anthropic’s models to the broader defence ecosystem: he did not want contractors such as Lockheed Martin or Boeing using a model with what he described as embedded policy biases to design weapons systems. “Boeing wants to use Anthropic to build commercial jets? Have at it,” he said. “Boeing wants to use it to build fighter jets? I can’t have that.”

He also justified the designation by citing hypothetical scenarios he considered unacceptable risks: a rogue developer interfering with the model, the system producing unreliable outputs at a critical moment, or the model refusing instructions during an operation. He noted that Anthropic held the control plane for its model in AWS GovCloud, meaning the company retained the ability to modify model weights even inside the government environment. There is no public evidence that any of these scenarios has occurred; Michael raised them as risks inherent in the DOW's dependency on a single vendor with its own use restrictions.

How the Rest of the Industry Responded

One notable element of the dispute is how other major AI companies have responded. According to Michael, xAI agreed immediately to all lawful use of its Grok models across both classified and unclassified networks. Google was cooperating with the DOW and building out infrastructure to deploy on classified systems. OpenAI's Sam Altman, whom Michael described as “a real patriot,” offered to stand up OpenAI as an alternative provider and even tried to dissuade the DOW from applying the supply chain risk label, arguing it would be bad for the industry as a whole. These accounts reflect Michael's characterisation of his conversations with these companies; none of them has publicly confirmed the details.

Michael said he wants all the major models available for redundancy and does not view Anthropic's offering as uniquely advantaged. He noted that capability gaps between leading models are narrowing, though he acknowledged that Anthropic was ahead on certain products and, critically, ahead in enterprise sales integration with the DOW thanks to its forward-deployed engineering approach.

Political Fallout

Following Anthropic’s public statement, the dispute quickly became political. President Donald Trump ordered every federal agency to immediately cease use of Anthropic’s technology, writing on Truth Social: “We don’t need it, we don’t want it, and will not do business with them again!”

Chamath Palihapitiya, All-In Podcast co-host and venture capitalist, framed the episode as a warning about platform risk, comparing it to the social media deplatforming controversies but “times a thousand.” He argued that reliance on a single AI provider introduces operational risk due to evolving policies, internal philosophies, and usage constraints imposed by the vendor. As AI systems become embedded in core workflows, he suggested organisations may need to adopt multi-model strategies, ensure portability across providers, and treat vendor alignment as a key factor in procurement and risk management.

Earlier Signals

In early February 2026, before the dispute became public, Mrinank Sharma, a former Anthropic safety researcher, announced his resignation in a public statement reflecting on risks associated with AI and global instability. He wrote: “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.” He added that during his time at Anthropic he had “repeatedly seen how hard it is to truly let our values govern our actions,” within himself, within the organisation, and in broader society.

Sharma’s statement does not reference the DOW dispute or any specific internal disagreement. It is included here as context for the broader environment at Anthropic in the period leading up to the confrontation, not as evidence of a direct connection.

What This Means for Enterprise AI

For enterprise AI stakeholders, the implications of this case are operational rather than theoretical. Several points stand out.

First, access to government markets — and potentially other regulated sectors — can be contingent on factors that go beyond technical capability or commercial readiness. An AI vendor’s values, governance framework, and willingness to accommodate customer use cases are now procurement variables.

Second, the dispute exposes vendor concentration risk in a new light. If a single AI provider can restrict functionality based on its own policy preferences, any organisation with deep dependency on that provider faces a form of governance risk that is distinct from technical failure or commercial disagreement. Multi-model strategies are no longer just about performance benchmarking; they are about operational resilience.
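
To make that concrete, here is a minimal sketch of what a provider-fallback layer might look like. It is a hypothetical illustration only: the vendor names, the `complete` callables, and the routing logic are invented for this example and do not correspond to any real SDK.

```python
# Hypothetical sketch of a multi-provider fallback layer for operational
# resilience. Vendor names and call signatures are invented placeholders;
# a real deployment would wrap each vendor's actual SDK behind this interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion

def complete_with_fallback(providers: list[Provider], prompt: str) -> str:
    """Try providers in priority order; any failure (outage, policy
    refusal, revoked access) triggers fallback to the next provider."""
    failures: list[str] = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:
            failures.append(f"{provider.name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(failures))

# Stub providers standing in for real clients.
def vendor_a(prompt: str) -> str:
    raise PermissionError("access revoked by vendor policy")

def vendor_b(prompt: str) -> str:
    return f"[vendor-b] response to: {prompt}"

if __name__ == "__main__":
    chain = [Provider("vendor-a", vendor_a), Provider("vendor-b", vendor_b)]
    print(complete_with_fallback(chain, "Summarise the procurement terms."))
```

The point of the pattern is not the few lines of code but the discipline it imposes: prompts, evaluations, and logging sit behind a neutral interface, so losing any single vendor, whether for commercial, policy, or legal reasons, becomes an operational event rather than an architectural one.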

Third, the case highlights an emerging layer of AI governance shaped by national security considerations. For enterprises deploying advanced AI systems, this introduces variables including vendor exposure to government action, the stability of access to critical models, and the extent to which external oversight may constrain system availability. These factors sit alongside more familiar concerns such as cost, performance, and scalability, but are increasingly material to long-term deployment strategy.

Finally, the fundamental question the case raises remains unresolved: who decides the boundaries of AI use in high-stakes contexts? The DOW argues that this is the province of government and the law. Anthropic argues that AI companies have a responsibility to set limits where the law has not yet caught up. How courts, regulators, and the market resolve that tension will shape the enterprise AI landscape for years to come.


