Anthropic Defies Pentagon Ultimatum, Refuses Unrestricted Military Access to Claude AI
The AI company Anthropic refuses to grant the US Department of Defense unrestricted access to its AI model Claude.
The Pentagon had given the company an ultimatum expiring today, Friday, and is threatening to invoke the Defense Production Act and to classify Anthropic as a supply chain risk if the company does not lift its usage restrictions.
Two Central Restrictions
Anthropic insists on two specific exceptions regarding military use of its AI model Claude:
- Domestic mass surveillance: The company rejects Claude’s use for comprehensive surveillance of the US population. While Anthropic supports legitimate foreign intelligence operations, it views AI-powered mass surveillance within its own country as incompatible with democratic values.
- Fully autonomous weapons systems: Claude should not be used for weapons systems that select and attack targets without human control. Anthropic argues that today’s AI systems are not yet reliable enough for such applications and that appropriate safeguards are lacking.
CEO Dario Amodei said that the company has never objected to legitimate military operations and fundamentally recognizes the importance of AI for the defense of democratic states. The Claude LLM was reportedly used in the Maduro operation in Venezuela.
Contradictory Threats from the Department of Defense
Defense Secretary Pete Hegseth demands that Anthropic agree to “any lawful use” and remove the mentioned safeguards. The two threatened measures contradict each other:
- Classification as a supply chain risk would prevent the government from using Anthropic’s products
- The Defense Production Act is intended to force the company to provide its model free of charge
A senior Pentagon official stated: “The only reason we’re still talking to these people is that we need them, and we need them now. The problem for these people is that they’re so good.”
Strategic Importance of Claude
Anthropic is currently the only AI company whose model runs on classified military systems. According to the Pentagon, Claude is used for various security-related tasks:
- Intelligence analysis
- Modeling and simulation
- Operations planning
- Cyber operations
Although the Department of Defense has agreements with, or is pursuing agreements with, other providers such as xAI (Grok) and Google (Gemini), Claude is considered technically superior. According to the Department of Defense, removing it from Pentagon systems would be extremely costly.
Possible Use in Venezuela
According to a Wall Street Journal report from February 2026, Claude may have already been used in a US military operation in Venezuela, which involved bombings in Caracas and, according to the Venezuelan Defense Ministry, resulted in 83 deaths. The use apparently took place through Anthropic’s partnership with Palantir Technologies.
This would appear to contradict Anthropic’s own terms of use, which explicitly prohibit the use of Claude for violent purposes, weapons development, and surveillance. An Anthropic spokesperson declined to comment on this specific use.
Legal and Political Dimension
The application of the Defense Production Act to a software company in the context of a dispute over AI usage limits would represent a significant expansion of this law, which was originally created for the production of war-critical goods.
Jessica Tillipman, associate dean at George Washington University Law School, warned: “The Pentagon knows it’s making an extreme threat. The bigger problem is that this dilutes these designations. They’re turning what was conceived as national security instruments into a lever for business purposes.”
Anthropic’s Position
Anthropic has stated that the two requested restrictions have not previously posed an obstacle to military use. The company emphasized that it wishes to continue supporting the Department of Defense, but cannot in good conscience accede to the demands.
Should the Pentagon remove Anthropic from its systems, the company has pledged a smooth transition to another provider to avoid disruptions to critical military missions. Also at stake for Anthropic is a contract worth approximately 200 million dollars.
