Pentagon Issues Friday Deadline: Classify Anthropic as Supply Chain Risk or Seize AI Model

Dario Amodei, Chief Executive Officer and Co-Founder, Anthropic. © World Economic Forum / Sandra Blaser

The U.S. Department of War has issued an ultimatum to AI company Anthropic with a deadline of Friday afternoon. War Secretary Pete Hegseth is demanding that the company provide unrestricted access to its AI model Claude for military purposes. If Anthropic fails to meet these demands, the Trump administration threatens to invoke the Defense Production Act and classify the company as a supply chain risk.

Pentagon’s Contradictory Threats

The two threatened measures are fundamentally contradictory. Classifying the company as a supply chain risk would bar the government from using Anthropic’s products, while invoking the Defense Production Act would be aimed at forcing the company to provide its model free of charge. According to Pentagon insiders, this contradiction reflects both the depth of frustration over Anthropic’s resistance and the importance the model has gained for the military.

The Defense Production Act was originally created to mandate the production of goods critical to war efforts. In recent times, it was also used during the COVID-19 pandemic to manufacture medical equipment. Applying it to a software company in the context of a dispute over AI usage limits would represent a significant expansion of the scope of this law.

Anthropic’s Position: Limits on AI Use

Anthropic insists on certain assurances regarding how its AI model Claude may be used. The company demands guarantees that the model will not be deployed for the following purposes:

  • Mass surveillance of the U.S. population
  • Autonomous weapons systems without human oversight, such as in drone operations
  • Development of weapons that fire without human involvement

An Anthropic spokesperson stated that the company wants to support the government but must ensure that its models are used in accordance with what they can “reliably and responsibly deliver.” CEO Dario Amodei emphasized in a meeting with Hegseth on Tuesday that his company has never objected to or obstructed legitimate military operations.

Pentagon: Lawful Use Is Our Responsibility

Pentagon representatives argue that responsibility for the lawful use of software and weapons lies with them, a responsibility they say they take seriously. The agency cannot allow every contractor to dictate how equipment sold to the Pentagon may be used; the only restriction, they argue, must be that use is lawful.

A senior Pentagon official said: “The only reason we’re still talking to these people is because we need them, and we need them now. The problem for these people is that they’re that good.”

Claude’s Importance to the Military

Anthropic is currently the only AI company whose system runs on classified military systems. The company has developed Claude Gov, a specialized model with different safeguards and restrictions than publicly available versions.

Although the Pentagon has reached or is pursuing agreements with other providers, Claude is considered superior:

  • xAI by Elon Musk: An agreement to use Grok exists, but integration into the classified system is still underway. Grok is considered less precise than Claude.
  • Google: Negotiations over the Gemini model are ongoing, but no deal has been reached yet.
  • OpenAI: Systems are used for research purposes.

According to the Department of Defense, removing Claude from Pentagon systems would be extremely costly.

Consequences of the Threats

Classification as a supply chain risk would force other companies to choose between doing business with the U.S. military or with Anthropic. This classification has previously been reserved for foreign adversaries. For Anthropic, this could be more severe than the loss of the contract worth up to $200 million promised last summer for developing agentic AI workflows.

Jessica Tillipman, associate dean at George Washington University Law School, commented: “The Pentagon knows it’s making an extreme threat. They’re using every button or lever they have. The bigger problem is that this dilutes these classifications. They’re turning what was conceived as national security instruments into a lever for business purposes.”

Claude’s Use in Venezuela and Possible Violations

According to a Wall Street Journal report from February 14, 2026, Anthropic’s AI model Claude was used by the U.S. military during an operation to capture Venezuela’s President Nicolás Maduro. This is a prominent example of how the U.S. Department of Defense uses artificial intelligence in its operations.

The U.S. attack on Venezuela included bombings in the capital Caracas and, according to Venezuela’s defense ministry, resulted in the deaths of 83 people. Claude was apparently deployed through Anthropic’s partnership with Palantir Technologies, a Department of Defense contractor. It remains unclear exactly how the tool, whose capabilities range from processing PDFs to controlling autonomous drones, was used.

Contradiction with Anthropic’s Terms of Use

Anthropic’s terms of use explicitly prohibit the use of Claude for the following purposes:

  • Violent purposes
  • Weapons development
  • Conducting surveillance operations

Use in a military operation that resulted in bombings and deaths could violate these guidelines. An Anthropic spokesperson declined to comment on whether Claude was used in the operation but emphasized that any use of the AI tool must comply with usage guidelines. The U.S. Department of Defense and Palantir did not comment on the allegations.

This incident could further escalate the dispute between Anthropic and the Pentagon. While CEO Dario Amodei calls for regulation to prevent harm from AI use and shows restraint regarding autonomous lethal operations and surveillance in the United States, the military appears to already be using Claude in precisely the contexts the company wants to exclude going forward.
