Pentagon uses Anthropic's Claude to plan and support Iran airstrikes
Occurred: June 2024-
Page published: March 2026
The U.S. military reportedly used Anthropic's Claude AI to facilitate a massive joint air campaign against Iran that killed high-level officials and hundreds of civilians, sparking a national controversy as the deployment occurred just hours after President Trump blacklisted the company for refusing to remove ethical safeguards.
U.S. Central Command used Anthropic's Claude AI during air operations targeting sites in Iran, including for intelligence assessments, target identification, and simulated battle planning.
The AI-assisted strikes took place just hours after President Donald Trump publicly ordered U.S. federal agencies to stop using Anthropic's tools and announced a six-month phase-out, following a political and policy dispute with the company.
Reports indicate Claude was already deeply embedded in classified military workflows; commanders relied on it in live operations despite the ban announcement.
Media and advocacy groups highlighted the actual and potential loss of life and injuries in Iran, and the possibility that AI-accelerated targeting increased the scale and speed of lethal force.
In particular, they pointed to attacks on schools and other non-military sites that claimed hundreds of lives, including those of children and students, and suggested that inaccurate AI outputs may be partly to blame.
The incident is particularly controversial because it occurred immediately following a public directive from President Trump to "immediately cease" all use of Anthropic technology, labeling the firm a "supply-chain risk" due to its refusal to drop safety guardrails.
The immediate cause was the Pentagon's operational dependence on Claude for processing large volumes of sensor and intelligence data, modeling scenarios, and ranking targets more quickly than human analysts alone.
Underlying this was a policy clash: the Defense Department (aka "Department of War") pushed for broad, less-restricted access to Anthropic's models, while Anthropic resisted dropping safeguards and opposed uses like mass domestic surveillance or fully autonomous lethal weapons.
Trump's order to sever ties with Anthropic followed the company's refusal to provide unrestricted access, but the military had not yet technically or procedurally unwound its dependence on the system.
This gap between public policy decisions and entrenched technical systems, combined with limited transparency over how AI tools were integrated into the "kill chain," created conditions where a banned tool still shaped real-world lethal operations.
For the victims: The use of AI to "shorten the kill chain" raises urgent questions about accountability for civilian casualties and whether AI-driven targeting increases the speed and scale of destruction.
For society: It marks a shift where private tech companies' ethical "red lines" are being characterised by the U.S. government as national security threats. It also highlights a growing "AI arms race" where safety-focused firms are being replaced by competitors such as xAI and OpenAI, which have agreed to "any lawful use" terms.
For policymakers: The incident reveals the "vendor lock-in" trap, where the military becomes so dependent on a specific AI system that it cannot follow executive orders to stop using it without risking operational failure.
Claude
Developer: Anthropic
Country: Iran
Sector: Govt - defence
Purpose: Plan and support airstrikes
Technology: Generative AI
Issue: Accountability; Autonomous weapons; Dual use; Normalisation; Transparency
June 2024: Anthropic begins supporting U.S. military operations via classified networks.
January 2026: Claude is used in the mission to capture Venezuelan President Nicolás Maduro.
February 17â24, 2026: Tensions peak as the Pentagon demands Anthropic remove safeguards; Anthropic CEO Dario Amodei refuses.
February 27, 2026: President Trump orders an immediate ban on Anthropic; Defense Secretary Pete Hegseth designates the company a "supply-chain risk."
February 28, 2026: U.S. and Israel launch strikes on Iran; CENTCOM continues using Claude for real-time targeting and planning.
March 2, 2026: Reports emerge confirming Claude's role in the strikes despite the ban.
AIAAIC Repository ID: AIAAIC2232