Pentagon uses Anthropic's Claude to capture Venezuela president Maduro
Occurred: January 2026
Page published: February 2026
The U.S. military used Anthropic’s Claude AI model to help plan and execute a high-stakes raid in Caracas to capture Venezuelan President Nicolás Maduro, an operation that resulted in dozens of deaths and sparked a contractual standoff between the Pentagon and the AI firm.
The Pentagon used Anthropic’s Claude AI model (integrated via Palantir Technologies) to assist in the planning and operational phases of a U.S. special operations forces raid on Caracas, Venezuela, to bomb military complexes and abduct President Nicolás Maduro and his wife, Cilia Flores.
The raid killed between 83 and 100 people, including security personnel and dozens of Cuban nationals; caused serious damage to the capital’s infrastructure, including localised power outages and a state of national emergency; and resulted in the removal of a head of state.
The controversy stems from the rapid, opaque integration of Claude into U.S. classified military operations.
While Anthropic’s "Acceptable Use Policy" explicitly prohibits using Claude for violence, weapons development, or mass surveillance, the model was provided to the Pentagon through Palantir, a third-party contractor. This middleman arrangement created a "transparency fog" where the AI developer was allegedly unaware of the specific lethal mission its tool was supporting.
Anthropic positions itself as a "safety-first" company, but the Trump administration’s "AI-first warfighting" strategy prioritises speed and tactical dominance. The Pentagon argues that private companies should not be allowed to impose "ethical vetoes" on lawful military operations, leading to a direct clash between corporate safety constitutions and national security imperatives.
For society: The fracas raises the spectre of "black box" warfare in which AI tools, even if originally designed for benign tasks such as coding or document analysis, are repurposed for lethal targeting without permission, clear legal frameworks, or meaningful public discussion.
For policymakers: It triggered a showdown over the U.S. Defense Production Act, with Secretary of War Pete Hegseth threatening to use wartime powers to force Anthropic to remove safety guardrails, essentially treating AI safety protocols as a "supply chain risk."
For the AI industry: It forces a choice between "Conscience or Contracts." Companies like xAI have already agreed to "all lawful purposes" standards, potentially sidelining safety-conscious developers and creating a "race to the bottom" for AI ethics in the defence sector.
Claude
Developer: Anthropic
Country: USA; Venezuela
Sector: Govt - defence
Purpose: Plan and implement military operation
Technology: Generative AI
Issue: Accountability; Autonomy; Dual use; Transparency
November 2024: Anthropic partners with Palantir and AWS to deploy Claude on classified systems.
July 2025: The DoD signs USD 200 million contracts with Anthropic, Google, and OpenAI; Claude becomes the only LLM approved for classified networks.
January 3, 2026: The U.S. military conducts the Caracas raid; Maduro is captured and flown to New York.
February 14, 2026: Wall Street Journal reveals Claude’s role in the operation.
February 24, 2026: Defense Secretary Pete Hegseth delivers an ultimatum to Anthropic: drop usage restrictions or face blacklisting.
February 25, 2026: The Pentagon reaches a deal with Elon Musk’s xAI to deploy the "Grok" model on classified networks, ending Anthropic’s exclusive status.
AIAAIC Repository ID: AIAAIC2220