DOGE uses ChatGPT to cancel "woke" U.S. government humanities grants
Occurred: 2025-
Page published: March 2026
Over 1,400 approved U.S. National Endowment for the Humanities (NEH) grants were terminated using ChatGPT, causing at least USD 100 million in financial losses to museums, libraries, and scholars, while raising grave concerns about due process and ideological bias.
U.S. Department of Government Efficiency (DOGE) operatives used ChatGPT to conduct an "audit" of the National Endowment for the Humanities (NEH), feeding the summaries of thousands of active grants into the AI with a prompt asking for a "Yes/No" response on whether each project related to Diversity, Equity, and Inclusion (DEI).
Based on these 120-character AI outputs, the NEH issued immediate termination notices to hundreds of institutions, leaving some facing bankruptcy, halting projects mid-stream, and laying off staff.
Over USD 100 million in funding was clawed back by DOGE.
Targeted projects included a documentary on Jewish slave labour during the Holocaust, the digitisation of Appalachian historical records, and archives of Indigenous languages.
The NEH itself saw roughly 70 percent of its staff placed on administrative leave or fired.
The incident was driven by a desire for rapid, "revolutionary" cost-cutting that bypassed traditional bureaucratic and scholarly review.
Rather than experts reviewing the grants, DOGE relied on a Large Language Model (LLM) to "tag" complex historical and cultural research based on a narrow list of keywords like "BIPOC," "LGBTQ," and "gender."
DOGE operated as an "outside" advisory body, allowing its staff to bypass federal hiring rules and conflict-of-interest laws.
Discovery in a 2026 lawsuit revealed that DOGE and NEH staff communicated using Signal with auto-delete enabled, concealing the decision-making process in direct violation of the Federal Records Act.
The controversy sets a precedent for "algorithmic purging" in government, where AI is used to provide a veneer of "efficiency" to politically motivated actions.
For society: It signals a shift in which public investment in culture and history is filtered through the biases of a chatbot and a small group of unelected individuals.
For policymakers: It highlights the urgent need for regulations (like the Federal Advisory Committee Act) to be updated to prevent AI from being used to circumvent Congressional spending authority.
For the U.S. legal system: It raises questions about whether an AI's output constitutes "rational basis" for government action under the Fifth Amendment’s Due Process and Equal Protection clauses.
ChatGPT
Developer: OpenAI
Country: USA
Sector: Govt - culture
Purpose: Identify "wasteful" spending
Technology: Generative AI
Issue: Accountability; Fairness; Transparency
November 2024. President-elect Trump announces the creation of DOGE, led by Elon Musk and Vivek Ramaswamy.
March 12, 2025. DOGE operatives arrive at the NEH and begin a ChatGPT-powered audit.
April 2, 2025. The first wave of grant termination notices is sent to museums and libraries.
May 1, 2025. A coalition of humanities groups (ACLS, AHA, MLA) files a lawsuit to reverse the cancellations.
March 6, 2026. Court discovery documents are released, revealing the specific ChatGPT prompts and the use of encrypted messaging to evade oversight.
AMERICAN COUNCIL OF LEARNED SOCIETIES v. ADAM WOLFSON
AIAAIC Repository ID: AIAAIC2242