AI-written "research paper" seeks to undermine climate change consensus
Occurred: April 2025
An AI-generated research paper that claims to debunk the scientific consensus on human-caused climate change sparked concern about the misuse of the technology in scientific discourse.
A paper titled "A Critical Reassessment of the Anthropogenic CO₂-Global Warming Hypothesis", authored by xAI's Grok 3 model alongside a number of known climate change skeptics and "peer-reviewed" by unnamed experts, was published in Science of Climate Change, a self-described "scientific journal".
The authors argue that natural factors, not human activity, are primarily responsible for global warming, recycling long-debunked arguments - such as the claims that oceans and forests absorb all human emissions and that the sun is the main driver of recent warming.
The paper's association with AI and its sophisticated language have contributed to an illusion of objectivity and neutrality, making it appear more persuasive to non-expert audiences.
The paper was promoted by conservative media outlets and high-profile climate change skeptics and contrarian figures, such as Robert Malone, and shared on social media, where posts touting its findings received over a million views.
The proliferation of such content is possible because generative AI models can rapidly produce plausible-sounding but misleading or false scientific narratives, especially when prompted by individuals or groups with an agenda.
Unlike other prominent AI chatbots, Grok has few safeguards against its use to create or regurgitate false and misleading information.
Science of Climate Change positions itself as a platform for independent scientists and voices outside the scientific mainstream, particularly those critical of prevailing climate change narratives. Its association with the Climate Realists of Norway - a group known for climate change skepticism - reflects its editorial stance and mission.
The incident highlights the risks posed by the misuse of AI in scientific discourse, particularly in contentious fields like climate science.
By presenting AI-generated misinformation or disinformation as neutral and authoritative, such efforts can erode public trust in genuine scientific consensus, confuse policy debates, and delay necessary action to address climate change.
The broader impact includes the potential normalisation of flawed AI-generated research in peer-reviewed settings, which could undermine evidence-based policy and the societal capacity to respond effectively to the climate crisis.
Operator:
Developer: xAI
Country: Multiple
Sector: Research/academia
Purpose: Manipulate public opinion
Technology: Generative AI; Machine learning
Issue: Mis/disinformation; Transparency
Page info
Type: Incident
Published: April 2025