AI/automation ethics glossary
Revisionism
Revisionism refers to the use or misuse of an AI or automated system to change an established or accepted doctrine, policy, or historical view.
Revisionism occurs when AI systems generate false narratives, deepfakes, or biased reinterpretations of events by drawing from skewed training data or deliberate fine-tuning.
This can involve rewriting historical records, downplaying atrocities, or fabricating evidence to fit agendas, often amplified by automation in content generation or search results.
History shapes how societies understand themselves: who they are, where they came from, and what they owe each other.
When AI systems distort or rewrite that history, the consequences go far beyond academic inaccuracy to affect collective identity, political legitimacy, justice for victims of past wrongs, and the ability of citizens to make informed decisions about the present.
Technological revisionism can spread misinformation and disinformation at scale, as minority histories and uncomfortable truths are "smoothed over" by algorithms trained on majority perspectives.
In the long term, such revisionism distorts education and can result in societal polarisation and conflict.
Several factors drive technological revisionism:
Biased or uncurated training data. AI systems absorb whatever is in their training corpus, including existing propaganda, outdated scholarship, and politically motivated rewrites.
Western-centric data dominance. The internet, and therefore AI training data, over-represents Western and other dominant cultural and political perspectives.
Lack of provenance checking. AI systems typically cannot verify whether sources are authoritative or manipulated.
Deliberate adversarial prompting. Users intentionally steering models toward revisionist outputs.
Incentives for engagement. Platforms optimising for clicks can amplify dramatic or provocative historical narratives over accurate ones.
State-sponsored disinformation. The manipulation of historical events has long been employed as a tool of foreign interference in pursuit of strategic objectives.
Technological revisionism poses a range of challenging ethical questions:
Truth versus plurality. Whose version of history is "correct"? Suppressing revisionism risks silencing legitimate minority or indigenous historical perspectives alongside bad-faith distortion.
Openness versus control. Open, publicly editable AI training data enables innovation but also manipulation; closed, curated data raises concerns about who controls the narrative.
Scale versus accuracy. AI can make history more accessible and engaging, but that same scale accelerates the spread of inaccuracies.
Censorship versus harm. Removing revisionist AI content can be portrayed as censorship; leaving it up causes demonstrable harm, especially to survivors of historical atrocities.
Accountability. When an AI system reproduces revisionist history, who is responsible: the developer, the deployer, the user who prompted it, or the original source in the training data?
Author: Charlie Pownall
Published: May 14, 2026
Last updated: May 14, 2026
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions on how to improve this definition, or if you would like to propose or contribute additional terms to define.