AI/automation ethics glossary
Mis/disinformation
Mis/disinformation refers to the use or misuse of an AI/automated system to create and/or share information that deceives the general public and others, whether accidentally or deliberately.
Misinformation is false or inaccurate information (getting facts wrong) while disinformation is intentionally fabricated and shared specifically to mislead and manipulate.
It often involves the use of generative AI to create highly convincing fake content - text, images, audio, and video - commonly referred to as deepfakes.
It also involves algorithmic amplification, where social media recommender systems prioritise sensationalist or polarising false content because it drives higher user engagement (clicks, likes, and shares).
Automation allows this content to be spread at a speed and volume that manual fact-checking cannot match.
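To make the amplification dynamic concrete, here is a minimal, hypothetical sketch in Python of an engagement-optimising ranker. The `Post` fields and scoring weights are illustrative assumptions, not any platform's actual code; the point is simply that a feed ranked purely on predicted engagement surfaces fabricated content whenever it out-engages accurate content, because truthfulness never appears in the objective.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimated click-through rate
    predicted_shares: float   # model's estimated share rate
    is_accurate: bool         # invisible to the ranker; shown only for contrast

def engagement_score(post: Post) -> float:
    # The objective rewards predicted engagement only; accuracy is absent.
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_shares

feed = [
    Post("Measured, accurate report", 0.02, 0.01, is_accurate=True),
    Post("Sensational fabricated claim", 0.09, 0.07, is_accurate=False),
]

# Ranking by engagement alone pushes the fabricated post to the top of the feed.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  accurate={post.is_accurate}  {post.text}")
```

Real recommender systems are vastly more complex, but the structural issue this sketch illustrates is the same: when accuracy is missing from the optimisation target, sensationalist falsehoods win whenever they attract more clicks, likes, and shares.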
AI-enabled mis/disinformation matters because it quietly erodes the foundations of trust, democracy, and collective reality.
The pervasive threat of disinformation is underscored by its multifaceted impact on democratic institutions, public trust, and global stability. In the World Economic Forum's Global Risks Report 2024, misinformation and disinformation ranked as the most severe short-term risk facing the world over the next two years.
Users who are the targets of disinformation are rarely aware they are being targeted. They make decisions - about how to vote, what to believe about a health crisis, whether to trust a government, or whether a person really said something - on the basis of fabricated information. The harm is often irreversible: a reputation destroyed, a vote cast, a panic triggered, a relationship broken.
In a climate of pervasive distrust toward mainstream media, science, and other established sources of knowledge, AI-driven disinformation can further amplify uncertainty about what or whom to believe.
It may also be used to delegitimise accurate but unwelcome information by falsely claiming it was generated by AI. This "liar's dividend" is particularly cruel to victims: not only can false things be made to seem real, but real things can be dismissed as fake.
When AI-enabled mis/disinformation goes unchecked, the consequences for individuals and communities are wide-ranging and serious:
Democratic harm. Romania's 2024 presidential election results were annulled after evidence of AI-powered interference using manipulated videos - targeted, proficient meddling with serious ramifications for the country's future electoral integrity. The victims here were an entire electorate.
Personal reputational destruction. Individuals who are the subjects of deepfakes or fabricated stories - politicians, celebrities, or private citizens - may find their reputations permanently damaged, with limited legal recourse and little ability to "un-spread" the false content.
Psychological harm. Exposure to persistent false narratives, fabricated images of oneself, or AI-generated audio of one's voice being used to say things one never said causes measurable psychological distress, anxiety, and loss of a sense of personal security.
Public health damage. Mis/disinformation around vaccines, treatments, and disease has life-or-death consequences. Users who trust AI-generated health content, or AI-polluted search results, may make medical decisions on a false basis.
Undermining of science. By presenting AI-generated misinformation or disinformation as neutral and authoritative, such efforts can erode public trust in genuine scientific consensus, confuse policy debates, and delay necessary action.
Financial fraud. Over 6% of fraud incidents now involve deepfakes, with victims deceived by synthetic voices or faces of people they trust - bosses, family members, banks.
AI- and automation-powered mis/disinformation has become a major issue for a variety of reasons:
Accessibility of tools. User-friendly AI tools, notably generative AI systems, allow anyone to create high-quality deceptive media with zero technical expertise.
Economic incentives. The "attention economy" rewards engagement over accuracy; fabricated stories often generate more ad revenue than factual reporting.
Platform incentives. Engagement-optimising algorithms amplify emotionally resonant content, regardless of truth.
Influence operations. State and non-state actors use automated "troll farms" to destabilise foreign adversaries.
AI model hallucinations. AI systems produce confident falsehoods without intent, generating misinformation at scale simply through unreliable outputs.
Responding to AI-enabled mis/disinformation also raises difficult ethical trade-offs:
Freedom versus safety. How much content moderation or labeling is acceptable without censoring legitimate speech?
Responsibility allocation. Should platforms, developers, governments, or individual users bear primary duty to prevent harm?
Transparency versus misuse. Revealing model internals or provenance helps verification but can also teach bad actors to create more convincing falsehoods.
Accuracy trade-offs. Automated detection risks false positives that wrongly remove or label truthful content, harming legitimate voices (see the sketch after this list).
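As a rough numerical illustration of this last trade-off, the sketch below assumes a hypothetical detector that assigns each post a probability of being false; the labels and scores are invented for illustration, not drawn from any real system. Raising the removal threshold spares truthful posts but lets more falsehoods through, while lowering it does the reverse - neither error can be eliminated without worsening the other.

```python
# Hypothetical detector output: (actually_false, score) per post, where score
# is the detector's estimated probability that the post is false. Invented data.
scored_posts = [
    (True, 0.95), (True, 0.80), (True, 0.55),
    (False, 0.60), (False, 0.30), (False, 0.10),
]

def error_rates(threshold: float) -> tuple[int, int]:
    """Errors if every post scoring >= threshold is removed or labelled."""
    false_positives = sum(1 for actually_false, s in scored_posts
                          if not actually_false and s >= threshold)
    false_negatives = sum(1 for actually_false, s in scored_posts
                          if actually_false and s < threshold)
    return false_positives, false_negatives

# A strict threshold misses falsehoods; a lenient one silences truthful voices.
for t in (0.9, 0.7, 0.5, 0.3):
    fp, fn = error_rates(t)
    print(f"threshold={t:.1f}  truthful posts wrongly flagged={fp}  falsehoods missed={fn}")
```

Run on this toy data, a threshold of 0.9 misses two falsehoods while flagging no truthful posts, whereas a threshold of 0.3 catches every falsehood but wrongly flags two truthful posts - the moderation dilemma in numerical form.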
Related terms:
Authenticity/integrity
Representation
Revisionism
Safety
Author: Charlie Pownall
Published: May 13, 2026
Last updated: May 13, 2026
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or would like to suggest and/or contribute additional terms to define.