AI/automation ethics glossary
Representation
Representation refers to the use or misuse of an AI or automated system to portray an individual, group, or idea in a misleading or untrue way, resulting in unfairness, harm, injustice, or the denial of dignity to the represented entity.
Representation problems begin with data: if training data under-represents certain genders, ethnicities, ages, accents, disabilities, or regions, the system may perform worse for them or treat them as exceptions.
It also includes design choices, such as whose needs are prioritised, which languages are supported, and whether people affected by the system had any input into how it was built.
Representation also affects outputs. An AI system can mislabel people, present stereotyped images or text, or treat one group as the default “normal” user while others are handled poorly or invisibly.
In automation, the same issue appears when workflows, forms, or decision rules assume a narrow range of bodies, identities, work patterns, or life circumstances.
Representation matters because AI and automation increasingly shape access to jobs, healthcare, education, media, finance, and public services. Missing or misrepresented groups can receive worse outcomes, less accurate decisions, or fewer opportunities.
It also matters socially because repeated distortions can normalise stereotypes and weaken trust in AI systems and institutions. In the long run, poor representation can make automation less reliable, less inclusive, and less legitimate.
Authenticity & integrity
Fairness
Human rights & civil liberties
Author: Charlie Pownall
Published: May 17, 2026
Last updated: May 17, 2026
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or if you would like to suggest or contribute additional terms to define.