AI/automation ethics glossary
Normalisation
Normalisation is the process, declared or otherwise, by which an automated system shifts from being viewed as novel, disruptive, or ethically questionable to being seen as standard, routine, or essential.
Normalisation operates through several distinct mechanisms:
Creeping acceptability: Harms or practices that initially provoked concern are absorbed into everyday life through slow, incremental adoption with no single obvious moment of consent.
Desensitisation: Repeated exposure to AI and automation failures, misinformation, deepfakes, or surveillance diminishes emotional and moral reactions to them.
Institutional embedding: Once AI and automation systems are embedded in organisational workflows, they become treated as authoritative and neutral, their flaws and assumptions made invisible.
Regulatory capture and lag: The speed of adoption of AI and automation systems outpaces the formation of protective norms and laws, creating a culture of permissiveness.
When harmful automation becomes normal, society may stop noticing rights violations, discrimination, or poor accountability. That can reduce trust in institutions and make it harder to reverse bad design choices once they are widespread.
It also shifts moral responsibility away from humans and onto the system, even though people and organisations still made the design and deployment decisions.
The breakdown of ethical norms through normalisation can have wide-ranging consequences:
Erosion of oversight. The risk is that decisions are made blindly, without critical evaluation, eroding people's capacity to distinguish fair outcomes from unfair ones. As automation bias deepens, meaningful human oversight becomes nominal rather than real.
Entrenched surveillance. Even in democratic societies, the deployment of facial recognition technology in public spaces raises concerns about the erosion of civil liberties and the normalisation of surveillance. Once surveillance infrastructure is embedded and accepted, its removal or restriction becomes politically difficult.
Proliferation of synthetic harms. Since incidents involving deepfake video began rising after 2023, they have outnumbered those linked to autonomous vehicles, facial recognition, and content moderation algorithms combined. As synthetic media proliferates and normalises, public ability to distinguish real from fabricated content degrades.
Common sources and drivers of normalisation in AI contexts include:
Incremental deployment. AI systems are introduced gradually, with each step seeming modest and reasonable.
Speed of adoption outpacing governance. AI is developing quickly, far ahead of most people's understanding of it, including global regulators and lawmakers.
Commercial incentives. Vendors and deployers have strong financial motivations to minimise perceived risk and friction.
Cognitive habituation. Human psychology naturally adapts to repeated stimuli, reducing sensitivity to ongoing risks.
Crisis exploitation. Emergencies (pandemics, security threats) are used to justify rapid AI deployments that then persist and expand.
Absence of visible, attributable harm. When harms are diffuse, statistical, or delayed, public and institutional responses are muted.
Authority and institutional trust. Systems endorsed by governments or corporations are granted unwarranted legitimacy.
Normalisation generates genuine ethical tensions that resist easy resolution.
Convenience versus vigilance. Automation and AI-assisted decision-making offer genuine efficiency gains. Requiring constant critical scrutiny of AI outputs imposes costs on users and organisations, but without that scrutiny, normalisation proceeds unchecked.
Public safety versus civil liberties. Surveillance technologies justified by security benefits gradually normalise intrusions on privacy. While a majority of respondents favoured police use of facial recognition technology, 55% believed the government should limit police use, citing concerns about the normalisation of surveillance, lack of consent, and lack of trust (Data Protection People).
Progress versus precaution. Restricting AI deployment to prevent normalisation of harmful practices may slow beneficial innovation; permitting rapid deployment accelerates normalisation.
Individual consent versus collective exposure. Even individuals who resist particular AI systems are affected when those systems alter the social environment around them, reshaping norms they did not agree to change.
Transparency versus intelligibility. Making AI systems more transparent does not necessarily make them more understandable to the public or decision-makers, meaning transparency can itself be a mechanism of normalisation, creating a false sense of accountability.
Consent
Privacy/surveillance
Safety
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or would like to suggest and/or contribute additional terms to define.
Author: Charlie Pownall
Published: May 6, 2026
Last updated: May 6, 2026