AI/automation ethics glossary
Proportionality
Proportionality is the principle that an AI or automated system must be necessary and suitable to achieve a specific legitimate purpose, that its reach must not exceed its reliability or legitimacy, and that its safeguards, oversight, and redress mechanisms must be adequate relative to the harm the system can cause.
Proportionality in AI ethics draws on a well-established legal and moral principle: that interventions affecting people's rights, freedoms, or opportunities must not go further than is necessary and justified. The proportionality principle sets out three conditions (suitability, necessity, and balance) that must be satisfied to justify the use of a given AI method for a particular purpose.
Proportionality requires a rigorous assessment of three key factors, especially in high-impact settings like public services, security, employment, or emergency response:
Suitability: Does the AI system actually help achieve the stated goal?
Necessity: Is there a less intrusive way to achieve the same result without AI or with less invasive data?
Balance: Do the benefits of the system outweigh the risks to privacy, autonomy, and civil liberties?
In AI, this often means avoiding “one-size-fits-all” automation and applying stronger controls only where the risk is genuinely higher; a toy sketch of the three-part test follows below.
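To make the shape of the test concrete, here is a minimal, purely illustrative Python sketch. Everything in it is hypothetical: the class, field names, and scoring scale are invented for illustration, and a real proportionality assessment is a documented, reviewable human judgement, not a boolean function.

```python
from dataclasses import dataclass


# Hypothetical structure mirroring the three-part test described above.
# The 0-10 scoring scale is an assumption made for this sketch, not a
# standard; real assessments weigh qualitative evidence.
@dataclass
class ProportionalityAssessment:
    suitability: bool                 # Does the system actually achieve the stated goal?
    less_intrusive_alternative: bool  # Could the goal be met without AI or with less invasive data?
    benefit_score: float              # Assessed benefit (0-10, illustrative)
    harm_score: float                 # Assessed risk to privacy, autonomy, civil liberties (0-10)


def passes_proportionality_test(a: ProportionalityAssessment) -> bool:
    """Return True only if all three conditions are satisfied."""
    if not a.suitability:
        return False  # fails suitability
    if a.less_intrusive_alternative:
        return False  # fails necessity: a less intrusive means exists
    return a.benefit_score > a.harm_score  # fails balance otherwise


# Example: biometric verification of school meal payments (cf. the Polish
# primary school case listed below) would fail necessity, since a card or
# PIN achieves the same goal with far less invasive data.
assessment = ProportionalityAssessment(
    suitability=True,
    less_intrusive_alternative=True,
    benefit_score=2.0,
    harm_score=7.0,
)
print(passes_proportionality_test(assessment))  # False
```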
Proportionality matters because AI decisions can affect people’s rights, access to services, safety, and dignity. In a democratic society, it acts as a safeguard against technological overreach.
Without it, there is a risk of "function or scope creep," where tools designed for extreme cases such as counter-terrorism are gradually redirected towards more mundane tasks like monitoring student attendance, or tools built for mundane tasks are repurposed for high-stakes uses, eroding the fundamental rights of citizens.
Common sources and causes of proportionality failures in AI and automation include:
Efficiency imperatives. Pressure to reduce costs and processing time leads organisations to automate decisions wholesale, without calibrating the scope of automated action to the severity of consequences.
Lack of impact assessment. Deploying systems without prior human rights or proportionality impact assessments means potential harms are not weighed against intended benefits before rollout.
Opacity and complexity. Algorithmic systems are often too opaque for decision-makers, courts, or citizens to evaluate whether their reach is justified.
Data asymmetry. Systems trained on historical data that reflects prior over-enforcement or systemic bias inherit and amplify existing disproportionality against minority and low-income groups.
Absence of proportionality review in procurement. Governments and corporations frequently purchase or deploy AI systems without requirements to demonstrate necessity or to audit scope against stated objectives.
Regulatory gaps. Most jurisdictions lack binding proportionality requirements for AI, meaning systems can operate without legal accountability for overreach until harms are litigated.
Automation bias. Decision-makers and oversight bodies defer to algorithmic outputs, reducing scrutiny of whether the system's reach is warranted in individual cases.
Security versus liberty. Mass surveillance AI may genuinely prevent harm, but its scope (e.g. monitoring entire populations to catch potential infractions by a few people) raises the question of whether the security benefit justifies the intrusion into the lives of the many.
Pre-emption versus liberty. Using predictive policing to stop "potential" crimes before they happen raises the question of whether the loss of the presumption of innocence is proportional to the gain in safety.
Efficiency versus individual justice. Automated systems can process cases far faster than humans, but speed at scale means that errors are also replicated at scale: a 1% error rate across one million automated decisions still produces 10,000 wrongful outcomes. Is it ethical to accept a calculable rate of disproportionate harm as the price of efficiency?
Fraud prevention versus presumption of innocence. Welfare fraud detection systems that flag citizens as risks before any wrongdoing has been established invert the presumption of innocence, penalising people for statistical profiles rather than actual behaviour.
Public interest versus private harm. The term "public interest" lacks a precise legal definition, which can produce ad hoc determinations and legal uncertainty, yet that same flexibility allows all relevant factors of a case to be weighed without strict constraints. This creates genuine tension: what level of individual harm is acceptable to deliver a public benefit, and who decides?
Accountability gaps. When algorithms cause errors and harm, technical complexity makes causal chains difficult to trace, while legal gaps give the entities involved room to evade responsibility. Responsibility may, for instance, be dispersed among the company that developed the algorithm, the institution that deployed it, and the human who made the final decision, with no single entity held accountable.
Related terms:
Autonomy/agency
Fairness
Oversight
Privacy/surveillance
Examples:
Polish primary school fined for using fingerprint data to verify meal payments
Bunnings' facial recognition ruled to breach Australians' privacy
Netherlands tax authorities wrongly accuse 26,000 families of fraud
Brazil AI-powered welfare app accused of unfairly denying claims
Uber under fire for surge pricing after London terror attack
Princeton Review charges Asian Americans more for SAT tutoring
Rite Aid facial recognition accuses innocent shoppers of theft
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or if you would like to suggest or contribute additional terms to define.
Author: Charlie Pownall
Published: April 29, 2026
Last updated: April 29, 2026