AI/automation ethics glossary
Autonomy/agency
Autonomy/agency refers to the capacity of an AI/automation system's users, and of the people affected by it, to retain control over their own decisions and independence, and to the ethical risks that arise when a system's autonomy exceeds appropriate bounds or operates without adequate accountability.
Autonomy/agency involves the delegation of decision-making power from humans to algorithms and machines. It encompasses several key components:
Algorithmic decision-making. Systems and AI agents making consequential choices (e.g., in hiring, criminal justice, or medical diagnoses) without meaningful human oversight.
Loss of control. Scenarios where humans become reliant on automation and lose the ability to intervene or understand how decisions are reached.
Moral agency. The philosophical and legal debate over whether AI can or should be held accountable for its actions when it causes harm.
Autonomous AI can improve speed, efficiency, and scalability, but it can also weaken human oversight and make harmful decisions harder to stop.
It matters to society because people may lose meaningful control over decisions that affect jobs, finance, safety, privacy, or rights.
When decision-making is handed over to AI, informed consent, transparency, and accountability can all become harder to preserve.
When autonomy norms break down, the consequences span multiple domains:
Erosion of human agency: Individuals may feel deskilled or stripped of their decision-making capacity as automated systems make choices on their behalf.
Reduced accountability: "Black-box" AI can obscure responsibility, making it difficult to seek legal or moral recourse for harmful outcomes.
Bias and inequality: Automated systems acting without human oversight can enforce discriminatory practices, disproportionately affecting vulnerable populations.
Common sources and causes of the loss of autonomy/agency include:
Opaque algorithms. Complex machine learning models whose inner workings are not transparent to users.
Over-reliance (automation bias). The human tendency to trust automated outputs over personal judgment or evidence.
Profit motive. The drive to cut costs and increase efficiency by replacing human decision-makers with scalable automation.
Autonomy/agency presents several challenging ethical dilemmas, including:
Paternalism versus autonomy. The tension between using AI to protect or assist individuals (e.g., content filtering) and restricting their personal liberty to explore or choose.
Loss of moral agency. As machines take on more human-like agency, questions arise regarding the erosion of human moral responsibility.
Consent
Oversight
Power imbalance
Privacy/surveillance
Safety
Security
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or if you would like to suggest or contribute additional terms to define.
Author: Charlie Pownall
Published: May 1, 2026
Last updated: May 1, 2026