AI/automation ethics glossary
Power imbalance
Power imbalance refers to the disparity in influence, control, and information between the entities that build, own, or deploy an automated system (typically large corporations, governments, or platform owners) and the people who use or are targeted by it.
Power imbalance manifests across multiple dimensions:
Informational asymmetry. AI systems harvest vast quantities of personal data from users and workers, while those individuals have little visibility into what is collected, how it is used, or what inferences are drawn. AI surveillance in the workplace, for example, draws statistical inferences about workers from their data to inform managerial decisions, commodifying workers into statistical entities and objectifying human work as a series of data points for algorithmic analysis.
Decisional asymmetry. Algorithmic systems increasingly make or heavily influence consequential decisions about people (e.g. hiring, credit, bail, benefits, health) without those people having a meaningful right to understand, challenge, or contest the outcome.
Economic asymmetry. The financial gains from AI-driven automation flow disproportionately to those who own and deploy the technology, while cost and risk are externalised to workers and communities.
Political and institutional asymmetry. A handful of companies control the dominant AI platforms, models, and cloud infrastructure, giving them outsized influence over public discourse, market access, and even government policy.
State versus citizen asymmetry. Governments are deploying AI-powered surveillance at a scale that individuals and civil society organisations are structurally ill-equipped to resist or even monitor.
Power imbalance is foundational to almost every other AI ethics issue because it determines who benefits from AI and who bears the risks and consequences.
The breakdown of ethical norms surrounding power distribution can have significant societal costs:
Democratic accountability weakens: decisions that affect millions of people are made by systems or organisations beyond effective public oversight.
Inequality deepens: AI amplifies existing structural disadvantages, concentrating opportunity among those who already have resources, skills, and access.
Individual agency erodes: people lose meaningful control over decisions that shape their employment, health, freedom of movement, and access to services.
Market competition suffers: smaller businesses, countries, and civic actors cannot effectively participate in or challenge AI-driven markets dominated by a few global players.
Trust collapses: populations that repeatedly experience AI as a tool used against them, rather than for them, disengage from democratic institutions and technology altogether.
Several structural factors create and entrench these power imbalances:
Market concentration. A small number of companies control the compute, data, and talent needed to build frontier AI systems.
Data monocultures. Those who control the most data can build the most capable systems, reinforcing their own dominance.
Opacity by design. Complex, proprietary models are structurally resistant to external scrutiny or challenge.
Regulatory gaps. Legislation has not kept pace with the speed and scale of AI deployment, leaving power imbalances largely unchecked.
Workforce dependency. Workers, gig economy participants, and platform users often have no viable alternative to accepting algorithmic management as a condition of employment or income.
State–corporate entanglement. Governments increasingly depend on private AI companies for critical infrastructure, creating conflicts of interest that limit oversight.
Global digital divide. Poorer nations lack the infrastructure, talent, and regulatory capacity to participate in or constrain global AI development on equal terms.
The imbalance of power raises important ethical questions:
Security versus liberty. Governments argue that AI-powered surveillance is necessary for national security and public safety. But the same capabilities enable authoritarian control, political targeting, and the suppression of legitimate dissent, a trade-off that populations rarely get to weigh in on meaningfully.
Innovation versus monopoly. Allowing leading AI firms to operate with minimal constraint may accelerate beneficial innovation, but it also entrenches structural power imbalances that become progressively harder to reverse.
Personalisation versus manipulation. AI systems that learn deeply about individuals can be used either to serve them better, or to exploit their vulnerabilities, biases, and preferences for commercial or political gain.
Consent in asymmetric relationships. Individuals formally "consent" to data collection and algorithmic decision-making through terms of service, but where there is no practical alternative, such consent may be meaningless.
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or if you would like to suggest or contribute additional terms to define.
Author: Charlie Pownall
Published: May 6, 2025
Last updated: May 6, 2025