AIAAIC's "Ethical Issue Taxonomy" describes topics of ethical and moral concern posed by the use/misuse of an AI or automated system, and/or its governance.
The taxonomy aims to help researchers, civil society organisations, journalists, students, end users, victims and others to identify and understand the ethical dimensions and challenges posed by these technologies.
Applied to entries in the AIAAIC Repository, the taxonomy aims to present ethical concerns in a clear and succinct manner that is understandable to the general public. See our AI Ethics Glossary for more detailed explanations.
The Ethical Issue Taxonomy is available to third parties to download, comment upon, update, and re-use in line with AIAAIC's terms of use.
Accessibility. The ability/inability of the disabled, elderly, non-internet users and other disadvantaged and vulnerable people to access and engage with an AI/automated system at any time, without delay or downtime.
Accountability. The ability/inability of users, researchers, lawyers and others to submit complaints and appeals about an AI/automated system, and to meaningfully investigate, evaluate, and hold the individuals and entities legally responsible and liable for its impacts.
Accuracy & reliability. The extent to which an AI/automated system behaves dependably, accurately and consistently in the situation for which it is designed, or leads to low quality, inappropriate or harmful decisions, and the ethical consequences should it fail to do so.
Alignment. The extent to which an AI/automated system, including its objectives and incentivisation, is seen to be in line with human values, ethics and needs, and is considered suitable and acceptable for the specific context and purpose in which it is deployed.
Anthropomorphism. The attribution of human traits, emotions, intentions, or behaviours to non-human entities such as AI and robotics systems by system designers, developers and operators, and/or by their users and others.
Appropriation. The use/misuse of the cultural, intellectual, or symbolic information or works belonging to or associated with an individual or community, without acknowledgement or permission.
Authenticity & integrity. The design, development and use of an AI/automated system in a genuine and true manner, as opposed to deception, falsification, plagiarism, misrepresentation, and other potentially harmful uses.
Automation bias. The excessive trust and reliance on AI/automated systems and decision support tools, often favouring their suggestions even when more accurate contradictory information is available from other sources.
Autonomous weapons. The design, development, and deployment of lethal autonomous weapons systems that can select and engage military and other human and non-human targets with little or no human control.
Autonomy/agency. The ability/inability of an AI/automated system's users, or of people impacted by it, to maintain control over their own decisions and independence, and the ethical risks that arise when the system's independence exceeds appropriate bounds or operates without adequate accountability.
Competition/monopolisation. The use/misuse of an AI/automated system that results in the actual or potential distortion of market dynamics, entrenchment of dominant players, or undermining of fair competition.
Consent. The ability/inability of the users of an AI/automated system, or people whose information, data or works are being used in such a system, to grant, manage, or revoke permission for it to be collected, processed, and shared.
Dual use. The design and development of an AI/automated system for multiple purposes, or its use/misuse for purposes beyond its original stated purpose, including military and criminal use.
Employment/labour. The use/misuse of an AI/automated system that replaces human jobs or changes working conditions in ways that create job loss, insecurity, or inequality.
Environment. The development, deployment, or operation of an AI/automated system in such a way that it damages the environment through excessive energy consumption, resource depletion, pollution, or other actions.
Fairness. The use/misuse of an AI/automated system to create or amplify unfair, prejudiced or discriminatory results due to biased data, poor governance, or other factors.
Human rights/civil liberties. The use/misuse of an AI/automated system to directly or indirectly erode or impair the human rights and civil freedoms of a user, group of users, or others.
Inclusivity. The principle that AI/automated systems should be designed, developed, and deployed in ways that fairly represent, respect, and serve all groups of people, especially those who are historically marginalised or underrepresented.
Mis/disinformation. The use/misuse of an AI/automated system to create and/or share information and data that deceives - accidentally or deliberately - the general public and others.
Normalisation. The process - declared or otherwise - by which an AI/automated system shifts from being viewed as novel, disruptive, or ethically questionable to being seen as standard, routine, or essential. Also, the use of the system to normalise inappropriate or unethical beliefs or behaviours.
Oversight. The processes, structures, and people responsible for monitoring, evaluating, and guiding how AI/automated systems are designed, deployed, and used.
Power imbalance. The disparity in influence, control, and information between the people who build, own or deploy an AI/automated system and the people who use it or who are targeted by it.
Privacy/surveillance. The violation of personal privacy caused by the use/misuse of an AI/automated system, including mass surveillance systems.
Proportionality. The necessity and suitability of an AI/automated system to achieve a specific legitimate purpose, as opposed to overreach far exceeding its reliability or legitimacy, or safeguards, oversight, and redress mechanisms wholly inadequate relative to the harm the system can cause.
Representation. The use/misuse of an AI/automated system to portray an individual, group, or idea in a manner that is misleading or untrue, and results in unfairness, harm, injustice, or the denial of dignity to the represented entity.
Revisionism. The use/misuse of an AI/automated system to change an established or accepted doctrine, policy, or historical view.
Robot rights. The granting of protections and moral considerations (such as fair treatment or limits on harm) to robots based on their capabilities or perceived autonomy, thereby defining how they should behave in relation to humans, and how humans should treat robots.
Safety. The risks to the physical, psychological and mental safety of users, and to animals and property, posed by the use/misuse of an AI/automated system.
Security. The protection of an AI/automated system from breaches, leaks or unauthorised use in order to maintain the privacy and confidentiality of data and information.
Transparency. The degree and manner in which an AI/automated system, including its purpose, inner workings and known risks and impacts, is clearly and accurately described and understandable to users, the general public, policymakers, and other stakeholders - as opposed to communicated in a misleading, partial, or otherwise opaque manner.
December 5, 2025: Added "Appropriation" and "Normalisation"; Replaced "Bias/discrimination" with "Fairness"; Added "Revisionism"; Renamed "Human/civil rights" as "Human rights & civil liberties"; Removed "Liability"; Updated "Accountability" definition to include reference to liability
March 4, 2026: Added "Consent", "Power imbalance", and "Proportionality"
April 27, 2026: Added "Inclusivity", "Oversight"