AIAAIC's "Ethical Issues Taxonomy" describes concerns posed by the inappropriate, unethical or illegal use or misuse of an AI, algorithmic and automation system, or set of systems, and/or their governance,
Applied to entries in the AIAAIC Repository, the taxonomy aims to present ethical concerns in a succinct, “neutral” manner that is clear to and understandable by people from a wide variety of countries and cultures.
The harms associated with the use or misuse of AI, algorithmic and automation systems are set out in AIAAIC's External Harms Taxonomy.
Accessibility. The inability of disabled people, the elderly, non-internet users and other disadvantaged or vulnerable people to access and engage with a system at any time, without delay or downtime.
Accountability. The inability of users, researchers, lawyers and others to submit complaints and appeals, to meaningfully investigate and evaluate a system, and to hold the individuals and entities behind it legally responsible and liable for its impacts.
Alignment. The extent to which a system, including its objectives and incentivisation, is seen to be in line with human values, ethics and needs, and is considered suitable and acceptable for the specific context and purpose in which it is deployed.
Anthropomorphism. The attribution of human traits, emotions, intentions, or behaviours to non-human entities such as AI and robotics systems by system designers, developers and operators, and/or by their users and others.
Appropriation. The use of cultural or community knowledge, information or works, without acknowledgement or permission, in a disrespectful or exploitative manner.
Authenticity/integrity. The design, development and use of a system in a genuine and true manner, as opposed to its use for impersonation or other insincere or dishonest purposes.
Automation bias. Excessive trust in and reliance on automated systems and decision-support tools, often favouring their suggestions even when more accurate, contradictory information is available from other sources.
Autonomous weapons. The design, development, and deployment of lethal autonomous weapons systems that can select and engage military and other human and non-human targets with little or no human control.
Autonomy/agency. The inability of users and/or others to maintain control over their own decisions and independence.
Dual use. The design and development of a system for multiple purposes, or its misuse for purposes beyond its original stated purpose, including military use.
Employment. The use or misuse of a system or set of systems in the workplace, and the development and supply of such systems for employment-related purposes.
Environment. The management of a system’s actual and potential environmental impacts.
Fairness. The creation or amplification of unfair, prejudiced or discriminatory results due to biased data, poor governance or other factors.
Human/civil rights. A system’s direct or indirect erosion or impairment of the human and civil rights and freedoms of a user, group of users, or others.
Mis/disinformation. Information and data that, accidentally or deliberately, deceive users, the general public, policymakers, and others.
Normalisation. The process, declared or otherwise, by which a system, or set of systems, shifts from being viewed as novel, disruptive, or ethically questionable to being seen as standard, routine, or essential.
Privacy/surveillance. The violation of personal privacy caused by the use or misuse of a system or set of systems, including mass surveillance systems.
Robot rights. The extent to which robots should be accorded rights that define how they should behave in relation to humans, and how humans should treat robots.
Safety. The risks to the physical, mental and psychological safety of users, animals and property posed by the use or misuse of a system and its governance.
Security. The protection of a system from breaches, leaks or unauthorised use in order to maintain the privacy and confidentiality of user data and information.
Transparency. The degree and manner in which a system and its governance, including its purpose and known risks and impacts, are clearly and accurately described and understandable to users, the general public, policymakers, and other stakeholders.
December 4, 2025: Added "Appropriation" and "Normalisation"; replaced "Bias/discrimination" with "Fairness".