AI, algorithmic, and automation risks and harms taxonomy
The creation of a general-purpose, open taxonomy of AI, algorithmic, and automation risks and harms.
Participants from: ADAPT Centre, Trinity College Dublin; Agence France-Presse; Aristotle University of Thessaloniki; EPIC; Heriot-Watt University; Kairoi; Indian Institute of Technology Kanpur; McCombs School of Business; NYU Tandon School of Engineering; SciencesPo; The Markup; TU Wien; University of Michigan; University of Pavia; US Census Bureau; VW Group; Westminster Foundation for Democracy
The aim of AIAAIC's risks and harms taxonomy project is to create a general-purpose, open taxonomy of AI, algorithmic, and automation risks and harms.
The new taxonomy is intended to be:
Clear, understandable, usable, and practical
Applicable to multiple applications
Flexible, updateable, and extensible
The new taxonomy will be relevant to the following audiences:
Researchers, academics, NGOs, journalists, teachers, and other civil society entities
Policymakers and regulators
End users, including consumers and citizens.
The new taxonomy will take an open-ended approach to identifying and defining risks and harms, whilst ensuring that important topics such as system governance, transparency, bias, privacy, safety, security, mis/disinformation, robotics and anthropomorphism, employment, and sustainability are taken into account.
The new taxonomy will be developed as follows:
Identify and analyse existing standards and taxonomies to assess their applications, benefits, and limitations.
The outputs from the project are envisaged to be:
Taxonomy and definitions, including a machine-readable version