AI, algorithmic, and automation risks and harms taxonomy
The creation of a general purpose, open taxonomy of AI, algorithmic, and automation risks and harms.
Status: ACTIVE
Participants from: ADAPT Centre, Trinity College Dublin; Agence France-Presse; Aristotle University of Thessaloniki; EPIC; Heriot-Watt University; Kairoi; Indian Institute of Technology Kanpur; McCombs School of Business; NYU Tandon School of Engineering; SciencesPo; The Markup; TU Wien; University of Michigan; University of Pavia; US Census Bureau; VW Group; Westminster Foundation for Democracy
Artificial intelligence, algorithmic, and automation systems are increasingly central to the everyday operation of government, business and society. However, despite a surge of incidents and controversies, heightened public awareness and interest, and the prospect of dedicated regulation, many of these systems remain opaque, and their risks and impacts difficult to understand.
Many existing taxonomies classify the risks and harms of these technologies primarily from a technological/technical perspective. However, the power and complexity of these systems, and the increasingly substantive and diffuse nature of their impacts, mean that an external perspective, or set of perspectives, is required if they are to be developed and operated in line with human interests and behaviours, and regulated appropriately.
The new, machine-readable taxonomy will underpin the next version of the AIAAIC Repository, and will be freely available to third parties.
Objectives
The aims of AIAAIC's risks and harms taxonomy project are to:
Develop a general purpose, open taxonomy of near-term risks and harms of AI, algorithms, and automation
Facilitate responsible/ethical ‘outside-in’ analysis
Educate/increase awareness and understanding
Help make the case for greater transparency and openness.
The new taxonomy is intended to be:
Clear, understandable, usable, and practical
Applicable to multiple applications
Flexible, updateable, and extensible
Machine-readable.
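To illustrate the "machine-readable" requirement, the sketch below shows what one hierarchical taxonomy entry might look like when serialised as JSON. The schema, field names, and example categories here are illustrative assumptions only, not the project's agreed format.

```python
import json

# Hypothetical taxonomy entry; the schema and category names below are
# assumptions for illustration, not the project's final structure.
taxonomy_entry = {
    "id": "privacy",
    "label": "Privacy",
    "definition": "Risks and harms arising from the collection, use, "
                  "or exposure of personal data.",
    "children": [
        {
            "id": "privacy.surveillance",
            "label": "Surveillance",
            "definition": "Monitoring of individuals or groups without "
                          "meaningful consent.",
            "children": [],
        }
    ],
}

# Serialise to JSON so third parties can consume the taxonomy programmatically.
print(json.dumps(taxonomy_entry, indent=2))
```

A nested structure of this kind supports the stated goals of flexibility and extensibility: new risk categories can be appended as children without restructuring existing entries.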
Audiences
The new taxonomy will be relevant to the following audiences:
Researchers, academics, NGOs, journalists, teachers, and other civil society entities
Policymakers and regulators
End users, including consumers and citizens.
Approach
All AIAAIC initiatives are informed by the AIAAIC Manifesto, a set of ideals, principles, and guidelines for real AI and algorithmic transparency and openness. In the spirit of practising what it preaches, the AIAAIC and its partners will use the following principles to inform the development of the new taxonomy:
Outside-in. With a focus on governance and on individual, societal, and environmental risks and harms, the new taxonomy will enable civil society organisations, businesses, policymakers, regulators, and end users to understand the broad implications of their needs and behaviours, and to be in a better position to manage and mitigate the risks of these increasingly complex and consequential technologies.
Transparent. The new taxonomy will be developed initially with the support of experts, and then in the open, with the general public and others invited to review and recommend changes to it upon launch.
Open. The new taxonomy project aims to ensure all stakeholders, including civil society organisations and end users, have access to meaningful information and data. The new taxonomy will be open source, enabling third-parties to incorporate and build upon it as they see fit.
Standards-based. The new taxonomy will use as a baseline the terms and definitions, principles, and guidelines set out in the proposed EU AI Act, the US NIST AI Risk Management Framework, the OECD Framework for the Classification of AI Systems, and other key documents.
Methodology
The new taxonomy will take an open-ended approach to identifying and defining risks and harms, whilst ensuring that important topics such as system governance, transparency, bias, privacy, safety, security, mis/disinformation, robotics and anthropomorphism, employment, and sustainability, are taken into account.
The new taxonomy will be developed as follows:
Private (limited to project partners, identified experts, and AIAAIC volunteers and members)
Identify and analyse existing standards and taxonomies to assess their applications, benefits, and limitations.
Identify and define AI, algorithmic, and automation technologies, risks, and harms.
Develop new taxonomy and definitions by structuring identified risks and harms in a hierarchical format.
Public (open access)
Review and evaluate the accuracy, validity, and comprehensiveness of the draft taxonomy.
Create and publish taxonomy and definitions, including machine-readable version.
Optimise the taxonomy continuously as new risks and harms emerge.
Outputs
The outputs from the project are envisaged to be:
Taxonomy and definitions, including machine-readable version
Research paper
White paper
Press release(s)/op-ed(s).