AI, algorithmic, and automation risks/harms taxonomy
The creation of a general-purpose, open taxonomy of AI, algorithmic, and automation risks and harms.
Status: ACTIVE
Partners: ADAPT Centre, Trinity College Dublin; Wikirate
Participants from: Agence France-Presse; Algoma University; Aristotle University of Thessaloniki; Heriot-Watt University; Government of India; Indian Institute of Technology Kanpur; McCombs School of Business; NYU Tandon School of Engineering; SciencesPo; The Markup; TU Wien; University of Cambridge; University of Maine School of Law; University of Michigan; University of Pavia; US Census Bureau; VW Group
Artificial intelligence, algorithmic, and automation systems are increasingly central to the everyday operation of government, business, and society. However, despite a surge of incidents and controversies, heightened public awareness and interest, and the prospect of dedicated regulation, many of these systems remain opaque, and their risks and impacts remain difficult to understand.
Many existing taxonomies classify the risks and harms of these technologies primarily from a technological/technical perspective. However, the power and complexity of these systems, and the increasingly substantive and diffuse nature of their impacts, mean that an external perspective, or set of perspectives, is increasingly required if they are to be developed and operated in line with human interests and behaviours, and regulated appropriately.
The new, machine-readable taxonomy will underpin the next version of the AIAAIC Repository, and will be freely available to third parties.
Objectives
AIAAIC's risks/harms taxonomy project aims to:
Develop a general-purpose, open taxonomy of near-term risks and harms of AI, algorithms, and automation
Facilitate responsible/ethical ‘outside-in’ analysis; educate/increase awareness and understanding; and help make the case for greater transparency and openness.
The new taxonomy is intended to be:
Simple, clear, and understandable
Usable and practical
Applicable to multiple technologies and applications
Flexible, updateable, and extensible
Machine-readable.
Audiences
The risks/harms taxonomy is intended to be primarily relevant to the following audiences:
Researchers, NGOs, journalists, teachers, and other civil society entities
Policymakers, and regulators
Customers, consumers, and the general public.
Approach
All AIAAIC initiatives are informed by the AIAAIC Manifesto, a set of ideals, principles, and guidelines for real AI and algorithmic transparency and openness. In the spirit of practising what it preaches, AIAAIC and its partners will use the following principles to inform the development of the new taxonomy:
Outside-in. With a focus on governance and individual, societal, and environmental risks and harms, the new taxonomy will enable civil society organisations, businesses, policymakers, regulators, and end users to understand the broad implications of their needs and behaviours and be in a better position to manage and mitigate the risks of these increasingly complex and consequential technologies.
Transparent. The new taxonomy will be developed initially with the support of experts, and then in the open, with the general public and others invited to review it and recommend changes upon launch.
Open. The new taxonomy project aims to ensure all stakeholders, including civil society organisations and end users, have access to meaningful information and data. The new taxonomy will be open source, enabling third-parties to incorporate and build upon it as they see fit.
Standards-based. The new taxonomy will use as a baseline the terms and definitions, principles, and guidelines set out in the proposed EU AI Act, the US NIST AI Risk Management Framework, the OECD Framework for the Classification of AI Systems, and other key documents.
Methodology
The new taxonomy will take an open-ended approach to identifying and defining risks and harms, whilst ensuring that important topics such as system governance, transparency, bias, privacy, safety, security, mis/disinformation, robotics and anthropomorphism, employment, and sustainability are taken into account.
The new taxonomy will be developed as follows:
Private (limited to project partners, identified experts, and AIAAIC volunteers and members)
Identify and analyse existing standards and taxonomies to assess their applications, benefits, and limitations.
Identify and define AI, algorithmic, and automation technologies, risks, and harms.
Develop new taxonomy and definitions by structuring identified risks and harms in a hierarchical format.
Public (open access)
Review and evaluate the accuracy, validity, and comprehensiveness of the draft taxonomy.
Create and publish taxonomy and definitions, including machine-readable version.
Optimise the taxonomy continuously as new risks and harms emerge.
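To illustrate what a machine-readable, hierarchical taxonomy of this kind might look like, the sketch below encodes a handful of hypothetical categories as nested records and serialises them to JSON. The category names, identifiers, and field names are illustrative assumptions only, not the project's actual schema.

```python
import json

# Hypothetical taxonomy fragment: each node has an id, a label, a short
# definition, and optional child nodes. These entries are placeholders for
# illustration; they are not drawn from the actual AIAAIC taxonomy.
taxonomy = {
    "id": "harm",
    "label": "Harm",
    "definition": "An adverse impact of an AI, algorithmic, or automation system.",
    "children": [
        {
            "id": "harm.privacy",
            "label": "Privacy harm",
            "definition": "Unwanted collection, use, or disclosure of personal data.",
            "children": [],
        },
        {
            "id": "harm.misinformation",
            "label": "Mis/disinformation harm",
            "definition": "Creation or amplification of false or misleading content.",
            "children": [],
        },
    ],
}


def count_nodes(node):
    """Recursively count all nodes in the hierarchy."""
    return 1 + sum(count_nodes(child) for child in node["children"])


# Serialising to JSON keeps the structure open and extensible, so third
# parties can parse, validate, and build upon it with standard tooling.
as_json = json.dumps(taxonomy, indent=2)
print(count_nodes(taxonomy))  # 3 nodes in this toy example
```

A plain nested-JSON layout like this keeps the taxonomy both human-readable and trivially parseable; richer vocabularies (for example, SKOS-style broader/narrower relations) could be layered on later without changing the underlying hierarchy.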
Outputs
The outputs from the project are envisaged to be:
Taxonomy and definitions, including machine-readable version
Research paper
White paper
Press release/op-ed, etc.