AI, algorithmic and automation harms taxonomy
Project coordinator | AIAAIC
Research partners | ADAPT Centre, Trinity College Dublin; Datalab, Aristotle University of Thessaloniki; Department of Economics and Management, University of Pavia
AIAAIC is developing a human- and user-centred taxonomy of harms caused by AI, algorithmic and automation systems to support citizen reporting, advocacy campaigns, violation tracking, education and "literacy" programmes, and other purposes.
The taxonomy is being developed in an open, structured manner involving a diverse group of individuals from academia, civil society, government and business, representing a broad range of interests and expertise, countries and cultures, ages and genders.
Objectives
AIAAIC's harms taxonomy project aims to equip end users, the general public, civil society organisations and others to:
Better understand the negative impacts (as opposed to "risks") of AI, algorithmic and automation systems
More effectively take action against the misuse of these systems
Apply pressure on the organisations and individuals developing and deploying these systems to be more transparent, open and accountable.
The taxonomy is intended to be:
Clear. Simple categories and definitions that are understandable to multiple audiences
Comprehensive. Reflective of a wide range of harms - though not necessarily exhaustive
Flexible. Adaptable to new harms as they arise
Interoperable. Can connect and communicate in real time with other systems and organisations, as sketched below.
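To make the interoperability goal concrete, the sketch below shows one possible way a taxonomy category could be represented as a machine-readable record. It is a minimal illustration only: the class, field names, category codes and external mappings are assumptions made for this example, not the project's published schema.

from dataclasses import dataclass, field, asdict
from typing import List, Optional
import json

@dataclass
class HarmCategory:
    # Hypothetical machine-readable taxonomy entry; field names and codes
    # are illustrative assumptions, not the project's actual schema.
    code: str                         # assumed stable identifier, e.g. "PHY-01"
    name: str                         # human-readable category name
    definition: str                   # plain-language definition
    parent: Optional[str] = None      # parent category code, if any
    examples: List[str] = field(default_factory=list)
    mappings: List[str] = field(default_factory=list)  # cross-references to other frameworks

entry = HarmCategory(
    code="PHY-01",
    name="Physical harm",
    definition="Injury or damage to a person's body or physical property.",
    examples=["autonomous vehicle collision"],
    mappings=["EXTERNAL-FRAMEWORK:physical-safety"],  # placeholder external identifier
)

# Serialising to JSON keeps the entry easy to exchange with other tools.
print(json.dumps(asdict(entry), indent=2))

Records of this kind can be exported as JSON, CSV or similar formats so that other repositories, trackers and frameworks can reference taxonomy categories by stable identifiers.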
Audiences
The harms taxonomy is intended to be relevant to the following audiences:
End users, citizens, consumers and the general public
NGOs, journalists, teachers, researchers, and other civil society and academic entities
Policymakers and regulators
Business.
Approach
AIAAIC and its partners will use the following principles to inform the development of the project:
Transparent. About our objectives, processes, outcomes, etc.
Collaborative. Meaningfully involving participants in taxonomy development and decision-making
Inclusive. Of a broad range of expertise, genders, ages, nationalities, races and ethnicities
Rigorous. Taking a structured, evidence-based approach to all aspects of the project.
Methodology
The project consists of three phases:
Phase 1 [complete]: Taxonomy design and development. Based on the analysis of third-party taxonomies and ontologies, and on the annotation of AIAAIC Repository entries representative of a broad range of harm scenarios.
Phase 2 [active]: High-level expert feedback. Taxonomy testing and refinement through in-depth interviews with a diverse set of publicly recognised experts with deep knowledge of key harm categories.
Phase 3: General public feedback. Additional testing and refinement of the taxonomy in an open and collaborative manner by a diverse selection of non-expert end users, members of the general public, students and others.
Outputs
The outputs from the project are envisaged to be:
Taxonomy and definitions
Machine-readable version
Harms dataset (see the illustrative sketch after this list)
Research paper
White paper.
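As a similarly hypothetical illustration of how the harms dataset could build on the machine-readable taxonomy, the sketch below annotates two fictitious incident records with placeholder category codes and filters them by code. None of the identifiers, codes, fields or incidents are real; they stand in for whatever schema the project ultimately publishes.

import json

# Fictitious harms-dataset records; every identifier, code, field and incident
# below is a placeholder rather than the project's published schema or data.
records = [
    {
        "incident_id": "EXAMPLE-0001",
        "title": "Chatbot gives harmful medical advice",
        "system_type": "large language model",
        "harm_codes": ["PHY-01"],
        "year": 2024,
    },
    {
        "incident_id": "EXAMPLE-0002",
        "title": "Automated benefits system wrongly denies claims",
        "system_type": "automated decision system",
        "harm_codes": ["ECO-01", "CIV-02"],
        "year": 2023,
    },
]

# Annotating incidents with taxonomy codes makes the dataset easy to query,
# e.g. retrieving every incident tagged with a particular harm category.
economic = [r for r in records if "ECO-01" in r["harm_codes"]]
print(json.dumps(economic, indent=2))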
Participants (in a personal capacity) from:
AI Forensics
Amnesty International
BEUC
Business & Human Rights Resource Centre
BRAC University
Columbia University
Connected by data
Consumers International
DataGénero
Data & Society
Edinburgh University
EPIC
Green Web Foundation
Fundação Getulio Vargas
Heriot-Watt University
Homo Digitalis
Humans in the Loop
International Buddhist Studies College
Iuridicum Remedium
Latvijas Banka
Liberties
Metamorphosis
Microsoft Research India
Mila-Quebec
Northwestern University
Oxford Institute for Ethics, Law and Armed Conflict
Private Eye
San José State University
TechTonic Justice
Universidad Austral de Chile
Universidad Nacional Tres de Febrero
University of Michigan
University of the Philippines
Wikirate