Working paper
February 2025
Authors: Charlie Pownall, Maki Kanayama
Acknowledgements: Arthit Suriyawongkul, Athena Vakali, Charlie Collins, Costanza Bosone, Delaram Golpayegani, Djalel Benbouzid, Gavin Abercrombie, Julio Hernandes, Meem Manab, Nathalie Samaha, Paolo Giudici, Pierre Noro, Sophia Vei, Spencer Ames, Ushnish Sengupta
With AI and related technologies now central to how many modern societies, economies and businesses operate, and with legislation already in place or in the pipeline, it is essential that the users of these systems, and the people targeted and indirectly impacted by them, understand how they work, their limitations, the risks they pose, and how they affect daily life.
However, many people do not know if or when they are being identified, assessed, tracked or nudged by an AI or algorithmic system, have little understanding of how these systems work, and find it difficult to challenge decisions made by them - a lack of transparency, openness and accountability characteristic of too many automated systems and of the individuals and organisations that design, develop and deploy them.
Unsurprisingly, research studies consistently show that most people harbour real concerns about these technologies and their misuse.
This document sets out a topline examination of selected AI and algorithmic harm and risk taxonomies, and of associated research studies and databases, from a human/user perspective. It is intended to advance the case for a more human-centred, transparent, open and accountable approach to AI and algorithmic systems.
It also informs the design and development of AIAAIC’s harms taxonomy and will feed into the next version of the AIAAIC Repository of incidents driven by and associated with AI, algorithmic and automation systems.
The great majority of AI and algorithmic risk and harm taxonomies have been designed for use by developers and policymakers rather than end users and the general public.
Many taxonomies suffer from unclear categorisations and definitions and conflate risks with harms, potentially frustrating users and complicating policymaking.
Most taxonomies are not comprehensive, reflecting their developers’ focus on particular technologies or applications (e.g. large language models) and on particular topics or issues (e.g. health, privacy), as well as the audience(s) for which they were developed.
Most taxonomies lack practical design, resulting in static artefacts that are rarely updated, shared, or made available in downloadable or machine-readable formats, thereby restricting their use and the flow of information between systems and organisations.
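By way of illustration, making a taxonomy machine-readable need not be complicated. The minimal Python sketch below shows one hypothetical way a single taxonomy entry could be structured and published as JSON; the class, field names and example category are assumptions made for illustration only and do not correspond to the AIAAIC taxonomy or any other existing scheme.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class HarmCategory:
    """One entry in a hypothetical machine-readable harm taxonomy."""
    identifier: str  # stable ID so other systems can reference the entry
    name: str        # short human-readable label
    definition: str  # plain-language definition aimed at non-specialists
    examples: list[str] = field(default_factory=list)
    version: str = "0.1"  # versioning supports regular updates

# Hypothetical entry; the category and wording are illustrative only.
entry = HarmCategory(
    identifier="harm/privacy-loss",
    name="Loss of privacy",
    definition="Personal information is exposed, inferred or misused "
               "by an AI or algorithmic system.",
    examples=["Facial recognition used without consent"],
)

# Serialising entries to JSON makes the taxonomy downloadable and
# machine-readable, so it can flow between systems and organisations.
print(json.dumps(asdict(entry), indent=2))
```

Published this way, a taxonomy can be versioned, diffed and ingested by other tools, addressing the update and interoperability gaps noted above.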