A review of AI and algorithmic harm and risk taxonomies
from a human perspective

Working paper
February 2025

Authors: Charlie Pownall, Maki Kanayama
Acknowledgements: Arthit Suriyawongkul, Athena Vakali, Charlie Collins, Costanza Bosone, Delaram Golpayegani, Djalel Benbouzid, Gavin Abercrombie, Julio Hernandes, Meem Manab, Nathalie Samaha, Paolo Giudici, Pierre Noro, Sophia Vei, Spencer Ames, Ushnish Sengupta

Introduction

With AI and related technologies now central to how many societies, economies and businesses operate, and with legislation already in place or in the pipeline, it is essential that the users of these systems, and the people targeted or indirectly affected by them, understand how they work, their limitations, the risks they pose, and how they are shaping daily life.

However, many people do not know if and when they are being identified, assessed, tracked or nudged by an AI or algorithmic system, have little idea how these systems work or what they do, and find it difficult to challenge the decisions they make. This lack of transparency, openness and accountability characterises too many automated systems and the individuals and organisations that design, develop and deploy them.

Unsurprisingly, research studies consistently show that most people harbour real concerns about these technologies and their misuse.

This document sets out a topline examination of selected AI and algorithmic harm and risk taxonomies and associated research studies and databases from a human/user perspective, and is intended to advance the case for a more human-centred, transparent, open and accountable approach to AI and algorithmic systems.

It also informs the design and development of AIAAIC’s harms taxonomy and will feed into the next version of the AIAAIC Repository of incidents driven by and associated with AI, algorithmic and automation systems.

Executive Summary