AI, algorithmic, and automation risks and harms taxonomy

The creation of a clear, useful, and accessible taxonomy of AI, algorithmic, and automation risks and harms.

Project start: June 2023
Status: Pipeline

Artificial intelligence, algorithms, and automation play an increasingly central role in the way governments, businesses, and individuals operate. Yet a clear understanding of the risks and harms of these technologies is hard to attain when they are concealed within black boxes, taken for granted, and new technologies continually emerge.

The AIAAIC Repository was built on a taxonomy that allowed a flexible, evolutionary approach to defining the risks and harms of these technologies. However, a more structured approach is now necessary given the speed at which these technologies are developing and scaling, growing public awareness and interest, and dedicated regulation on the horizon.

Objectives

The development of a taxonomy of risks, harms, and impacts driven by and relating to AI, algorithms, and automation, based on real-world incidents and controversies indexed in the AIAAIC Repository.

The new taxonomy is envisaged to be relevant to a broad range of stakeholders, notably researchers, policy-makers, users/customers/citizens, teachers/students, and developers, and will be open source.

Accordingly, the taxonomy is intended to be clear, useful, and accessible.

Methodology

The new taxonomy will be developed in two phases:

Private (limited to AIAAIC members and subscribers)

Public (open access)

Outputs

The outputs from the project are envisaged to be:

Terms of use 

Anyone is welcome to use, copy, contribute to, share, and adapt the AIAAIC Repository in line with its CC BY-SA 4.0 license.

Further information

Contact AIAAIC