AI, algorithmic, and automation risks/harms taxonomy

Artificial intelligence, algorithmic, and automation systems are increasingly central to the everyday operation of government, business, and society. However, despite a surge of incidents and controversies, heightened public awareness and interest, and the prospect of dedicated regulation, many of these systems remain opaque, and their risks and impacts are difficult to understand.

Many existing taxonomies classify the risks and harms of these technologies primarily from a technological/technical perspective. However, the power and complexity of these systems, and the increasingly substantive and diffuse nature of their impacts, mean that an external perspective, or set of perspectives, is required if they are to be developed and operated in line with human interests and behaviours, and regulated appropriately.

The new, machine-readable taxonomy will underpin the next version of the AIAAIC Repository, and will be freely available to third parties.
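To illustrate what "machine-readable" could mean in practice, the sketch below shows one plausible shape for a taxonomy entry. This is a hypothetical structure for illustration only; the field names (`id`, `name`, `definition`, `parent`, `examples`) are assumptions, not the actual AIAAIC schema.

```python
# Hypothetical sketch of a machine-readable taxonomy entry.
# All field names are illustrative assumptions, not the AIAAIC schema.
import json
from dataclasses import dataclass, field, asdict
from typing import Optional, List


@dataclass
class TaxonomyEntry:
    """One risk/harm category in a machine-readable taxonomy."""
    id: str                       # stable identifier for third-party reuse
    name: str                     # human-readable label
    definition: str               # plain-language description of the risk/harm
    parent: Optional[str] = None  # parent category id, enabling a hierarchy
    examples: List[str] = field(default_factory=list)  # e.g. linked incident ids


entry = TaxonomyEntry(
    id="privacy.surveillance",
    name="Surveillance",
    definition="Monitoring of individuals or groups without their knowledge or consent.",
    parent="privacy",
)

# Serialising to JSON is one way the taxonomy could be consumed by third parties.
print(json.dumps(asdict(entry), indent=2))
```

A flat, serialisable structure like this would let third parties load the taxonomy into databases, spreadsheets, or annotation tools without bespoke parsing.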


AIAAIC's risks/harms taxonomy project aims to: 

The new taxonomy is intended to be:


The risks/harms taxonomy is intended to be primarily relevant to the following audiences:


All AIAAIC initiatives are informed by the AIAAIC Manifesto, a set of ideals, principles, and guidelines for real AI and algorithmic transparency and openness. In the spirit of practising what it preaches, AIAAIC and its partners will use the following principles to inform the development of the new taxonomy:


The new taxonomy will take an open-ended approach to identifying and defining risks and harms, whilst ensuring that important topics such as system governance, transparency, bias, privacy, safety, security, mis/disinformation, robotics and anthropomorphism, employment, and sustainability are taken into account.

The new taxonomy will be developed as follows: 

Private (limited to project partners, identified experts, and AIAAIC volunteers and members)

Public (open access) 


The outputs from the project are envisaged to be:

Further information

Contact AIAAIC