AI, algorithmic and automation incident and controversy repository

The AI, algorithmic and automation incident and controversy (AIAAIC) repository is an independent, free, open resource exploring the limitations, consequences and risks of AI, algorithms and automation.

The most comprehensive, detailed and up-to-date resource of its kind, the repository details 650+ incidents and controversies driven by and relating to AI, algorithms and automation since 2012.

The AIAAIC repository is intended to be relevant and useful to researchers, academics, NGOs, policy makers, regulators, business managers, risk managers, lawyers, and reputation managers.

The repository can be used to:

  • Inform and support analysis and commentary
  • Conduct qualitative or quantitative research
  • Develop case studies
  • Devise training and education programmes
  • Develop risk and reputation management frameworks, methodologies and other tools
  • Predict future trends

Examples of its use include:

  • Partnership on AI - AI incident database
  • Responsible AI Institute - Map of helpful and harmful AI
  • ETAMI - development of a risk-based classification system for AI applications
  • We and AI - research study on the personal risks of AI
  • Asia-based healthcare company - incident and crisis plan development
  • UK-based law firm - development of an AI legal case library


How to use the AIAAIC repository

The AIAAIC repository uses Google Sheets.

Please make a copy of the repository if you want to filter entries.

You can search the repository using Ctrl+F (Windows) or Cmd+F (Mac).

For more on how to use Google Sheets, see: https://support.google.com/docs/topic/9054603?hl=en&ref_topic=1382883
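
If you prefer to filter or analyse entries programmatically rather than in the Sheets interface, the sketch below shows one possible approach using Python and pandas to read a copy of the sheet as CSV. The sheet ID, tab ID, search term and column contents are hypothetical placeholders, not the repository's actual identifiers, and the column names in your copy may differ.

    # Minimal sketch: load a copy of the AIAAIC sheet into pandas for filtering.
    # SHEET_ID and GID are hypothetical placeholders -- use the values from your
    # own copy of the repository (File > Make a copy in Google Sheets).
    import pandas as pd

    SHEET_ID = "your-copied-sheet-id"  # placeholder, not the real sheet ID
    GID = "0"                          # placeholder tab id

    # Any Google Sheet shared with "anyone with the link" can be read as CSV
    # through the standard export URL pattern below.
    csv_url = (
        f"https://docs.google.com/spreadsheets/d/{SHEET_ID}"
        f"/export?format=csv&gid={GID}"
    )
    df = pd.read_csv(csv_url)

    # Example: keep rows mentioning a keyword anywhere in the record.
    keyword = "facial recognition"  # placeholder search term
    mask = df.apply(
        lambda row: row.astype(str).str.contains(keyword, case=False).any(),
        axis=1,
    )
    print(f"{mask.sum()} entries mention '{keyword}'")

This mirrors the Ctrl+F / Cmd+F search described above, but leaves the matching entries in a DataFrame for further quantitative analysis.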

Terms of use

You may use, copy, redistribute and adapt the contents of the AIAAIC repository in line with its CC BY 4.0 licence.

When doing so, ensure you attribute 'AIAAIC' and provide a clear, prominent link back to this page: http://aiaaic.org