The AIAAIC Repository (standing for 'AI, Algorithmic and Automation Incidents and Controversies') is an independent, open, public interest resource that details incidents and controversies driven by and relating to AI, algorithms and automation.
Part dataset, part Wikipedia-style knowledge graph, the Repository forms part of a broader initiative to make AI, algorithmic and automation systems more transparent and open, and ultimately more accountable.
The AIAAIC Repository is used primarily by researchers, educators and students, journalists, non-profit organisations and campaign groups, think tanks and policymakers interested in and concerned about the negative impacts of AI and related technologies.
It is also used by system designers, developers, deployers and management, technology, legal and other advisors, though these are not its focus.
The AIAAIC Repository is intended to provide a snapshot of the issues and harms associated with negative events - actual and potential - driven by and relating to AI and related technologies and applications.
Whilst some other open/public databases and registers take a narrow "safety"-oriented technical view of technology systems, it is clear that non-technical factors play important roles in driving AI, algorithmic and automation incidents and controversies.
These include issues such as: job displacements/losses; environmental damage; misleading/hyped marketing; anthropomorphism; and unethical data use such as copyright violations - all of which may become public concerns. These may be triggered by one or more of: user and/or general public complaints; research studies; advocacy research reports/campaigns; media investigations; legal threats and complaints; and whistleblower reports.
By taking an ‘outside-in’ approach, the AIAAIC Repository enables you to view and analyse risks and harms to individuals, communities, society and the environment through an ethical prism - as opposed to focusing on technical vulnerabilities and internal governance.
The AIAAIC team takes care to classify and publish data and information about AI, algorithmic and automation technologies in a clear, concise, objective and balanced manner. But it is an unapologetically human effort and therefore not free of unintentional biases or interests.
And while the AIAAIC Repository strives to be broad-based in terms of the variety of incidents covered and its geographical coverage, it should not be considered comprehensive and contains gaps, including entries without written summaries.
The Repository also suffers from inaccuracies, inconsistencies and unclear nomenclature. These are in the process of being improved, with substantive updates detailed on the Changelog.
The Repository also resides on two platforms, making it difficult to sort entries or access impact data on the website, read an incident summary on the spreadsheet, or get a single view of an incident. This will be rectified in due course.
The AIAAIC Repository enables you to identify, evaluate and act upon the risks and harms of AI, algorithms and automation from an ethical and ‘responsible’ perspective.
The Repository can be used for a variety of purposes, notably as a reference tool, for qualitative and quantitative research and investigation, to inform governmental and company policy-making, and to develop training and educational materials and campaigns.
The Repository is used in a variety of ways; it largely depends on what you want to use it for.
For example, if you want to use it to identify incidents and examples relevant to your interest or work, you can download the spreadsheet and sort the relevant fields (for example, the Sector field to identify all entries relevant to ‘Copyright’ or ‘Consumer goods’).
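Filtering a downloaded copy of the spreadsheet can be done with a few lines of standard-library Python. The sketch below is illustrative only: the column headings and row values are invented placeholders, and the real export's field names may differ.

```python
import csv
import io

def filter_entries(csv_text, field, value):
    """Return rows whose `field` column contains `value` (case-insensitive)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if value.lower() in row.get(field, "").lower()]

# Invented sample rows mimicking a spreadsheet export (not real Repository data)
sample = """Headline,Sector,Country
Chatbot gives unsafe advice,Consumer goods,UK
Facial recognition misidentifies suspect,Govt - police,USA
"""

matches = filter_entries(sample, "Sector", "consumer goods")
```

In practice you would read the downloaded CSV from disk rather than an inline string; the same function applies unchanged.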
Equally, you can use the website's search feature, making sure you include relevant search terms (this Google Search operator guide may help).
You can also use the System pages and Dataset pages on the website to learn more about a single System or Dataset, including a breakdown of its risks and harms, transparency and accountability limitations, and incidents and issues associated with it.
The Repository is designed to help with the following kinds of questions:
What forms do AI, algorithmic and automation incidents take? What are their differences and similarities?
Which AI, algorithmic and automation technologies and applications are most likely to result in incidents or controversies?
Which sectors and countries are most exposed to AI incidents and controversies?
What kinds of harms do AI, algorithmic and automation systems cause?
What are the principal triggers that turn an AI- or algorithm-driven failure into a public incident or controversy?
Which kinds of incidents and controversies result from inadequate, poor, inaccurate or misleading AI transparency and openness?
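Questions like "which sectors are most exposed?" reduce to simple frequency counts once the spreadsheet has been exported. A minimal sketch, assuming hypothetical sector/country values (not real Repository data):

```python
from collections import Counter

# Hypothetical (sector, country) pairs as they might appear in an export;
# the real field names and values may differ.
entries = [
    ("Automotive", "USA"),
    ("Govt - police", "UK"),
    ("Automotive", "Germany"),
    ("Media/entertainment", "USA"),
]

# Tally incidents per sector and pick out the most exposed one
sector_counts = Counter(sector for sector, _ in entries)
top = sector_counts.most_common(1)  # [("Automotive", 2)]
```

Bear in mind the limitations noted above: the Repository is not comprehensive, so counts like these indicate relative visibility of incidents rather than true prevalence.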
Amongst others, AIAAIC data and content have been used by:
A US-based union and its law firm in support of shareholder transparency campaigns against several high-profile technology companies
An investigative journalist at a leading newswire service looking into possible defence sector AI misuses
An expert witness in testimony to a US Commission on Civil Rights briefing on government use of facial recognition
Stanford University researchers developing the university's annual AI Index Report
The International Committee of the Red Cross when considering the limitations and societal impacts of AI systems
Researchers exploring the limitations of corporate AI governance
A French journalist looking to identify the developers of an AI nudification app
A UK regulator considering how to structure a violations database
A researcher helping to create an AI taxonomy for the European Commission
A west coast US university professor developing a computing science course
A global bank writing about the multi-faceted nature of AI risks
Indonesian and Brazilian journalists developing multi-media stories on the rise and nature of AI incidents.
We recommend that the Repository is used carefully when treated as a source in itself, especially when used for quantitative analysis (see limitations above).
We request that the Repository data and information is used "substantially" in the public interest - as set out in the UK Data Protection Act 2018 - rather than copied, adapted or re-distributed for commercial gain.
All entries to the AIAAIC Repository are manually compiled by our team. Other than where stated (example, example), the Repository does not draw on or incorporate third-party data.
How are incidents identified and verified?
A six-step process is used to identify, assess and publish entries to the AIAAIC Repository:
Detect. Repository entries are identified in a number of ways, including being submitted for consideration by third parties via our website, highlighted by members of our community, or shared with us via email or social media. Entries are also identified through web searches and using Google Alerts.
Assess. A variety of predominantly qualitative criteria are used to determine whether an entry should be added to the AIAAIC Repository. Not all these criteria (eg. volume) need to be satisfied for an incident report to qualify.
Relevance. Entries must be ‘driven by’ or ‘relating to’ AI, algorithms and automation (see Classifications and definitions and Exclusions).
Impact(s). Entries to the repository should have resulted in – or be seen to have resulted in – actual harm (see Classifications and definitions) to individuals/end users, society, and/or the environment (ie. external impacts). Harms to the governance and technology systems of those operating, commissioning, or developing the system – strategic/reputational, operational, financial, and legal harms (ie. internal impacts) – are also considered, though these are secondary. Equally, the potential harm(s) of a system may have been highlighted by a third party, employee, or whistleblower, in which case the report is listed under ‘Issue’.
Credibility. Unless the individual(s) or organisation(s) involved in social media-based discussions are known to be highly credible and trustworthy, an incident or issue should have been covered by the mainstream media (international, national, business, science, technology, etc) as opposed to running solely on social media, in legal dockets and/or research reports.
Volume. The more an incident or issue is covered by the mainstream media, the more likely it is to be included in the AIAAIC Repository.
However, this need not always be the case: an incident may have been widely covered but have little substance; equally, an incident may draw little attention but have a significant impact on society or the system under scrutiny.
Classify. Entries are classified in line with AIAAIC's Classifications and definitions.
Summarise. A short written summary of the entry is drafted (example).
Approve. The entry is assessed for accuracy, fairness, balance and legality, and approved by the Managing editor.
Publish. The entry is published on the AIAAIC Repository website and Google sheet.
The AIAAIC Repository (sheet and website) is managed in line with these Classifications and definitions.
Please note that the current taxonomy is non-hierarchical, non-exhaustive, and contains overlaps.
The current classifications and definitions will shortly be updated.
The AIAAIC Repository is usually updated daily.
The Repository is updated and maintained by an editorial team consisting of the Managing editor and several Contributors, each of whom brings experience and expertise on a specific topic or set of topics, such as copyright, diffusion models and algorithmic bias.
The editorial team is managed by Charlie Pownall, the Managing editor.
The AIAAIC Repository is the product of a grassroots, community effort. As such, we welcome people from diverse backgrounds across the world willing and able to help us detect, evaluate and document the nature and harms of AI, algorithmic and automation systems in an evidence-based, fair and balanced manner.
The selection and approval of editorial team members ('Contributors') takes into account an individual’s background, interests, employment, affiliations, and other criteria. Training is provided to members of our editorial team.
Find out about volunteering opportunities at AIAAIC.
Incidents or controversies should be reported using our Incident/controversy report form. We also regularly receive tips via email and through our social media profiles on X/Twitter and LinkedIn.
Please note that we are particularly interested in incidents that result in actual demonstrable harm, or in which the opacity (ie. lack of transparency) of the technology system and/or its broader governance (all the way up to the Board of Directors, investors and owners) contributed or are seen to have contributed to a controversy or backlash.
Irrelevant, misleading or unfair reports, or those including information not already in the public domain, are not added.
All reports are handled in strict confidence.
Premium Membership provides you with free access to hidden data in the Google sheet version of the AIAAIC Repository, notably impacts and collections. Collections include Tesla Autopilot fatalities, Grok chatbot incidents, and Political misuse monitor. Please note that Premium Membership requires your understanding of and adherence to AIAAIC's Terms of use and Code of Conduct.
Please email us at info [at] aiaaic.org and your membership will be cancelled.
Please note your personal data will be deleted from our system two years after cancellation of your Premium Membership.
AIAAIC’s Terms of use cover third-party use of AIAAIC intellectual property and copyright and the accuracy of our information and data. The AIAAIC Repository is intended to be used "substantially" in the public interest - as set out in the UK Data Protection Act 2018.
Accordingly, AIAAIC data or content should not be adapted or re-distributed for commercial purposes - such as developing risk management or text analysis products or services. AIAAIC Repository data or content should also be republished under the same CC BY-SA 4.0 licence when adapted, remixed or redistributed.
Individuals and organisations that use AIAAIC data beyond the scope of our terms of use first receive a private warning to take down and delete the data. If they ignore this warning, they will be banned from AIAAIC Premium Membership, receive a public warning and have their names listed on AIAAIC’s website. Please make sure you understand our Terms of use before using the Repository and get in touch if you have any questions.
You are welcome to mention the AIAAIC Repository in personal, media and business articles, commentary and analysis. Please attribute the ‘AIAAIC Repository’ and provide a clear, prominent link to the original URL of the specific page you are referencing.
We recommend that the AIAAIC Repository is used carefully when treated as a source in itself, especially when used for quantitative analysis. Should you wish to cite the AIAAIC Repository - when discussing the Repository itself, for instance - attribute 'AIAAIC' or ‘AIAAIC Repository’ and provide a clear, prominent link to the original URL of the specific page you are referencing.
You can use a number of academic formats:
APA style. ChatGPT details how to make homemade bombs. (n.d.). In AIAAIC Repository. Retrieved Sept. 20, 2024, from https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/chatgpt-details-how-to-make-homemade-bombs
Bluebook. ChatGPT details how to make homemade bombs, https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/chatgpt-details-how-to-make-homemade-bombs (last visited Sept. 30, 2024).
Chicago style. AIAAIC Repository, s.v. "ChatGPT details how to make homemade bombs," (accessed Sept. 20, 2024), https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/chatgpt-details-how-to-make-homemade-bombs
If you wish to request that AIAAIC data or content is amended or removed, you may file a complaint by emailing info [at] aiaaic.org
Please include the following in your report:
Your name and relationship with AIAAIC (for example, if you are a Premium Member)
Your contact details
Details of the problem, including what you feel is inaccurate, misleading or unfair and what impact it is having, or is likely to have, on you, your organisation and/or your stakeholders
What kind of remedy (for example: update, formal correction or removal) you feel is appropriate.
All reports will be reviewed and investigated quickly and thoroughly, and are treated as confidential.
Last updated: December 31, 2024