AIAAIC is an independent, non-partisan, public interest initiative that examines, and makes the case for, real transparency and openness in AI, algorithmic, and automation systems.
AIAAIC's independent, free, open library identifies and assesses 1,000+ incidents and controversies driven by and relating to AI, algorithms, and automation.
AIAAIC founder Charlie Pownall is quoted by Kyodo News on the likelihood of international regulation of AI
Stanford HAI publishes headline AIAAIC Repository data in its 2023 AI Index Report
NIST AI Risk Management Framework references the AIAAIC Repository as a safety, validity, and reliability resource
The Ada Lovelace Institute names the AIAAIC Repository a 'best practice' open database for AI ethics review research
Forrester Research study draws on the AIAAIC Repository to set out how to build confidence and trust in AI decision-making
Creation of a practical, accessible, and extensible taxonomy of AI, algorithmic, and automation risks and harms
Crowdsourced collection of current and proposed laws mandating the transparency of AI and algorithmic systems
AIAAIC believes that AI, algorithms, and automation, and the organisations and individuals involved in their design, development, and deployment, must be transparent and honest about their aims and how they go about pursuing them.
Transparency is widely cited as a core principle of ethical, responsible, and trustworthy AI, but it is often approached in a partial, piecemeal, and reactive manner.
Here's why real transparency and openness of AI, algorithmic, and automation systems is needed, and what it should look like.
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 license.