AIAAIC is an independent, non-partisan, public interest initiative that examines and makes the case for real AI, algorithmic, and automation transparency and openness.
AIAAIC's independent, free, open library identifies and assesses 900+ incidents and controversies driven by and relating to AI, algorithms, and automation.
The Ada Lovelace Institute names the AIAAIC Repository a 'best practice' open database for AI ethics review research
A Forrester Research study draws on the AIAAIC Repository to set out how to build confidence and trust in AI decision-making
A University of Cambridge/Data & Society Research Institute study praises the AIAAIC Repository for 'painstakingly ... establish[ing] the significant scale of algorithmic harms' across the world
The European Commission's AI Watch cites the AIAAIC Repository in its AI in Public Services report
AIAAIC founder Charlie Pownall discusses the trustworthiness of black box systems with IEEE Spectrum
Crowdsourced collection of current and proposed laws mandating the transparency of AI and algorithmic systems
Review of the AI and algorithmic transparency models, types, and forms currently used by governments and companies
AIAAIC believes that AI, algorithms, and automation, and the organisations and individuals involved in their design, development, and deployment, must be transparent and honest about their aims and how they go about pursuing them.
Transparency is cited as a core principle of ethical, responsible, and trustworthy AI, but it is often approached in a partial, piecemeal, and reactive manner.
Here's why real transparency and openness of AI, algorithmic, and automation systems are needed, and what they should look like.
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 license.