AIAAIC is an independent, non-partisan, public interest initiative that examines and makes the case for real AI, algorithmic, and automation transparency and openness.
AIAAIC's independent, free, open library identifies and assesses incidents and controversies driven by and relating to AI, algorithms, and automation.
AIAAIC records 917,000 active users in 2023
AIAAIC is cited in testimony to a US Commission on Civil Rights briefing on government use of facial recognition
AIAAIC is cited in Apple, Disney shareholder AI transparency report proposals
AIAAIC founder Charlie Pownall is quoted on the likelihood of international regulation of AI by Kyodo News
The Ada Lovelace Institute names the AIAAIC Repository a 'best practice' open database for AI ethics research
UK Competition & Markets Authority. AI Foundation Models - Technical update report
Ruttkamp-Bloem E. Intergenerational Justice as Driver for Responsible AI
Hutiri W., Papakyriakopoulos O., Xiang A. Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators
Hao-Ping (Hank) Lee et al. Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
AIAAIC is working with researchers, academics, journalists, NGOs, and others to develop a general-purpose, open taxonomy of AI, algorithmic, and automation risks and harms.
AIAAIC believes that AI, algorithms, and automation, and the organisations and individuals involved in their design, development, and deployment, must be transparent and honest about their aims and how they go about pursuing them.
Transparency is cited as a core principle of ethical, responsible, and trustworthy AI. But it is often approached in a partial, piecemeal, and reactive manner.
Here's why real transparency and openness of AI, algorithmic, and automation systems is needed, and what we believe it should look like.
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 licence.