AIAAIC is an independent, non-partisan, public interest initiative that examines and makes the case for real AI, algorithmic, and automation transparency and openness.
AIAAIC's independent, free, open library identifies and assesses incidents and controversies driven by and relating to AI, algorithms, and automation.
AIAAIC publishes 2023 user survey results
AIAAIC's new transparency dashboard reveals operational metrics and performance
EPIC general counsel Ben Winters commends AIAAIC for 'making [a] substantial achievement in transparency'
AIAAIC founder Charlie Pownall is quoted by Kyodo News on the likelihood of international regulation of AI
The Ada Lovelace Institute names the AIAAIC Repository a 'best practice' open database for AI ethical review research
Ruttkamp-Bloem E. Intergenerational Justice as Driver for Responsible AI
Hao-Ping (Hank) Lee et al. Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks (pdf)
Ramírez Sánchez A.M., Bhatia I., Firmino Pinto S. Navigating Artificial Intelligence from a Human Rights Lens (pdf)
Zowghi D., da Rimini F. Diversity and Inclusion in Artificial Intelligence (pdf)
AIAAIC is working with researchers, academics, journalists, NGOs, and others to develop a general-purpose, open taxonomy of AI, algorithmic, and automation risks and harms.
AIAAIC believes that AI, algorithms, and automation, and the organisations and individuals involved in their design, development, and deployment, must be transparent and honest about their aims and how they go about pursuing them.
Transparency is cited as a core principle of ethical, responsible, and trustworthy AI, yet it is often approached in a partial, piecemeal, and reactive manner.
Here's why real transparency and openness of AI, algorithmic, and automation systems are needed, and what we believe they should look like.
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 licence.