WORKING FOR MORE TRANSPARENT, OPEN AND ACCOUNTABLE TECHNOLOGY
Get to grips with incidents and controversies driven by or associated with AI and related technologies, and with the implications of the technology and governance systems behind them for individuals, society and the environment.
Equipping researchers, civil society organisations and the general public to better understand and take action on AI and related technology harms and violations.
REPORT INCIDENT | ACCESS DATABASE | PREMIUM MEMBERSHIP
AIAAIC publishes working paper reviewing AI and algorithmic risk and harm taxonomies
AIAAIC calls for participation in classifying AI and algorithmic harms
AIAAIC publishes community code of conduct
AIAAIC is cited in testimony to US Commission on Civil Rights briefing on government use of facial recognition
AIAAIC is cited in Apple, Disney and Paramount shareholder AI transparency report proposals
Koster P. Amicus Brief for US Supreme Court (pdf)
Bengio Y. et al. International AI Safety Report (pdf)
Bracket Foundation, Value for Good GmbH. Generative AI. A New Threat For Online Child Sexual Exploitation and Abuse
International Monetary Fund. Global Financial Stability Report - Advances in Artificial Intelligence: Implications for Capital Market Activities
Geneva Academy/ICRC. Expert Consultation Report on AI and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts
AIAAIC is an independent, non-partisan, grassroots public interest initiative that examines and makes the case for real AI, algorithmic and automation transparency, openness and accountability. More
Transparency is cited as a core principle of ethical, responsible and trustworthy AI. But it is often approached in a partial, piecemeal and reactive manner.
Here's why real, meaningful transparency and openness of AI, algorithmic and automation systems is needed, and what we believe it should look like.
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 licence. More