AN INDEPENDENT, GRASSROOTS PUBLIC INTEREST INITIATIVE WORKING FOR TRANSPARENT, OPEN AND ACCOUNTABLE TECHNOLOGY
Featured system 🤖
Perplexity AI's so-called "answer engine" has been touted as a Google killer. Critics argue it is an unethical attempt to build a useful if flawed product on the backs of third-party content owners and others.
Featured dataset 🔢
Google and Meta large language models were trained on racist, pornographic and copyright-protected web content, raising safety, privacy, and other concerns.
Get to grips with incidents and controversies driven by or associated with AI and related technologies, and with the implications of the technologies and governance systems behind them for individuals, society, and the environment.
Equipping researchers, civil society organisations and the general public to better understand and take action on AI and related technology harms and violations.
AIAAIC publishes News Trigger, Ethical Issue, External Harm, and Harm Consequence taxonomies
AIAAIC publishes AI harm and risk taxonomy review working paper
AIAAIC publishes AI and algorithmic harms taxonomy v1.8
AIAAIC updates AIAAIC Repository user guide
AIAAIC publishes Values and Code of Conduct
Brazil's Nucleo charts AI incidents using AIAAIC data
AIAAIC cited in ACLU/EPIC/EFF response to trade secrecy protections under California's proposed Risk Assessments and Automated Decisionmaking Technology Regulations
AIAAIC cited as an 'Authority' (pdf) in US Supreme Court mistaken identity case
AIAAIC cited in testimony to US Commission on Civil Rights briefing on government use of facial recognition
AIAAIC cited in Apple (pdf), Disney (pdf) and Paramount (pdf) shareholder AI transparency report proposals
ICAAD, King & Wood Mallesons. Artificial Intelligence Harm and Human Rights
Xiao, S., et al. What Comes After Harm? Mapping Reparative Actions in AI Through Justice Frameworks
Venkatasubramanian, K., Hardie, H., Ranalli, T-M. Toward a taxonomy of negative outcomes from the use of AI-driven systems for people with disabilities
Rao, P.S.B., Šćepanović, S., Jayagopi, D.B., Cherubini, M., Quercia, D. The AI Model Risk Catalog: What Developers and Researchers Miss About Real-World AI Harms
Hadan, H., et al. Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 licence. More