AN INDEPENDENT PUBLIC INTEREST INITIATIVE WORKING FOR TRANSPARENT, OPEN AND ACCOUNTABLE TECHNOLOGY
Featured system 🤖
The post-acute care algorithmic system has raised significant concerns regarding its use and impact on patients and the US healthcare system.
Featured dataset 🔢
The notorious "shadow library" of pirated books allegedly used by Anthropic, Meta and others to train their large language models and generative AI systems.
Get to grips with incidents and controversies driven by or associated with AI and related technologies, and the implications of the technologies and governance systems behind them for individuals, society, and the environment.
Equipping researchers, civil society organisations and the general public to better understand and take action on AI and related technology harms and violations.
AIAAIC publishes AI and algorithmic harm and risk taxonomy review working paper
AIAAIC publishes AI, algorithmic and automation harms taxonomy v1.8
AIAAIC launches Slack channel for community members
AIAAIC updates AIAAIC Repository user guide
AIAAIC publishes Values and Code of Conduct
Brazil's Nucleo charts AI incidents using AIAAIC data
AIAAIC cited in ACLU/EPIC/EFF response to trade secrecy protections under California's proposed Risk Assessments and Automated Decisionmaking Technology Regulations
AIAAIC cited as an 'Authority' (pdf) in US Supreme Court mistaken identity case
AIAAIC cited in testimony to US Commission on Civil Rights briefing on government use of facial recognition
AIAAIC cited in Apple (pdf), Disney (pdf) and Paramount (pdf) shareholder AI transparency report proposals
Rao P.S.B., Šćepanović S., Jayagopi D.B., Cherubini M., Quercia D. The AI Model Risk Catalog: What Developers and Researchers Miss About Real-World AI Harms
European Union Intellectual Property Office. The Development of Generative Artificial Intelligence from a Copyright Perspective
Elliott, M. T. J., & MacCarthaigh, M. Accountability and AI: Redundancy, Overlaps and Blind-Spots
Hadan H. et al. Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents
Bengio Y. et al. International AI Safety Report (pdf)
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 licence. More