AIAAIC is an independent, non-partisan, grassroots public interest initiative that examines and makes the case for real AI, algorithmic and automation transparency, openness and accountability.
OpenAI's o1 reasoning model may be wowing the tech fraternity, but its system card is light on explaining the company's risk models and fails to disclose the model's training data and environmental impacts.
A wide range of legal and quasi-legal tools and techniques is used to restrict the transparency, openness and accountability of AI systems, sometimes disproportionately and without merit.
Get to grips with incidents and controversies driven by and associated with AI and related technologies, and the implications of the technology and governance systems behind them for individuals, society and the environment.
Equipping researchers, civil society organisations and the general public to better understand and take action on AI and related technology harms and violations.
We love to hear what we're doing right, where we can improve, and what you think we should focus on going forward.
AIAAIC calls for participation in classifying AI and algorithmic harms
AIAAIC publishes community code of conduct
AIAAIC releases taxonomy of AI and algorithmic harms
AIAAIC is cited in testimony to US Commission on Civil Rights briefing on government use of facial recognition
AIAAIC is cited in Apple, Disney and Paramount shareholder AI transparency report proposals
Bracket Foundation, Value for Good GmbH. Generative AI. A New Threat For Online Child Sexual Exploitation and Abuse (pdf)
International Monetary Fund. Global Financial Stability Report - Advances in Artificial Intelligence: Implications for Capital Market Activities
Geneva Academy/ICRC. Expert Consultation Report on AI and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts
Innovate UK BridgeAI. Bridging the AI Divide (pdf)
Marchal M. et al. Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
AIAAIC believes that AI, algorithms, and automation, and the organisations and individuals involved in their design, development, and deployment, must be transparent and honest about their aims and how they go about pursuing them.
Transparency is cited as a core principle of ethical, responsible and trustworthy AI. But it is often approached in a partial, piecemeal and reactive manner.
Here's why real, meaningful transparency and openness of AI, algorithmic and automation systems is needed, and what we believe it should look like.
AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 licence.