AIAAIC is an independent, non-partisan, grassroots public interest initiative that examines and makes the case for real AI, algorithmic and automation transparency, openness and accountability.

OpenAI's o1 reasoning model may be wowing the tech fraternity, but its system card reveals the company is light on explaining its risk models and fails to disclose its training data and environmental impacts.

A wide range of legal and quasi-legal tools and techniques are used to restrict the transparency, openness and accountability of AI systems, sometimes disproportionately and without merit.

SUBSCRIBE 📧 | DONATE 💲 | VOLUNTEER 💁 | PARTNER 👯

Get to grips with incidents and controversies driven by and associated with AI and related technologies, and the implications of the technology and governance systems behind them for individuals, society and the environment.

Equipping researchers, civil society organisations and the general public to better understand and take action on AI and related technology harms and violations.

We love to hear what we're doing right, where we can improve, and what you think we should focus on going forward.

Featured in IEEE Spectrum, Libération, and Hindustan Times.


AIAAIC believes that AI, algorithms, and automation, and the organisations and individuals involved in their design, development, and deployment, must be transparent and honest about their aims and how they go about pursuing them.

Photo of 'Robot in Harajuku' by MaximalFocus

Transparency is cited as a core principle of ethical, responsible and trustworthy AI, but it is often approached in a partial, piecemeal and reactive manner.

Here's why real, meaningful transparency and openness of AI, algorithmic and automation systems is needed, and what we believe it should look like.

AIAAIC content is available to use, copy, adapt, and redistribute under a CC BY-SA 4.0 licence.
