Transparency is regularly cited as an important, if not essential, pillar of AI governance, ethics, and accountability. However, the transparency of AI and algorithmic systems remains patchy and, in some instances, almost non-existent.
There are many common explanations for this reluctance to be open about AI systems: the potential loss of intellectual property, greater legal liability, and a higher likelihood of manipulation and gaming. And why be open when there is no legal requirement to do so?
With much focus on the technical aspects of making neural networks and other technologies intelligible (or explainable) to their designers and developers, understanding of broader institutional transparency remains paper-thin among many technology professionals.
Furthermore, despite widespread public concern about algorithmic opacity, a spate of high-profile incidents and controversies, and mandatory AI transparency requirements looming in the EU and elsewhere, research indicates that transparency is getting worse.
To understand current practices and inform the development of an AI transparency framework, taxonomy, and white paper, AIAAIC is researching answers to the following types of questions:
What are the current models, types, and forms of transparency for organisations using AI, algorithms, and automation?
How are these models, types, and forms applied in practice?
The research comprises:
Literature review of current transparency models, types, and forms used by governments, companies, and civil society organisations designing and using AI, algorithmic, and automation systems
Scan of the AIAAIC repository for incidents and controversies driven by, or seen to be driven by, inadequate or poor transparency
Scan of corporate websites, apps, reports, etc. using relevant keywords and phrases.
Future research will set the findings of this study against observed transparency best practices from corporate financial reporting, law, communications, marketing, and other fields.