Transparency is regularly cited as an important, if not essential, pillar of AI governance, ethics, and accountability. However, the transparency of AI and algorithmic systems remains patchy and, in some instances, almost non-existent.
Common explanations for this reluctance to be open about AI systems abound: the potential loss of intellectual property, greater legal liability, and a higher likelihood of manipulation and gaming. And why be open when there is no legal requirement to do so?
With so much focus on the technical work of making neural networks and other technologies intelligible (or explainable) to their designers and developers, many technology professionals' understanding of broader institutional transparency remains paper-thin.
Furthermore, despite widespread public concerns about algorithmic opacity, a spate of high-profile incidents and controversies, and mandatory AI transparency looming in the EU and elsewhere, research indicates transparency is getting worse.