Principles, frameworks, and codes for the responsible, ethical and/or trustworthy use of AI abound – tools that provide a decent starting point for organisations to consider deeper questions concerning the governance, transparency, and accountability of AI systems.
Yet these tools can easily appear meaningless and hollow unless they are widely understood and observed across the organisation itself, and swiftly followed up with substantive, concrete action.
Transparency is often cited as a core principle of ethical AI, responsible AI, and trustworthy AI. Yet it is often practised in a partial, piecemeal, and reactive manner. There are many reasons for this, including the legitimate need to protect IP, stymie manipulation and gaming of the system, and minimise legal liability.
Opacity also provides convenient cover for an organisation that does not want to admit publicly that it doesn’t understand how its own system works, that says one thing and does another, or that markets its system misleadingly or inaccurately.
The opacity of AI and algorithmic systems erodes confidence and trust. Given the importance of these technologies to governments and economies across the world, policymakers have started to mandate the transparency of AI and algorithmic systems.