Transparency is regularly cited as a core principle of ethical, responsible and trustworthy AI.
However, rhetoric and reality are often poles apart, with transparency approached in a partial, piecemeal and reactive manner.
Here's why real AI and algorithmic transparency and openness are needed, and what they should look like.
The transparency and openness of artificial intelligence, algorithms and automation, and of the organisations that design, develop and deploy them, present significant opportunities and benefits, as well as challenges and risks.
AI, algorithmic and automation systems:
Are fundamental to many people’s daily lives, shaping their experiences, beliefs and behaviours
Are increasingly central to the daily workings of governments, companies, the media and other organisations
Reflect the people who design, build and manage them
The risks of AI, algorithms and automation are increasing
Systems are becoming larger and more complex
Standards are fragmented and legislation is sparse
Understanding, confidence and trust in AI, algorithms and automation are low
Expectations of fair, transparent, and accountable systems are rising
Scrutiny by legislators, regulators, NGOs, and the media is increasing
Societal, environmental and reputational risks are escalating
Many AI, algorithmic and automation systems are opaque and unaccountable
Typically, users have little or no idea how the systems they use, or that assess, nudge, coerce or manipulate them, actually work
Equally, they may not know they are interacting with AI, algorithmic and automation systems at all
The actual or potential workings, limitations, risks and impacts of AI, algorithmic and automation systems are often concealed, disguised or deflected
Many AI and algorithmic systems are unavailable for meaningful external analysis or investigation
AI, algorithmic and automation transparency efforts lack substance
User information tends to be narrow, skin-deep and piecemeal
Feedback, complaint and appeal systems are slow and partial
Transparency efforts remain largely the preserve of technology experts
Transparency and openness are two-edged swords
Offer an opportunity to build confidence, esteem and trust
Increase visibility and expectations whilst exposing AI systems to manipulation, fraud, legal liability and loss of IP
Facilitate and shape AI and algorithmic governance and ethical decision-making, for better or worse
Require compromises and trade-offs
Having weighed the limitations and consequences of transparency against organisational, technical, individual, societal and other risks, every organisation designing, developing or deploying AI, algorithmic or automation systems should adopt the following four principles:
Meaningful
Provide clear, concise, accurate, timely, and usable information
Consistent across all channels and to all audiences
Visible at the first point of contact with the system, and across all relevant touchpoints
Responsive to feedback, complaints and appeals
Honest, including during incidents, crises and controversies
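Meaningful information is easier to keep consistent across channels and audiences when it lives in a single structured record rather than being rewritten for each touchpoint. The Python sketch below shows one possible shape for such a record; the field names, example system and values are illustrative assumptions only, not a published standard or any organisation's actual schema.

```python
# A minimal, illustrative sketch of a machine-readable transparency notice.
# All field names and example values are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class TransparencyNotice:
    system_name: str
    purpose: str            # what the system does, in plain language
    is_automated: bool      # whether decisions are made without human review
    data_used: list[str]    # categories of data the system relies on
    limitations: list[str]  # known limitations and failure modes
    appeal_channel: str     # where users can give feedback, complain or appeal
    last_updated: str

    def first_contact_summary(self) -> str:
        """Render a short, plain-language disclosure for the first point of contact."""
        mode = "an automated system" if self.is_automated else "a system with human review"
        return (
            f"You are interacting with {mode}: {self.system_name}. "
            f"Purpose: {self.purpose}. "
            f"To give feedback or appeal a decision, contact {self.appeal_channel}."
        )


# Hypothetical example system, used only to show how the record renders.
notice = TransparencyNotice(
    system_name="Loan pre-screening model",
    purpose="estimates eligibility before a human underwriter reviews the application",
    is_automated=False,
    data_used=["application form fields", "credit bureau record"],
    limitations=["less accurate for applicants with short credit histories"],
    appeal_channel="appeals@example.org",
    last_updated="2024-10-09",
)
print(notice.first_contact_summary())
```

Because every channel renders from the same record, updating a limitation or appeal route in one place keeps the disclosure consistent everywhere it appears.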
Verifiable
Provide reasonable third-party expert access to data, code and models
Make third-party audit and investigatory reports public
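Publishing audit reports is more useful when readers can confirm that the copy they hold is the unaltered original. One common, lightweight technique is to publish a cryptographic digest of the report alongside it. The sketch below uses placeholder report content to show the idea; it is not any organisation's actual verification process.

```python
# Illustrative sketch: publishing a SHA-256 digest alongside an audit report so
# third parties can verify that the copy they hold matches what was released.
import hashlib


def digest(report_bytes: bytes) -> str:
    """Return the hex SHA-256 digest of a report's raw bytes."""
    return hashlib.sha256(report_bytes).hexdigest()


def verify(report_bytes: bytes, published_digest: str) -> bool:
    """Check a downloaded report against the digest the organisation published."""
    return digest(report_bytes) == published_digest


report = b"Third-party audit report, Q3 2024 (placeholder content)"
published = digest(report)           # the organisation publishes this value
print(published)
print(verify(report, published))                 # reader re-checks their copy -> True
print(verify(report + b"tampered", published))   # any alteration is detectable -> False
```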
Inclusive
Involve users and other relevant stakeholders in system design, development and deployment
Provide formal feedback loops to relevant audiences, and share feedback as widely as possible
Accessible
To all relevant users, including disabled people, the elderly, non-internet users and other disadvantaged groups
Updated: October 9, 2024