AIAAIC repository governance

Set up in June 2019, the AIAAIC repository details incidents and controversies driven by and relating to AI, algorithms and automation.

This page sets out how the repository is managed, including how incidents and controversies are identified, assessed, approved and managed.

Governance

AIAAIC is managed as follows:

  • Management: The AIAAIC repository is edited and managed by Charlie Pownall (the Managing Editor).

  • Editors: Every effort is made to ensure the accuracy, fairness and comprehensiveness of the AIAAIC repository. Edits of/updates to the AIAAIC repository may only be made by approved editors. The selection and approval of editors take into account an individual’s background, interests, employment, affiliations, and other criteria. Upon approval, editors are trained in how to edit the repository and can update existing entries without the approval of the Managing Editor.

  • Usage: Anyone can use, copy, redistribute, and adapt the contents of the repository in line with its Attribution 4.0 International (CC BY 4.0) licence. Comments can be added to the repository by all registered users.

  • Complaints: AIAAIC is committed to handling complaints in a fair and transparent manner. Complaints can be submitted by emailing info@aiaaic.org. Complaints are assessed by the Managing Editor and decisions are published on the AIAAIC website.

  • Copyright: Abuse of AIAAIC copyright is handled by the Managing Editor.

Definitions

The following definitions are used for the purposes of the AIAAIC repository:

  • Incident: A sudden known or unknown event (or ‘trigger’) that becomes public and which takes the form of, or can lead to, a disruption, loss, emergency, or crisis. An incident is sometimes classified as a lower-level crisis.

  • Controversy: A heated and prolonged public dispute or debate between one or more parties holding different or opposing opinions. A controversy (or ‘issue’) may arise as a result of, or lead to, one or more incidents.

  • Artificial intelligence (AI): The capability of a machine to imitate intelligent human behaviour (source).

  • Algorithm: A procedure for solving a mathematical problem in a finite number of steps that frequently involves repetition of an operation (source).

  • Automation: The automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labour (source).

  • System: The technology programme, project or product and its governance.

  • ‘Driven by’: A negative event or situation in which an incident and/or controversy is directly triggered by an AI, algorithmic or automation system, including its technology and/or governance. The event or situation may be deliberate or accidental and may be triggered by internal factors such as algorithmic opacity, misleading marketing or poor ethics (or by the perception thereof), or by external factors such as a third-party research report, audit, investigation, cyberattack or data breach, or the gaming/manipulation of the system.

  • ‘Relating to’: The technology and/or its governance may be one of several factors contributing to the incident or controversy and the harm caused, for example, disputes concerning facial recognition, emotion recognition, AI ethics, algorithmic auditing, Uyghur/Xinjiang human rights or climate change.

Incident/controversy identification and collection

The following tools and techniques are used to identify and collect AI, algorithmic and automation incidents and controversies:

  • Identification: A number of tools and techniques are used to identify incidents and controversies, including Google Alerts (keywords include: ‘AI incident’, ‘AI controversy’, ‘AI ethics’, ‘AI transparency’, ‘AI risk’, ‘AI reputation’) and subscriptions to newsletters and websites such as Algorithm Watch, MIT Technology Review, ScienceWiki, Biometric Update, The Markup, Wired, Protocol, Platformer, The Guardian, STAT.

Incidents and controversies are also submitted by researchers, NGOs and others via social media or by using the AIAAIC incident report form.

  • Collection: Actual and potential incidents and controversies are collected using Google Alerts, RSS, email newsletters and other tools, and are stored and listed for tracking, assessment, and approval (see below) in a transparent manner on the Pending sheet/tab.


Incident/controversy assessment and approval

The following criteria are used to assess, approve, and update entries to the AIAAIC repository.

Assessment

  • Relevance: Entries must be driven by or relate to AI, algorithms, and automation (per the above definitions). The repository does not cover super-intelligence, singularity, and related meta controversies.

  • Nature: Sometimes the novelty of a third-party attack, blatant disregard for ethics or lack of openness about an incident speaks for itself and leads to an incident being added to the AIAAIC repository. On the other hand, an incident may not be added if the facts remain unclear or the impacts (see below) are seen to be minimal.

  • Impact(s): An incident or controversy must have resulted – or be seen to have resulted – in clear harm or damage to its users, society, and the environment (i.e. external impacts), and/or to the governance of the organisation designing, developing, deploying, or monitoring the relevant system (i.e. internal impacts).

The harm or damage caused by an incident may range from critical to very low, or even positive (bearing in mind that an incident or controversy can have a positive impact on revenue); it may affect one or more audiences, and it may be directly driven by or indirectly connected with the system.

Internal impacts

Strategic/reputational: Where the incident or controversy negatively impacts the organisation’s business model; impairs its ability to innovate; calls into question its mission or purpose; contravenes its business values and ethics; or where it results in backlashes or boycotts (including campaigns and petitions) by users, customers, employees, politicians, and other stakeholders, and/or in the resignation or termination of senior leaders or team members.

Operational: Where the incident or controversy results in the system being reviewed, strengthened, investigated, suspended, recalled, or terminated. Also covers labour practices; employment; employee engagement, diversity and inclusion; incident, crisis and risk management.

Financial: Where the incident or controversy negatively impacts revenue, margin, market share, stock/share price, or insurance liability.

Legal/regulatory: Where the incident or controversy results in legal, legislative, or regulatory complaints, threats, investigations, warnings, fines, disputes, or litigation.

External impacts

Individual: physical, material, and non-material harm; limitation of civil rights and freedoms, including discrimination and privacy; identity theft and fraud; financial loss; loss of confidentiality; reputation.

Societal: economic, social, and political systems and stability; societal (transport, energy, etc.) and business infrastructure; local community wellbeing and cohesion.

Environmental: air quality; emissions; energy management; water and wastewater management; waste and hazardous materials management; ecological impacts.

  • Trustworthiness: An incident or controversy is likely to have been covered by media sources rated as credible and trustworthy by NewsGuard, JTI, and/or an equivalent news rating organisation or programme. Commentary and analysis by relevant, credible and authoritative opinion-formers (aka ‘influencers’) is also considered.

  • Volume: The more an incident or controversy is covered by the media and opinion-formers, notably the international and business media, and mentioned in social media, the more likely it is to be added to the AIAAIC repository. However, this is not always the case: some incidents draw little attention but have a severe or critical impact on the system under scrutiny.

  • Fairness and accuracy: Each incident and/or controversy is assessed objectively on the basis of publicly available data and facts, and of commentary and analysis by credible and trustworthy media sources and opinion-formers (see above). Links are publicly provided to original research, legal documents, and company statements (where available), and to a range of high-quality media coverage.

Approval

  • New additions/entries to the AIAAIC repository must be approved by the Managing Editor.

Updated: September 17, 2021

Further information

Contact AIAAIC