Definitions and classifications
Type
System. A technology programme, project, or product, and its governance. A System page is most appropriate for a system that generates multiple incidents.
Examples: Tesla Autopilot; ChatGPT; Oregon DHS Safety at Screening Tool
Incident. A sudden known or unknown event (or ‘trigger’) that becomes public and which takes the form of, or can lead to, a disruption, loss, emergency, or crisis. Most AIAAIC Repository entries are classified as incidents.
Examples: AI system or robot malfunction; Actual or perceived inappropriate or unethical behaviour by a system operator or developer; Data privacy or information confidentiality leak that exposes a system vulnerability
Issue. Concerns publicly raised about the nature and/or potential impacts of a System, but without evidence of a public incident or recognised harms.
Examples: A public debate/controversy about an unlaunched system, technology, or patent; Research indicating unwarranted carbon emissions or water consumption by a system
An issue also applies to a longer-term public debate or dispute relating to the poor, inappropriate, or unethical governance of an AI, algorithmic, or automation system that may arise because of, or lead to, one or more incidents, or, if properly managed, may subside or disappear over time.
Examples: Research that results in a debate about AI ethics; The unethical, opaque use of under-paid, offshore ghost-workers to label data and moderate content; Regular or systemic misleading marketing or undue hyping of an AI system
Data. A public or proprietary dataset/database that has been shown to be inaccurate, unreliable, biased, overly intrusive, etc., and/or that results in issues or incident(s) directly or indirectly associated with the AI, algorithmic, or automation system(s) that draw(s) on it.
Examples: Google GoEmotions dataset mis-labelling; People in Photo Albums (PIPA) dataset; Stanford University
Released
The year (and, on the website, month) a system or dataset/database is soft-launched and/or formally launched.
Occurred
The year (and, on the website, month) an incident or issue occurs, or first occurs.
Country
The geographic origin and/or primary extent of the system/incident/issue/data.
Sector
The industry sector (including government and non-profit) primarily targeted by the system or adversarial attack.
Aerospace; Agriculture; Automotive; Banking/financial services; Business/professional services; Chemicals; Construction; Consumer goods; Cosmetics; Education; Food/food services; Gambling; Govt - home/interior, welfare, police, justice, security, defence, foreign, etc; Health; Media/entertainment/sports/arts; NGO/non-profit/social enterprise; Manufacturing/engineering; Politics; Private – individual, family; Real estate sales/management; Retail; Technology; Telecoms; Tourism/leisure; Transport/logistics
Operator
The individual(s) or organisation(s) operating (or ‘deploying’) the system or dataset/database involved in an incident or issue.
Developer
The individual(s) or organisation(s) involved in developing the system or dataset/database, and in supplying data, code, or advice.
System name
The name of the system or dataset/database involved in an incident or issue.
Technology
The type(s) of technology deployed in the system. These include the following technology types and their application(s):
Advertising management system; Advertising authorisation system; Age detection; Anomaly detection; Augmented reality (AR); Automatic identification system (AIS); Automated license plate/number recognition (ALPR, ANPR); Behavioural analysis; Biodiversity assessment algorithm; Blockchain; Border control system; Bot/intelligent agent; Chatbot; Collaborative filtering; Colourisation; Computer vision; Content labelling system; Content ranking system; Content recommendation system; Credit score algorithm; Data matching algorithm; Decision support; Deepfake - video, audio, image, text; Deep learning; Diversity recognition; Driver assistance system; Drone; Emotion detection; Emotion recognition; Environmental sensing; Expert system; Facial analysis; Facial detection; Facial identification; Facial recognition; Fingerprint biometrics; Gait recognition; Generative adversarial network (GAN); Gender detection; Gesture recognition; Gun/weapons detection; Gunshot detection; Heart rate variability algorithm; Image classification; Image segmentation; Image recognition; Learning management system; Link blocking; Location tracking; Location recognition; Machine learning; Mask recognition; Neural network; NLP/text analysis; Object detection; Object recognition; Order matching algorithm; Palm print scanning; Pattern recognition; Pay algorithm; Performance scoring algorithm; Personality analysis; Prediction algorithm; Pricing algorithm; Probabilistic genotyping; Rating system; Recommendation algorithm; Reinforcement learning; Resource assessment algorithm; Risk assessment algorithm; Robotic Process Automation (RPA); Robotics; Routing algorithm; Safety management system; Saliency algorithm; Scheduling algorithm; Search engine algorithm; Self-driving system; Signal processing; Social media monitoring; Sleep sensing; Smile recognition; Speech-to-text; Speech recognition; Suicide prevention algorithm; Text-to-image; Triage algorithm; Virtual currency; Virtual reality (VR); Vital signs detection; Voice recognition; Voice synthesis; Web accessibility overlay; Workforce management system
Purpose
The aim(s) of the system or dataset/database. The aim(s) may be stated, likely, or alleged, and may include:
Rank content/search results; Recommend users/groups; Recommend content; Moderate content; Minimise mis/disinformation; Identify/verify identity; Increase productivity; Increase speed to market; Improve quality; Engage users/increase revenue; Improve customer service; Increase visibility; Increase revenue/sales; Increase engagement; Improve insights; Cut costs; Defraud; Scare/confuse/destabilise; Damage reputation
Media trigger
The internal or external trigger for a public issue or incident. These may take the form of one or more of the following:
Academic research paper/study/report; Artwork/prank; Audit report publication; Commercial investigation/experiment/hack; Commercial research study/report; Developer research study/report; Data breach/leak; FOI(A)/public records request; Investor presentation; Media investigation/coverage; Media report; Non-profit research study/report/investigation; Parliamentary enquiry/report; Patent application/approval; Police investigation/arrest; Product demonstration/release/launch; Professional body paper/study/report; Public comments/complaints; Regulatory investigation/action; Regulator paper/study/report; Researcher investigation/paper/study/report; Lab/product test; Lawsuit filing/litigation; Legal communication; Legal complaint/threat; Legislative proposal; SEC/regulatory filing; Statutory body paper/study/report; User comments/complaints; Whistleblower; White-hat hack
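Taken together, the fields above (Type, Released, Occurred, Country, Sector, Operator, Developer, System name, Technology, Purpose, and Media trigger) describe how each repository entry is classified. The Python sketch below is purely illustrative of how such an entry could be represented as a simple record; the EntryType and Entry names, the field names, and the example values are assumptions made for this sketch and do not describe any official AIAAIC data format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class EntryType(Enum):
    # The four entry types defined above (labels are illustrative only).
    SYSTEM = "System"
    INCIDENT = "Incident"
    ISSUE = "Issue"
    DATA = "Data"


@dataclass
class Entry:
    """Hypothetical representation of a single repository entry."""
    entry_type: EntryType
    released: Optional[int] = None                           # Year the system/dataset was soft-launched and/or formally launched
    occurred: Optional[int] = None                           # Year the incident or issue occurred, or first occurred
    country: Optional[str] = None                            # Geographic origin and/or primary extent
    sector: Optional[str] = None                             # Industry sector primarily targeted
    operator: List[str] = field(default_factory=list)        # Individual(s)/organisation(s) operating the system
    developer: List[str] = field(default_factory=list)       # Individual(s)/organisation(s) developing it or supplying data, code, or advice
    system_name: Optional[str] = None                        # Name of the system or dataset/database
    technology: List[str] = field(default_factory=list)      # Technology types deployed
    purpose: List[str] = field(default_factory=list)         # Stated, likely, or alleged aims
    media_trigger: List[str] = field(default_factory=list)   # Trigger(s) that made the issue/incident public


# Placeholder values only; not a real repository record.
example = Entry(
    entry_type=EntryType.INCIDENT,
    occurred=2023,
    country="USA",
    sector="Automotive",
    technology=["Self-driving system", "Computer vision"],
    purpose=["Improve quality"],
    media_trigger=["Media investigation/coverage"],
)
```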
Risks
The risks posed by the system, its governance, and the external context, or by its misuse by bad actors.
Accuracy/reliability. The degree of accuracy of a system’s performance against stated objectives; and how reliable a system is in meeting those objectives, including whether it may deteriorate, malfunction, or fail.
Risks include: System inaccuracy; System deterioration; System malfunction; System failure
Anthropomorphism. The attribution of human traits, emotions, intentions, or behaviours to non-human entities such as AI and robotics systems. Sometimes seen as a risk, especially in the West, anthropomorphism can also be seen as a benefit/opportunity, notably in Japan.
Risks include: Robot/bot emotionalisation; Robot/bot addiction; Robot/bot divination; Robot/bot fetishisation; Claims of algorithmic sentience; Robot rights
Bias/discrimination. A system that produces results that are systemically unfair/prejudiced due to poor governance, poor data quality, or erroneous assumptions in the machine learning process.
Risks include, as legally protected classes (source): Age; Race; Ethnicity; Gender; Gender identity/expression; Sexual orientation (LGBTQ); Religion; Disability; Income; Profession/job; Location; Political persuasion; Familial status; Marital status; Military or veteran status; Pregnancy; National origin; National extraction/social origin; Family/carer responsibility; Recipient of public assistance
Dual/multi-use. The actual or potential use, abuse, and misuse of a technology or system for multiple purposes, including military use.
Risks include: Deliberate development of dangerous/lethal weapons; Accidental development of dangerous/lethal weapons; Inappropriate/unethical data sharing; Inappropriate/unethical code sharing; Inappropriate/unethical model sharing
Employment. The inappropriate, unethical or illegal use of AI in the workplace, and the development and supply of such systems for employment-related purposes.
Risks include: Opaque or unfair employee recruitment; Opaque, excessive, or illegal employee surveillance; Opaque or unfair employee pay or compensation; Opaque or unfair employee terminations; Opaque, excessive, or inaccurate workplace safety monitoring; Excessive, unethical, or illegal workplace automation; Opaque, unfair, or disproportionate workforce dislocation or replacement
Environment. The management of a system’s actual and potential environmental impacts.
Risks include: Carbon dioxide emissions; Water consumption; Water pollution; Waste disposal; Hazardous materials disposal; Ecology/biodiversity; Rare earth consumption; Local community rights and freedoms
Governance. The strategic and operational management of a technology system.
Risks include: Appropriateness/need; Business model; Capabilities/skills; Capacity/resources; Competition/collusion; Complaints/appeals; Conflicts of interest; Dual/multi-use; Effectiveness/value; Hypocrisy; Incentivisation; Inclusiveness/diversity; Leadership understanding/support; Oversight/review; Ownership/accountability; Purpose; Risk management; Scope creep/normalisation; Supply chain management; Values/culture/ethics
Human/civil rights. A system’s ability to directly or indirectly erode or impair the human and civil rights and freedoms of a user, group of users, or others.
Risks include: Freedom of expression (censorship, assembly, association, petition, press, religion, speech); Freedom of information (including right to know); Right to collective action; Right to work; Right to due process; Protection of minors, elderly, and disabled people
Legal. A system’s actual and potential ability to expose users and others to legal risks, as well as legal exposure to the organisation(s) developing and operating it.
Risks include: Compliance; Confidentiality; Competition/price fixing; Defamation/libel; IP/copyright; Identity theft/impersonation; Fraud; Necessity/proportionality; Product liability; Privacy
Mis/disinformation. Information and data that deceives – accidentally or deliberately – end users and others.
Risks include: Training data; Algorithm design; Software guardrails; User policies/terms; Usage monitoring; Usage enforcement; Complaints/appeals process
Privacy. The personal privacy of users of AI, algorithmic, and automation systems, and of people whose data has or is being used to train these systems.
Risks include: Data collection; Data cleaning; Data anonymisation; Data encryption; Data breach; Data leak; Cyberattack; Third-party data security
Safety. The physical and psychological safety of users and others posed by a system and its governance.
Risks include: Addiction; Anxiety/depression; Self-harm/suicide; Radicalisation; Hate speech/violence; Harassment/abuse; Intimidation; Shaming; Stalking
Security. The technological security of AI, algorithmic and automation systems, and their governance.
Risks include: Adversarial attacks; Data leak/exposure; Data theft; Confidentiality breach; IP/copyright abuse; Identity theft/impersonation; Fraud
Surveillance. The use of AI, algorithms and automation to monitor people and organisations for policing, security, military, education, workplace, housing, and other purposes.
Risks include: Appropriateness/need; Business model; Data breach; Data leak; Dual/multi-use; Scope creep/normalisation; Compliance; Necessity/proportionality; Whistleblower
Transparency. The degree and manner in which a system is visible, accessible, and understandable to users and other stakeholders, and contestable by them.
Risks include:
Governance. Public disclosure of the purpose and management of the system.
Privacy. Public disclosure of the collection, processing and storage of personal and sensitive data.
Complaints/appeals. Guidance and procedures on how users and others can make a complaint or file an appeal.
Black box access. Third-party ability to understand, assess, audit, or investigate a system and/or its governance.
Legal. Use of legal and/or quasi-legal tools to restrict meaningful access to a system, or to the disclosure of clear, understandable information about it.
Marketing. Communication and promotion of the system.
Design/experience. Design and usability of the system.
Harms
(Harm data is only available to Premium Members.)
External
Individual. Deliberate or negligent impact(s) of a system on individuals or small groups of people using it or exposed to its misuse.
Harms include: Damage to physical property; Damage to physical health and safety; Psychological damage; Personal agency/autonomy loss; Chilling effects; Discrimination; IP and identity loss; Personal data loss; Limitation or loss of rights and freedoms; Financial loss; Reputational damage
Societal. Impact of a system on an organisation, physical community, or society.
Harms include: Damage to economic, social, and political systems/stability; Damage to national security; Damage to social infrastructure, e.g. transport and energy systems; Damage to business operations and infrastructure; Damage to public health; Loss of community well-being and cohesion
Environmental. Impact of a system on the environment.
Harms include: Air pollution; Water pollution; Water shortages; Energy shortages; Excessive carbon emissions; Excessive landfill; Ground pollution; Ecological/biodiversity damage
Internal
Strategic/reputational. Impact of a system on the strategy and/or reputation of the organisation/individual commissioning/deploying it.
Harms include: CEO/senior leadership termination; Inability to recruit; Inability to raise capital; Customer backlash/boycott; Partner backlash; Supplier backlash; Political backlash; Loss of credibility/confidence/trust; Loss of license to operate
Operational. Impact of a system on the operations of the organisation/individual commissioning/deploying it.
Harms include: System review/update; System suspension; System/product recall; System termination; Internal/external assessment; Internal/external audit; Internal/external investigation; Employee complaints/petitions; Employee walk-outs/strikes; Employee loss/churn
Financial. Impact of a system on the finances of the organisation/individual commissioning/deploying it.
Harms include: Revenue loss; Margin loss; Market share loss; Share price loss; Liability loss; Fines
Legal/regulatory. Impact of a system on the legal/regulatory/compliance activities of the organisation/individual commissioning/deploying it.
Harms include: Legal complaint/filing; Litigation; Regulatory inquiry/investigation; Legislative inquiry; Legislative questions/complaints.
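As a purely illustrative extension of the sketch above, the Risks and Harms classifications could be attached to the same kind of entry as simple lists of category labels, with external and internal harms kept separate to mirror the grouping used above. The Assessment name and its fields are assumptions made for this sketch, not an official AIAAIC format.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Assessment:
    """Hypothetical risk/harm tagging for an entry, mirroring the categories above."""
    risks: Dict[str, List[str]] = field(default_factory=dict)           # e.g. {"Privacy": ["Data breach"]}
    external_harms: Dict[str, List[str]] = field(default_factory=dict)  # Individual, Societal, Environmental
    internal_harms: Dict[str, List[str]] = field(default_factory=dict)  # Strategic/reputational, Operational, Financial, Legal/regulatory


# Placeholder values only.
assessment = Assessment(
    risks={"Accuracy/reliability": ["System malfunction"], "Safety": ["Harassment/abuse"]},
    external_harms={"Individual": ["Psychological damage"]},
    internal_harms={"Legal/regulatory": ["Regulatory inquiry/investigation"]},
)
```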
Last updated: May 23, 2023