Classifications and definitions
Each entry is classified as a System, Incident, Issue, or Data.
System. A technology programme, project, or product, and its governance. A System entry is most appropriate for a system that generates multiple incidents.
Examples: Amazon Buy Box; ChatGPT; Oregon DHS Safety at Screening Tool
Incident. A sudden known or unknown event (or ‘trigger’) that becomes public and which takes the form of, or can lead to, a disruption, loss, emergency, or crisis. Most AIAAIC Repository entries are classified as incidents.
Examples: AI system or robot malfunction; Actual or perceived inappropriate or unethical behaviour by a system operator or developer; Data privacy or info confidentiality leak exposes system vulnerability
Issue. Concerns publicly raised about the nature and/or potential impacts of a System, but without evidence of a public incident or recognised harms.
Examples: A public debate/controversy about an unlaunched system, technology, or patent; Research indicating unwarranted carbon emissions or water consumption by a system
An Issue also applies to a longer-term public debate or dispute over the poor, inappropriate, or unethical governance of an AI, algorithmic, or automation system. Such a debate may arise because of, or lead to, one or more incidents, or, if properly managed, may subside or disappear over time.
These may directly or indirectly relate to topics such as: business model; competition/collusion; employment; marketing; surveillance; anthropomorphism; ethics; supply chain management; and the environment.
Examples: Research that results in a debate about AI ethics; The unethical, opaque use of under-paid, offshore ghost-workers to label data and moderate content; Regular or systemic misleading marketing or undue hyping of an AI system
Data. A public or proprietary dataset/database that has been shown to be inaccurate, unreliable, biased, overly intrusive, etc., and/or that results in issues or incidents directly or indirectly associated with the AI, algorithmic, or automation system(s) that draw on it.
Launched. The year (and, on the website, the month) a system or dataset/database is soft-launched and/or formally launched.
Occurred. The year (and, on the website, the month) an incident or issue occurs, or first occurs.
Location. The geographic origin and/or primary extent of the system, incident, issue, or data.
Sector. The industry sector (including government and non-profit) primarily targeted by the system or adversarial attack.
Aerospace; Agriculture; Automotive; Banking/financial services; Beauty/cosmetics; Business/professional services; Chemicals; Construction; Consumer goods; Education; Food/food services; Gambling; Government (home/interior, welfare, police, justice, security, defence, foreign, etc.); Health; Manufacturing/engineering; Media/entertainment/sports/arts; NGO/non-profit/social enterprise; Politics; Private (individual, family); Real estate sales/management; Retail; Technology; Telecoms; Tourism/leisure; Transport/logistics
Developer. The name(s) of the individual(s) or organisation(s) involved in developing the system or dataset/database, and in supplying data, code, or advice.
Technology. The type(s) of technology deployed in the system. These include the following technology types and their application(s):
Advertising management system; Advertising authorisation system; Age detection; Anomaly detection; Augmented reality (AR); Automatic identification system (AIS); Automated license plate/number recognition (ALPR, ANPR); Behavioural analysis; Biodiversity assessment algorithm; Blockchain; Border control system; Bot/intelligent agent; Chatbot; Collaborative filtering; Colourisation; Computer vision; Content labelling system; Content ranking system; Content recommendation system; Credit score algorithm; Data matching algorithm; Decision support; Deepfake - video, audio, image, text; Deep learning; Diversity recognition; Driver assistance system; Drone; Emotion detection; Emotion recognition; Environmental sensing; Expert system; Facial analysis; Facial detection; Facial identification; Facial recognition; Fingerprint biometrics; Gait recognition; Generative adversarial network (GAN); Gender detection; Gesture recognition; Gun/weapons detection; Gunshot detection; Heart rate variability algorithm; Image classification; Image segmentation; Image recognition; Learning management system; Link blocking; Location tracking; Location recognition; Machine learning; Mask recognition; Neural network; NLP/text analysis; Object detection; Object recognition; Order matching algorithm; Palm print scanning; Pattern recognition; Pay algorithm; Performance scoring algorithm; Personality analysis; Prediction algorithm; Pricing algorithm; Probabilistic genotyping; Rating system; Recommendation algorithm; Reinforcement learning; Resource assessment algorithm; Risk assessment algorithm; Robotic process automation (RPA); Robotics; Routing algorithm; Safety management system; Saliency algorithm; Scheduling algorithm; Search engine algorithm; Self-driving system; Signal processing; Social media monitoring; Sleep sensing; Smile recognition; Speech-to-text; Speech recognition; Suicide prevention algorithm; Text-to-image; Triage algorithm; Virtual currency; Virtual reality (VR); Vital signs detection; Voice recognition; Voice synthesis; Web accessibility overlay; Workforce management system
Purpose. The aim(s) of the system or dataset/database. The aim(s) may be stated, likely, or alleged, and may include:
Rank content/search results; Recommend users/groups; Recommend content; Moderate content; Minimise mis/disinformation; Identify/verify identity; Increase productivity; Increase speed to market; Improve quality; Engage users/increase revenue; Improve customer service; Increase visibility; Increase revenue/sales; Increase engagement; Improve insights; Cut costs; Defraud; Scare/confuse/destabilise; Damage reputation
Trigger. The internal or external trigger for a public issue or incident. Triggers may take the form of one or more of the following:
Academic research paper/study/report; Artwork/prank; Audit report publication; Commercial investigation/experiment/hack; Commercial research study/report; Developer research study/report; Data breach/leak; FOI(A)/public records request; Investor presentation; Media investigation/coverage; Media report; Non-profit research study/report/investigation; Parliamentary enquiry/report; Patent application/approval; Police investigation/arrest; Product demonstration/release/launch; Professional body paper/study/report; Public comments/complaints; Regulatory investigation/action; Regulator paper/study/report; Researcher investigation/paper/study/report; Lab/product test; Lawsuit filing/litigation; Legal communication; Legal complaint/threat; Legislative proposal; SEC/regulatory filing; Statutory body paper/study/report; User comments/complaints; Whistleblower; White-hat hack
Risks. The risks posed by a system, its governance, or by third parties, or revealed during an incident or issue. Risks may be known, partially unknown, or unknown.
Harms. The actual negative impacts caused by an incident, system, or dataset/database. Harms may be caused directly (sometimes known as ‘first-level’ harms) or indirectly (‘second-level’ harms).
Harm data is only accessible to Premium Members.
Harms include: Damage to physical property; Damage to physical health and safety; Psychological damage; Personal agency/autonomy loss; Chilling effects; Discrimination; IP and identity loss; Personal data loss; Limitation or loss of rights and freedoms; Financial loss; Reputational damage
Strategic/reputational. Impact of a system on the strategy and/or reputation of the organisation/individual commissioning/deploying it.
Harms include: CEO/senior leadership termination; Inability to recruit; Inability to raise capital; Customer backlash/boycott; Partner backlash; Supplier backlash; Political backlash; Loss of credibility/confidence/trust; Loss of license to operate
Legal/regulatory harms include: Legal complaint/filing; Litigation; Regulatory inquiry/investigation; Legislative inquiry; Legislative questions/complaints
Last updated: June 21, 2023