Classifications and definitions
The AIAAIC Repository details incidents and controversies driven by the misuse of artificial intelligence, algorithms and automation.
Artificial intelligence (AI). The capability of a machine to imitate intelligent human behaviour.
Algorithm. A procedure for solving a mathematical problem in a finite number of steps that frequently involves repetition of an operation.
Automation. The automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labour.
Exclusions
AIAAIC does not (currently) collect data for the following issues or technologies:
Geo-political issues, such as trade disputes
Legislation and standards, actual or proposed
Cryptography
NFTs, DAOs, DeSci/DeFi, etc
CRISPR
Genomics/genetic algorithms
Quantum computing
AGI/super-intelligence/singularity.
Classifications
Additions to the AIAAIC Repository (sheet and website) are classified and managed in line with the definitions below, which may be revised over time.
Type
Each entry is classified as an Incident, Issue, System, or Data.
Incident. A sudden known or unknown event (or ‘trigger’) that becomes public and takes the form of a disruption, loss, emergency, or crisis. Most AIAAIC Repository entries are classified as incidents.
Examples: AI system or robot malfunction; Actual or perceived inappropriate or unethical behaviour by a system operator or developer; Data privacy or information confidentiality leak that exposes a system vulnerability
Issue. Concerns publicly raised about the nature and/or potential impacts of a System, but without evidence of recognised harms.
Examples: A public debate/controversy about an unlaunched system, technology, or patent; Research indicating unwarranted carbon emissions or water consumption by a system
An issue also applies to a longer-term public debate or dispute relating to the poor, inappropriate, or unethical governance of an AI, algorithmic, or automation system that may arise because of, or lead to, one or more incidents, or that, if properly managed, may subside or disappear over time. These may directly or indirectly relate to topics such as business models, competition/collusion, employment, marketing, surveillance, anthropomorphism, ethics, supply chain management, and the environment.
Examples: Research that results in a debate about AI ethics; The unethical, opaque use of under-paid, offshore ghost-workers to label data and moderate content; Regular or systemic misleading marketing or undue hyping of an AI system
System. A technology programme, project, or product, and its governance. A System page is most appropriate for a system generating multiple incidents.
Examples: Amazon Buy Box; ChatGPT chatbot; Oregon DHS Safety at Screening Tool
Data. A public or proprietary dataset/database that has been shown to be inaccurate, unreliable, biased, overly intrusive, etc, and/or that results in issues or incident(s) directly or indirectly associated with the AI, algorithmic, or automation system(s) that draw(s) on it.
Examples: People in Photo Albums (PIPA) dataset; Stanford University Brainwash cafe facial recognition dataset; Google
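Because every entry carries exactly one of these four classifications, the scheme can be modelled as a simple enumeration. The sketch below is a hypothetical Python illustration, not part of the AIAAIC taxonomy itself; the class and member names are assumptions.

```python
from enum import Enum

class EntryType(Enum):
    """Hypothetical encoding of the four AIAAIC entry classifications."""
    INCIDENT = "Incident"  # Sudden public event: disruption, loss, emergency, or crisis
    ISSUE = "Issue"        # Publicly raised concerns, without evidence of recognised harms
    SYSTEM = "System"      # A technology programme/project/product and its governance
    DATA = "Data"          # A dataset/database shown to be inaccurate, biased, intrusive, etc.
```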
Released
The year (and, on the website, month) a system or dataset/database is soft-launched and/or formally launched.
Occurred
The year (and, on the website, month) an incident or issue occurs, or first occurs.
Country(ies)
The geographic origin and/or primary extent of the system/incident/issue/data.
Sector(s)
The industry (including government and non-profit) sector primarily targeted by the system or adversarial attack.
Aerospace; Agriculture; Automotive; Banking/financial services; Beauty/cosmetics; Business/professional services; Chemicals; Construction; Consumer goods; Education; Food/food services; Gambling; Govt - home/interior, welfare, police, justice, security, defence, foreign, etc; Health; Media/entertainment/sports/arts; NGO/non-profit/social enterprise; Manufacturing/engineering; Politics; Private – individual, family; Real estate sales/management; Retail; Technology; Telecoms; Tourism/leisure; Transport/logistics
Deployer(s)
The name of the individual(s), group(s), or organisation(s) deploying/managing the system or dataset/database involved in an incident or issue on a day-to-day basis, or the platform(s) on which the system is hosted or carried.
Developer(s)
The name of the individual(s) or organisation(s) that design, develop, or provide the system or dataset/database; that commission a system to be developed with a view to placing it on the market or putting it into service under their own name or trademark, whether for payment or free of charge; or that adapt general-purpose systems for a specific intended purpose. There may be multiple providers along the system lifecycle.
System name(s)
The name of the system, set of systems, or dataset/database involved in an incident or issue.
Technology(ies)
The type(s) of technology deployed in the system. These include the following technology types and their application(s):
Advertising management system; Advertising authorisation system; Age detection; Anomaly detection; Augmented reality (AR); Automatic identification system (AIS); Automated license plate/number recognition (ALPR, ANPR); Behavioural analysis; Biodiversity assessment algorithm; Blockchain; Border control system; Bot/intelligent agent; Chatbot; Collaborative filtering; Colourisation; Computer vision; Content labelling system; Content ranking system; Content recommendation system; Credit score algorithm; Data matching algorithm; Decision support; Deepfake - video, audio, image, text; Deep learning; Diversity Recognition; Driver assistance system; Drone; Emotion detection; Emotion recognition; Environmental sensing; Expert system; Facial analysis; Facial detection; Facial identification; Facial recognition; Fingerprint biometrics; Gait recognition; Generative adversarial network (GAN); Gender detection; Gesture recognition; Gun/weapons detection; Gunshot detection; Heart rate variability algorithm; Image classification; Image segmentation; Image recognition; Learning management system; Link blocking; Location tracking; Location recognition; Machine learning; Mask recognition; Neural network; NLP/text analysis; Object detection; Object recognition; Order matching algorithm; Palm print scanning; Pattern recognition; Pay algorithm; Performance scoring algorithm; Personality analysis; Prediction algorithm; Pricing algorithm; Probabilistic genotyping; Rating system; Recommendation algorithm; Reinforcement learning; Resource assessment algorithm; Risk assessment algorithm; Robotic Process Automation (RPA); Robotics; Routing algorithm; Safety management system; Saliency algorithm; Scheduling algorithm; Search engine algorithm; Self-driving system; Signal processing; Social media monitoring; Sleep sensing; Smile recognition; Speech-to-text; Speech recognition; Suicide prevention algorithm; Text-to-image; Triage algorithm; Virtual currency; Virtual reality (VR); Vital signs detection; Voice recognition; Voice synthesis; Web accessibility overlay; Workforce management system
Purpose
The aim(s) of the system or dataset/database. The aim(s) may be stated, likely, or alleged, and may include:
Rank content/search results; Recommend users/groups; Recommend content; Moderate content; Minimise mis/disinformation; Identify/verify identity; Increase productivity; Increase speed to market; Improve quality; Engage users/increase revenue; Improve customer service; Increase visibility; Increase revenue/sales; Increase engagement; Improve insights; Cut costs; Defraud; Scare/confuse/destabilise; Damage reputation
Media trigger(s)
The internal or external trigger for a public issue or incident. These may take the form of one or more of the following:
Academic research paper/study/report; Artwork/prank; Audit report publication; Commercial investigation/experiment/hack; Commercial research study/report; Developer research study/report; Data breach/leak; FOI(A)/public records request; Investor presentation; Media investigation/coverage; Media report; Non-profit research study/report/investigation; Parliamentary enquiry/report; Patent application/approval; Police investigation/arrest; Product demonstration/release/launch; Professional body paper/study/report; Public comments/complaints; Regulatory investigation/action; Regulator paper/study/report; Researcher investigation/paper/study/report; Lab/product test; Lawsuit filing/litigation; Legal communication; Legal complaint/threat; Legislative proposal; SEC/regulatory filing; Statutory body paper/study/report; User comments/complaints; Whistleblower; White-hat hack
Issue(s)
An issue is a topic of concern posed by a technology system or its governance, and/or by its misuse or abuse by third parties.
Accessibility. The ability of users, including the disabled, the elderly, non-internet users, and other disadvantaged people, to access and engage with a system.
Accountability. The failure of a system's designer, developer, or deployer to respond to enquiries, complaints, or appeals, or to provide the ability for third parties to meaningfully audit and investigate its human and technological behaviour and outputs.
Accuracy/reliability. The degree of accuracy of a system’s performance against stated objectives; and how reliable a system is in meeting those objectives, including whether it may deteriorate, malfunction or fail.
Authenticity/integrity. The design and use of a system in a genuine and true manner, as opposed to its use for impersonation or other insincere or dishonest purposes.
Anthropomorphism. The attribution of human traits, emotions, intentions, or behaviours to non-human entities such as AI and robotics systems.
Bias/discrimination. A system that produces results that are systemically unfair or prejudiced due to poor governance, poor data quality, or erroneous assumptions in the machine learning process.
Dual/multi-use. The actual or potential use, misuse and abuse of a system for purposes beyond its original stated purpose, including military use.
Employment. The inappropriate, unethical or illegal use of AI in the workplace, and the development and supply of such systems for employment-related purposes.
Environment. The management of a system’s actual and potential environmental impacts.
Human/civil rights. A system’s ability to directly or indirectly erode or impair the human and civil rights and freedoms of a user, group of users, or others.
Mis/disinformation. Information and data that deceive end users and others, whether accidentally or deliberately.
Privacy. The personal privacy of users of AI, algorithmic, and automation systems, and of people whose data has been or is being used to train these systems.
Safety. Risks to the physical and psychological safety of users and others posed by a system and its governance.
Security. The technological security of a system and its governance.
Surveillance. The use of a system to monitor people and organisations for policing, security, military, education, workplace, housing and other purposes.
Transparency. The degree and manner in which a system is understandable to users, policymakers and other stakeholders.
Harm(s)
Harms are the actual negative impacts caused by an incident, system, or dataset/database. Harms may be caused directly (sometimes known as ‘first-level’ harms) or indirectly (‘second-level’ harms).
Harm data is only accessible to Premium Members.
External
Individual. Deliberate or negligent impact(s) of a system on individuals or small groups of people using it or exposed to its misuse.
Harms include: Damage to physical property; Damage to physical health and safety; Psychological damage; Personal agency/autonomy loss; Chilling effects; Discrimination; IP and identity loss; Personal data loss; Limitation or loss of rights and freedoms; Financial loss; Reputational damage
Societal. Impact of a system on an organisation, physical community, or society.
Harms include: Damage to economic, social, and political systems/stability; Damage to national security; Damage to social infrastructure eg. transport, energy systems; Damage to business operations and infrastructure; Damage to public health; Loss of community well-being and cohesion
Environmental. Impact of a system on the environment.
Harms include: Air pollution; Water pollution; Water shortages; Energy shortages; Excessive carbon emissions; Excessive landfill; Ground pollution; Ecological/biodiversity damage
Internal
Strategic/reputational. Impact of a system on the strategy and/or reputation of the organisation/individual commissioning/deploying it.
Harms include: CEO/senior leadership termination; Inability to recruit; Inability to raise capital; Customer backlash/boycott; Partner backlash; Supplier backlash; Political backlash; Loss of credibility/confidence/trust; Loss of license to operate
Operational. Impact of a system on the operations of the organisation/individual commissioning/deploying it.
Harms include: System review/update; System suspension; System/product recall; System termination; Internal/external assessment; Internal/external audit; Internal/external investigation; Employee complaints/petitions; Employee walk-outs/strikes; Employee loss/churn
Financial. Impact of a system on the finances of the organisation/individual commissioning/deploying it.
Harms include: Revenue loss; Margin loss; Market share loss; Share price loss; Liability loss; Fines
Legal/regulatory. Impact of a system on the legal/regulatory/compliance activities of the organisation/individual commissioning/deploying it.
Harms include: Legal complaint/filing; Litigation; Regulatory inquiry/investigation; Legislative inquiry; Legislative questions/complaints.
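Taken together, the classifications above describe a record schema: one entry type, launch and occurrence dates, and multi-valued fields for countries, sectors, actors, technologies, purposes, triggers, issues, and harms. A minimal sketch, assuming a hypothetical Python dataclass; all names are illustrative and the field list is a simplification of the sheet and website, not AIAAIC's own data model.

```python
from dataclasses import dataclass, field

@dataclass
class RepositoryEntry:
    """Hypothetical record mirroring the AIAAIC classification fields (illustrative only)."""
    entry_type: str                # "Incident", "Issue", "System", or "Data"
    released: int | None = None    # Year the system/dataset was soft/formally launched
    occurred: int | None = None    # Year the incident/issue occurred, or first occurred
    countries: list[str] = field(default_factory=list)      # Geographic origin/primary extent
    sectors: list[str] = field(default_factory=list)        # e.g. "Health", "Politics"
    deployers: list[str] = field(default_factory=list)      # Day-to-day operators or host platforms
    developers: list[str] = field(default_factory=list)     # Designers, providers, commissioners
    system_names: list[str] = field(default_factory=list)   # System(s)/dataset(s) involved
    technologies: list[str] = field(default_factory=list)   # e.g. "Facial recognition"
    purposes: list[str] = field(default_factory=list)       # Stated, likely, or alleged aims
    media_triggers: list[str] = field(default_factory=list) # e.g. "Media investigation/coverage"
    issues: list[str] = field(default_factory=list)         # e.g. "Privacy", "Bias/discrimination"
    external_harms: list[str] = field(default_factory=list) # Individual, societal, environmental
    internal_harms: list[str] = field(default_factory=list) # Strategic, operational, financial, legal
```

On this sketch, a facial recognition incident might be recorded as RepositoryEntry(entry_type="Incident", occurred=2020, countries=["USA"], issues=["Privacy", "Surveillance"]).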
Last updated: February 16, 2025