AI/automation ethics glossary
Dual use
Dual use refers to the design and development of an AI/automated system for multiple purposes, or its use/misuse for purposes beyond its original stated purpose, including military and criminal use.
Dual use in AI describes the inherent double-edged nature of many AI capabilities.
A generative AI model trained to assist researchers can also help bad actors synthesise dangerous substances. A facial recognition system designed to find missing children can be repurposed for mass surveillance. A language model built to improve communication can produce convincing disinformation or phishing attacks at scale.
Unlike traditional dual-use technologies such as nuclear energy or chemistry, AI is a crossover technology with a multiplicative effect across other technologies, amplifying the reach, speed, and sophistication of both beneficial and harmful applications simultaneously.
Dual use matters because the social benefits of AI can be undermined if the same technology is used to cause large-scale harm. This can erode trust in institutions, increase insecurity, and make it harder to safely share useful innovation.
It also matters because the democratisation of powerful tools increases the risk of large-scale harm, destabilising the balance between innovation and public safety. These harms can spread quickly when AI is used for phishing, deepfakes, or automated cyber exploitation.
In conflict settings, dual-use AI can amplify risks to civilian safety and create accountability gaps when harmful outcomes occur.
The problem is compounded by the speed of AI development, which consistently outpaces international governance and national and regional regulation, and by the global, borderless nature of deployment, which makes containment highly challenging.
When dual-use norms break down, the consequences span multiple domains:
National security. Adversarial states and non-state actors can use AI tools to increase the speed, scale, and sophistication of weapons development, cyberattacks, and influence operations. Optimisation algorithms for delivery drones can be repurposed for "slaughterbots" or lethal autonomous weapons.
Public health. AI-assisted access to information about dangerous pathogens could lower the barrier for bioterrorism. Models that predict protein folding for drug discovery can be "inverted" to design novel biochemical toxins.
Economic harm. Dual-use AI tools used in fraud, market manipulation, and cybercrime can cause significant financial damage. In May 2023, a single AI-generated fake image of an explosion at the Pentagon caused the S&P 500 index to fall within minutes, a swing of roughly USD 500 billion in market capitalisation, even though the image was quickly debunked.
Information warfare. Large language models (LLMs) can easily be used to generate disinformation or phishing campaigns at mass scale. Equally, deepfake images and videos can be created and deployed to defame political candidates and sway election results.
The dual-use nature of many AI technologies stems from several factors:
Capability generalisation. Large foundation models are trained to be broadly capable, making it difficult to restrict specific harmful applications without degrading legitimate ones.
Openness of research and model weights. Publicly available models can be fine-tuned for harmful purposes without the original developer's knowledge or consent.
Weak access controls. Many powerful AI tools are available with few or no authentication or vetting requirements.
Insufficient red-teaming and pre-deployment evaluation. Harms are regularly discovered after a model's release rather than anticipated beforehand.
Commercial incentives. Companies prioritise market reach and rapid deployment over rigorous "know your customer" (KYC) checks and end-use monitoring.
Dual use in AI also raises a number of ethical tensions:
Openness versus safety. Restricting access to AI tools protects against misuse but limits legitimate research, innovation, and public benefit, especially for under-resourced communities and nations.
Developer responsibility versus user autonomy. To what extent are AI developers morally responsible for how their systems are used once released? And if we acknowledge that AI systems can be dual-use, what ought we to do differently in their design, development, deployment, and monitoring?
Precaution versus progress. Slowing AI development to manage dual-use risks may cede ground to less scrupulous actors domestically or internationally.
Knowledge democratisation versus harm enablement. Large language models may confer easy access to dual-use technologies capable of inflicting great harm, yet restricting them would also limit access to beneficial expertise for those who lack traditional educational pathways.
Civilian versus military use. AI developed for civilian purposes routinely finds military application. The EU AI Act explicitly excludes military-purpose AI systems from its scope, creating a significant governance gap.
Transparency versus security. Publishing details of AI capabilities and vulnerabilities can simultaneously serve as a roadmap for those who would exploit them.
Related terms
Mis/disinformation
Privacy/surveillance
Safety
Security
In the news
Chinese hackers use Anthropic AI agent to attack foreign entities
AI system manipulated to invent 40,000 biochemical warfare agents
Pentagon uses Anthropic's Claude to capture Venezuela president Maduro
Mass AI cheating uncovered at South Korea's Yonsei University
Ugandan police accused of using facial recognition to stifle Museveni term protests
Singapore Xavier patrol robots fuel surveillance state concerns
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or would like to suggest and/or contribute additional terms to define.
Author: Charlie Pownall
Published: May 11, 2026
Last updated: May 11, 2026