AI/automation ethics glossary
Autonomous weapons
Autonomous weapons (also known as lethal autonomous weapons systems, or 'LAWS') are lethal weapons systems that can select and engage military and other human and non-human targets with little or no human control.
Autonomous weapons exist on a spectrum from semi-autonomous systems (where a human approves final action) to fully autonomous systems that operate from target identification to lethal engagement without any human input.
At the centre of the debate over these systems is the challenge of human control, in particular the concept of "meaningful human control" (MHC). MHC has become a defining feature of proposed LAWS regulation, yet the term presents significant hurdles: it is vague, lacks established practice, and enjoys no global consensus on its meaning.
New fully autonomous (unmanned) weapons systems in development reduce or remove the human role in control and decision-making, with the broader goal of taking humans off the active battlefield en masse. The Pentagon's Replicator programme, for example, promises a drastic shift in warfare towards highly autonomous, cooperative AI units.
Autonomous weapons challenge the most fundamental ethical and legal norms governing the use of lethal force.
International humanitarian law (IHL) assumes human judgement as the locus of responsibility. Article 36 of Additional Protocol I to the Geneva Conventions implicitly presumes human oversight in weapons deployment, while the doctrine of mens rea in war crimes jurisprudence necessitates conscious intent.
Autonomous weapons dissolve these foundations by introducing systems that process data without contextual moral comprehension.
At the heart of the debate lies the question of moral responsibility: if a machine independently makes a decision to kill, who is accountable — the programmer, the commander, the manufacturer, or the state?
Beyond individual conflicts, LAWS pose a risk to global geopolitical stability: by lowering the human costs of initiating and escalating conflict, they make war easier to start, and this effect grows as the systems become more capable.
These systems also challenge the principle of human dignity, the idea that a person should not be killed by a machine that cannot understand the value of life. Furthermore, they risk lowering the threshold for entering into war by reducing the physical risk to a nation's own soldiers.
Related terms: Privacy/surveillance, Safety
You are welcome to use, copy, adapt, and redistribute this definition under a CC BY-SA 4.0 licence.
Let us know if you have any comments or suggestions about how to improve this definition, or would like to suggest and/or contribute additional terms to define.
Author: Charlie Pownall
Published: May 11, 2026
Last updated: May 11, 2026