Danish man uses AI chatbot to plan violent attack on his former father-in-law
Occurred: February 2025
Page published: December 2025
A Danish man used an AI chatbot to plan a violent attack on his former father-in-law, highlighting the apparent ease with which general-purpose AI models can be weaponised for malicious purposes and cause real physical harm.
An unnamed 22-year-old man assaulted his former father-in-law with a rubber hammer in a parking lot in Ringsted, Denmark.
The perpetrator planned the attack with the help of an AI chatbot, asking it how he could best harm his father-in-law without actually killing him.
AI chatbots usually refuse to advise on criminal acts, but the man circumvented the bot's guardrails by claiming he was gathering material for a book he wanted to write.
The man pleaded not guilty and said he had acted in self-defence. He was convicted of aggravated assault and sentenced to ten months in prison.
By bypassing the chatbot's safeguards, the man turned it into an effective tool for planning his assault.
For society: The incident demonstrates the danger AI chatbots can pose to the safety of individuals and communities, given that they can be relatively easily manipulated into providing information on highly harmful topics, such as how to carry out violent attacks.
For industry: Chatbot developers and deployers need to strengthen their efforts to make their products less vulnerable to jailbreaking and other forms of manipulative prompting.
Unknown
Developer:
Country: Denmark
Sector: Personal
Purpose: Plan violent assault
Technology: Generative AI
Issue: Alignment; Safety
AIAAIC Repository ID: AIAAIC2152