ChatGPT models found to provide detailed weapons creation instructions
Occurred: October 2025
Page published: October 2025
ChatGPT can easily be manipulated into generating detailed directions for creating chemical, biological and nuclear weapons, according to researchers. The findings raise serious questions about public safety and security, and about OpenAI's governance of the chatbot.
An investigation by NBC News revealed that multiple versions of ChatGPT could be tricked using simple 'jailbreak' prompts into producing guidance for manufacturing explosives, biological agents, and nuclear devices.
The models tested provided not only technical steps but also advice on how to maximise harm and avoid detection:
gpt-oss-20b and gpt-oss-120b, open-weight models available for public download, proved highly susceptible, responding affirmatively to harmful requests 97.2 percent of the time.
o4-mini, an older but still widely used model, was deceived 93 percent of the time, despite having reportedly passed OpenAI's pre-release safety testing.
GPT-5-mini, a faster, cost-efficient variant of the flagship model, was vulnerable to the jailbreak prompts 49 percent of the time.
OpenAI's flagship GPT-5 model was reported to have successfully refused the harmful requests when targeted with the same jailbreaks.
While the instructions generated were often incomplete and fell short of comprehensive guides, their accessibility to inexperienced users poses significant risks, making it easier for malicious actors to develop weapons.
The safeguards designed by OpenAI and other AI developers are imperfect and can be circumvented by creative prompts or persistent attempts.
While OpenAI prohibits harmful use and claims to continually refine its models, the rapid pace of model development has outstripped effective regulatory oversight, allowing security flaws and inadequate controls to persist.
With ChatGPT and other chatbots lowering traditional barriers to acquiring dangerous knowledge, society faces the prospect of dangerous weapons being developed and misused by more states and non-state actors, potentially making the world more insecure and volatile.
NBC News' findings also highlight the urgent need for governments to develop and enforce comprehensive regulations and international agreements governing the development and deployment of AI technologies and preventing their misuse in weaponry.
Current legal frameworks are fragmented, and debates continue on how to regulate lethal autonomous weapons systems and ensure compliance with international humanitarian and other laws.
Developer: OpenAI
Country: Global
Sector: Govt - defence
Purpose: Create weapons
Technology: Generative AI; Machine learning
Issue: Accountability; Alignment; Safety; Transparency
AIAAIC Repository ID: AIAAIC2081