North Korean hackers use ChatGPT to make deepfake military IDs
Occurred: July 2025
Page published: September 2025
Suspected North Korean state-sponsored hackers used ChatGPT to generate realistic deepfake military ID images as part of a sophisticated phishing campaign against South Korean defence and research targets.
Hackers from the Kimsuky group bypassed ChatGPT's safeguards through prompt engineering to produce fake South Korean military identification cards, which were attached to spear-phishing emails impersonating official defence institutions.
Sent to journalists, researchers and human rights activists, the emails contained links and attachments with malware designed to extract sensitive data from devices.
While the exact number of victims is unclear, the incident highlights how deepfake documents increase the credibility and impact of phishing attacks, risking personal, organisational, and national security.
The hackers manipulated ChatGPT prompts to generate government ID imagery despite built-in restrictions prohibiting such outputs.
North Korean cyber units like Kimsuky are known for evolving tactics to evade international sanctions and obtain intelligence.
The incident demonstrates how easily ChatGPT can be manipulated to produce convincing forgeries, escalating the sophistication of impersonation and phishing threats worldwide.
It raises important questions about ChatGPT's governance and the resilience of its safeguards against misuse.
Developer: OpenAI
Country: South Korea
Sector: Govt - defence
Purpose: Create fake military identity images
Technology: Chatbot; Generative AI
Issue: Security
Incident no: AIAAIC2034