Fake AI videos show Ukrainian soldiers in "mass surrender"
Occurred: November 2025
Page published: November 2025
A coordinated disinformation campaign circulated AI-generated videos that falsely showed Ukrainian soldiers surrendering in large numbers near Pokrovsk, aiming to undermine morale and portray Ukraine as collapsing.
Multiple AI-generated videos circulated on platforms including TikTok, X, and Telegram, allegedly showing Ukrainian soldiers surrendering en masse near the frontline city of Pokrovsk.
The videos were crafted to appear authentic but contain telltale signs of AI generation and/or manipulation: unnatural facial expressions, synthetic voices, and, in some cases, visual glitches such as stretcher-like objects appearing to "float".
In one of the AI-generated videos, the purported soldiers speak in strange, high-pitched voices, wear no genuine military insignia, or display oddly placed national symbols.
The videos had captions in various languages and were promoted by a network of accounts, suggesting a coordinated campaign designed to amplify Kremlin-aligned narratives globally.
Ukrainian and independent fact-checkers confirmed that the videos are deepfakes bearing a Sora watermark. Ukrainian authorities have denied any mass surrenders, noting that fighting around Pokrovsk continues and that Ukrainian troops are holding their positions.
The primary motives for spreading these fake videos were to erode trust in Ukrainian leadership, frighten Ukrainian families, and weaken Western public opinion and support for Ukraine.
The use of AI enables propagandists to create hyperrealistic content at scale and in multiple languages, making disinformation more persuasive and difficult to detect.
Lapses in platform accountability, such as slow removal of deepfake content and weak detection measures, allowed these videos to spread and reach large international audiences before being debunked or taken down.
The incident underscores the growing weaponisation of generative AI for wartime propaganda and the persistent challenges social media companies face in policing harmful synthetic media.
For international audiences: The campaign aims to weaken support for Ukraine by making it appear that the war effort is collapsing. This could influence public opinion and political will in other countries.
For Ukraine's soldiers: The fake videos risk undermining trust in the military's strength, potentially damaging morale among both troops and civilians.
For information integrity: This case highlights how generative AI is becoming a powerful tool for disinformation. It underscores the urgent need for better detection tools, stronger platform policies, and media literacy campaigns.
For global security: As AI-generated propaganda becomes more common, it raises the bar for misinformation in conflict zones. Future wars may increasingly feature synthetic media as a frontline tool.
Developer: OpenAI
Country: Ukraine
Sector: Politics
Purpose: Undermine morale
Technology: Generative AI
Issue: Authenticity/integrity; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2132