AI nudification bots swamp Telegram
Occurred: October 2024
Page published: October 2024
Telegram saw a surge in the availability and use of AI-powered bots that create nude images of individuals, including minors, raising concerns about privacy and the physical and psychological safety of victims, according to multiple investigations.
A WIRED investigation identified more than 50 bots that generate non-consensual fake nude images of women from ordinary photos, together drawing an estimated 4 million monthly users.
An earlier Balkan Insight (BIRN) investigation reported that tens of thousands of Telegram users in Serbia were sharing images of women ‘undressed’ by artificial intelligence. One such channel - promoting the ClothOff nudification app - had over 535,000 subscribers.
A February 2025 investigation by Nucleo found 23 Telegram bots actively creating AI-generated child sexual abuse material (CSAM), challenging the company's promises to crack down on such criminal content.
The victims often experience severe emotional distress, humiliation, fear and long-term psychological trauma due to the non-consensual nature of the images.
In some cases, the images are used to extort money.
Telegram's platform hosts a multitude of channels and bots that facilitate the creation and sharing of these images, with users incentivised through gamification elements that reward them for creating and sharing deepfake images.
Telegram reportedly does little or nothing to stop these channels or their use.
Meanwhile, many legal jurisdictions lack specific regulations addressing AI-altered images, leaving victims vulnerable to exploitation without legal recourse.
The findings raise significant concerns about the privacy and psychological well-being of victims, particularly young girls and women.
The situation highlights the immediate need for up-to-date legislation that addresses the unique challenges posed by AI-generated content.
While some initiatives have been proposed, such as the US DEEPFAKES Accountability Act, most have not been signed into law, and questions remain about the effectiveness of those that have.
Developer: Alaiksandr Babichau, Alexander German, Dasha Babicheva, Yevhen Bondarenko
Country: Croatia; Kosovo; Montenegro; Serbia; USA
Sector: Media/entertainment/sports/arts
Purpose: Nudify women
Technology: Deepfake; Machine learning
Issue: Accountability; Consent; Privacy/surveillance; Safety; Transparency
April–July 2020. Sensity AI researcher Henry Ajder identifies one of the first Telegram nudification bots, which has already produced over 100,000 images, including images of minors; the bot ecosystem grows 198% during this period alone.
February 2024. Study identifies Telegram as the single most popular messaging app used by child sexual abuse offenders to search for, view, and share CSAM.
June 2024. Balkan Insight investigation reports that tens of thousands of Telegram users in Serbia are sharing images of women ‘undressed’ by AI.
August 2024. French authorities formally accuse Telegram of failing to cooperate with child exploitation investigations. Telegram CEO Pavel Durov is arrested; Telegram promises greater cooperation with law enforcement.
October 2024. WIRED publishes an investigation identifying at least 50 nudify bots with more than 4 million combined monthly users; after being contacted by journalists, Telegram removes 75 bots and channels identified in the report.
February 2025. Nucleo finds 23 Telegram bots actively creating AI-generated child sexual abuse material.
AIAAIC Repository ID: AIAAIC1774