Investigative reporter Patrizia Schlosser targeted in deepfake porn attack
Occurred: January 2025
German investigative reporter Patrizia Schlosser was targeted with deepfake pornographic images that depicted her in explicit and degrading scenarios on the MrDeepFakes website.
Patrizia Schlosser discovered approximately 30 degrading sexualised images of herself on MrDeepFakes, a notorious deepfake pornographic website known for hosting a vast array of similar content.
Despite knowing the images were fabricated, Schlosser says she felt deep shock and shame upon discovering them, and decided to confront the situation by attempting to track down the individual responsible for uploading them.
Working in conjunction with investigative non-profit Bellingcat and German YouTube channel STRG_F, Schlosser found that MrDeepFakes hosts tens of thousands of AI-generated videos and images, has close to 650,000 members, and that its content has been viewed almost two billion times.
The investigation also revealed that the site's administrators use cryptocurrency and PayPal to process payments in an attempt to stay out of the spotlight, possibly because they appear to be linked to Shenzhen Xinguodu Technology (aka "Nexgo"), a Chinese fintech company listed on the Hong Kong stock exchange.
The incident appears to be linked to Schlosser's investigative work on issues surrounding sexualised violence against women and reflects a broader trend in which women, particularly those in public roles or who report on sensitive topics, are subjected to deepfake attacks.
The AI technology involved allows a person's likeness to be manipulated into explicit content from a single image, making it possible for anyone to create extremely unpleasant and damaging material in seconds.
For Schlosser and others impacted, this incident underscores the severe personal and professional ramifications of the misuse of deepfake technology, and raises serious concerns about privacy, consent and the potential for reputational harm in the digital age.
More broadly, it highlights the difficulty of holding to account the individuals creating malicious deepfake images, the organisations that enable their creation, and those hosting and amplifying them.
Deepfake
Deepfakes (a portmanteau of 'deep learning' and 'fake') are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.
Source: Wikipedia
Unknown
Operator: MrDeepFakes
Developer:
Country: Germany
Sector: Media/entertainment/sports/arts
Purpose: Damage reputation
Technology: Deepfake - image
Issue: Accountability; Impersonation; Safety; Transparency
Page info
Type: Incident
Published: January 2025