Violent spoof Peppa Pig videos bypass YouTube and YouTube Kids filters
Occurred: March 2017
Page published: November 2023 | Last updated: December 2025
Inappropriate knock-offs of Peppa Pig, Nickelodeon's PAW Patrol and other kids' TV shows bypassed YouTube's automated safety filters, frightening young children and disturbing their parents, according to an investigation.
A 2017 BBC investigation discovered hundreds of videos of well-known children's cartoon characters on YouTube and YouTube Kids, including Peppa Pig, PAW Patrol, Doc McStuffins, and Thomas the Tank Engine.
These videos mixed well-known characters with creepy and disturbing content, including animated violence and graphic toilet humour, and could pass for genuine cartoons when viewed by children. They evaded YouTube's screening software partly because their creators used animation and child-targeted keywords to circumvent it.
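The keyword tactic is easy to picture. Below is a minimal, hypothetical Python sketch; the keyword list and logic are invented for illustration and bear no relation to YouTube's actual screening software. It shows how a filter that routes content by child-targeted vocabulary, rather than by what a video depicts, is trivially gamed:

```python
# Illustrative sketch (hypothetical keyword list and logic) of how a naive
# keyword-driven filter can be gamed: creators stuff titles and tags with
# child-targeted terms, and the filter routes the video into kids' feeds
# without ever examining what the video depicts.

KID_KEYWORDS = {"peppa", "paw patrol", "nursery rhymes", "learn colors", "for kids"}

def looks_child_friendly(title: str, tags: list[str]) -> bool:
    text = (title + " " + " ".join(tags)).lower()
    return any(kw in text for kw in KID_KEYWORDS)

# A violent spoof passes simply by borrowing the right vocabulary.
print(looks_child_friendly(
    "Peppa Pig Dentist Visit - Nursery Rhymes for Kids",
    ["learn colors", "peppa"],
))  # True
```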
Young children were exposed to graphic violence, simulated abuse, medical torture, and fear-inducing imagery that was developmentally inappropriate. Reported effects included nightmares, anxiety, distress, and behavioural changes, with particular risk to very young viewers who lack the cognitive ability to distinguish parody or fiction from reality.
YouTube responded by advising parents to use its YouTube Kids app and to turn on 'restricted mode'. It also removed some of the videos flagged by the BBC. Months later, the company introduced a policy that age-restricted content deemed an inappropriate use of family cartoon characters on the main YouTube app when flagged.
The incident exposed a massive failure in automated content moderation and a lack of corporate transparency about how children's content was vetted.
Algorithmic blindness: YouTube relied almost exclusively on AI classifiers to sort content. These algorithms could recognise the "pixels" of Peppa Pig but could not understand the "context" or "narrative" of the violence taking place.
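To make the "pixels versus context" point concrete, here is an illustrative sketch. Everything in it (the Frame type, the character list, the scoring rule) is hypothetical and not based on YouTube's actual classifiers; it simply shows how a frame-level character recogniser with no model of narrative rates a violent spoof as highly child-friendly:

```python
# Minimal sketch (hypothetical, illustrative only) of the failure mode
# described above: a frame-level classifier that recognises characters but
# has no model of narrative context, so a video of a beloved character
# being tortured scores exactly like a genuine episode.

from dataclasses import dataclass

@dataclass
class Frame:
    characters: set[str]   # characters a vision model detected in the frame
    violent_action: bool   # narrative-level fact the classifier never sees

KID_FRIENDLY_CHARACTERS = {"peppa_pig", "paw_patrol", "elsa", "thomas"}

def naive_kid_score(frames: list[Frame]) -> float:
    """Fraction of frames containing a known children's character.

    This is the 'pixels, not context' problem: the score goes UP with
    every recognisable character, regardless of what is happening to them.
    """
    hits = sum(1 for f in frames if f.characters & KID_FRIENDLY_CHARACTERS)
    return hits / max(len(frames), 1)

# A spoof full of character frames scores as "safe" even though every
# frame depicts violence.
spoof = [Frame({"peppa_pig"}, violent_action=True) for _ in range(100)]
print(naive_kid_score(spoof))  # 1.0 -> classified as child-friendly
```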
Financial incentives: YouTube’s recommendation engine prioritised engagement and watch time. Because these videos were weird and high-energy, they kept children staring at screens, which the algorithm interpreted as "successful" content, subsequently rewarding creators with significant ad revenue.
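A hypothetical sketch of that incentive, with invented names and numbers rather than anything from YouTube's real recommender, shows how ranking purely on predicted watch time lets a disturbing but hypnotic spoof outrank a genuine episode:

```python
# Hypothetical sketch of the incentive problem: a recommender that ranks
# purely on predicted watch time. Names and weights are illustrative.
# Content safety never enters the score, so whatever holds a toddler's
# attention longest wins, and its creator earns more ad revenue.

def rank_videos(videos):
    # Sort by expected engagement only.
    return sorted(videos, key=lambda v: v["predicted_watch_minutes"], reverse=True)

candidates = [
    {"title": "Peppa Pig - Official Episode", "predicted_watch_minutes": 4.2},
    {"title": "PEPPA DENTIST SCARY (spoof)",  "predicted_watch_minutes": 9.7},
]

for v in rank_videos(candidates):
    print(v["title"])
# The spoof ranks first purely because it holds attention longer.
```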
Lack of accountability: Critics argued that YouTube treated its platform as a "neutral" host rather than a publisher with a duty of care. Until the public outcry, there was little proactive human review of the "YouTube Kids" ecosystem, which many parents assumed was a curated, "walled garden."
For children, the breach of trust in a "safe" digital space can have long-lasting effects on their media literacy and emotional stability. For parents, it destroyed the illusion that technology could act as a reliable "digital babysitter."
For society, the incident forced a global conversation on AI safety and corporate liability. It led to significant changes in US law (FTC/COPPA settlements) and forced YouTube to overhaul its moderation policies, eventually hiring thousands of human moderators and disabling comments on millions of children's videos to prevent predatory behaviour. It also proved that AI, without robust human oversight and ethical guardrails, can be easily "gamed" by bad actors to target the most vulnerable members of society for profit.
Elsagate
Elsagate (derived from Elsa and the -gate scandal suffix) is a controversy surrounding videos on YouTube and YouTube Kids that were categorized as "child-friendly", but contained themes inappropriate for children.
Source: Wikipedia
2014-2015. Anonymous channels begin uploading low-quality, surreal "nursery rhyme" videos. Creators discover that "mashups" of popular characters (e.g., Elsa, Spider-Man, Peppa Pig) trigger YouTube's recommendation engine, ensuring millions of views from toddlers.
May 2015. The Campaign for a Commercial-Free Childhood (CCFC), a coalition of children's and consumer advocacy groups, complained to the US Federal Trade Commission (FTC) about 'disturbing' and 'harmful' content on YouTube Kids when the app first launched.
March 2017. The BBC publishes an investigation into the "Peppa Pig problem", highlighting a video where the character is tortured by a dentist.
August 2017. In a (belated) first attempt to curb the trend, YouTube prohibits monetisation for channels that make "inappropriate use of family-friendly characters."
November 2017. Artist James Bridle publishes the essay 'Something is wrong on the internet', bringing the scale of the problem to light. Google conducts a YouTube "purge", deleting over 50 channels and 150,000 videos and disabling comments on millions of videos involving minors.
April 2018. A coalition of 23 consumer and child advocacy groups files a formal complaint with the US Federal Trade Commission (FTC), alleging that YouTube is profiling and targeting children under 13 without parental consent.
September 2019. The FTC and the New York Attorney General announce a record USD 170 million settlement with Google/YouTube for violating the Children's Online Privacy Protection Act (COPPA).
January 2020. YouTube begins treating all "Made for Kids" content differently: personalised ads, comments, and notification bells are disabled platform-wide to stop the data-tracking of minors.
YouTube content moderation system
Developer: YouTube
Country: USA; UK
Sector: Media/entertainment/sports/arts
Purpose: Moderate content
Technology: Content moderation system; Machine learning
Issue: Accountability; Alignment; Safety; Transparency
Federal Trade Commission v Google LLC, YouTube LLC
https://theoutline.com/post/1239/youtube-has-a-fake-peppa-pig-problem
https://medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2
https://www.nytimes.com/2017/11/04/business/media/youtube-kids-paw-patrol.html
https://www.itv.com/news/2017-11-10/youtube-moves-to-restrict-inappropriate-content-from-kids-app
https://www.cbc.ca/news/science/childrens-videos-filters-1.4412422
https://www.polygon.com/2017/12/8/16737556/youtube-kids-video-inappropriate-superhero-disney
AIAAIC Repository ID: AIAAIC0109