Facebook fails to manage Christchurch mosque shooting livestreaming
Occurred: March 2019
Page published: September 2024
Facebook's failure to control the spread of videos of the livestreamed 2019 mass shootings in Christchurch, New Zealand, caused profound psychological trauma to victims' families and the global Muslim community, and highlighted the catastrophic failure of AI-powered content moderation systems to prevent the dissemination of extreme violence.
Australian gunman Brenton Tarrant attacked the Al Noor and Linwood mosques in Christchurch, New Zealand, during Friday prayers, livestreaming the massacre on Facebook Live using a head-mounted GoPro camera and a smartphone streaming app called LIVE4.
The stream lasted 17 minutes; across the two mosque attacks, 51 people were killed and 40 others were injured.
Despite the graphic nature of the content, Facebook’s automated systems failed to flag the video; it was only removed after the New Zealand Police alerted the company.
In the 24 hours that followed, Facebook removed or blocked 1.5 million copies of the video, and re-uploads spread to other platforms as bad actors re-encoded and visually edited the footage to evade the hash-based ("digital fingerprint") automated filters.
The graphic nature of the livestream, which depicted the mass murder from a first-person perspective, shocked viewers and caused widespread anxiety and trauma. It prompted heated criticism of social media companies for their inability to effectively manage and remove such content.
Following the attack, there was a noted increase in hate crimes against Muslim communities in a number of countries. The fear of reprisals or copycat attacks also became a significant concern for many communities, particularly those targeted by the shooter’s ideology.
Facebook's AI content moderation systems were not equipped to detect the footage in real time. Facebook said it struggled to identify the video because of the use of a head-mounted camera, which made it harder for its systems to automatically detect the nature of the content. A first-person shooter perspective "was a type of video we had not seen before," Facebook's public policy director told British lawmakers.
A Facebook executive acknowledged that "our algorithms are having to learn literally on the fly the second the incident happens without having the benefit of lots and lots of training data on which to have learned."
Critics pointed to a deeper accountability problem. The Association of New Zealand Advertisers asked: "If site owners can target consumers with advertising in microseconds, why can't the same technology be applied to prevent this kind of content being streamed live?" Analysts also noted that tech companies had long prioritised copyright infringement over harmful and dangerous content in their moderation systems, reflecting misaligned corporate priorities.
Facebook came under particular fire for acting too slowly and not having appropriate measures in place to prevent the mass sharing of the shooter's livestreamed video.
There were also transparency failures: Facebook was criticised for not reporting crimes of this nature to police, with the UK's counter-terrorism chief telling a parliamentary committee that social media companies do not report incidents that clearly break the law.
The incident highlighted concerns about how social media platforms handle violent content and the effectiveness of their increasingly AI-powered content moderation systems. Critics argued that Facebook and other firms must improve their ability to detect and remove extremist material and take more responsibility for it on their services.
In response, New Zealand and other countries began considering stricter regulations for social media platforms and content hosts, including potential fines and imprisonment for executives who fail to remove violent imagery promptly.
Facebook later introduced a "one-strike" policy for its Live feature, shared more than 800 hashes (digital fingerprints) of distinct variants of the video with the Global Internet Forum to Counter Terrorism (GIFCT), and retrained its AI systems on police body-camera footage of firearms training.
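To illustrate why edited re-uploads defeated automated filtering and why hundreds of variant hashes had to be shared, the sketch below contrasts an exact cryptographic fingerprint with a simple perceptual hash compared by Hamming distance. It is a minimal, hypothetical example: the frame values, hash scheme, and threshold are illustrative assumptions, not Facebook's or GIFCT's actual systems.

```python
# Hypothetical sketch: exact fingerprints break on any edit, while a simple
# perceptual hash tolerates small changes. Not Facebook's or GIFCT's real code.
import hashlib

def average_hash(frame):
    """Perceptual hash: one bit per pixel, set if the pixel exceeds the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count of differing bits between two perceptual hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def exact_fingerprint(frame):
    """Cryptographic hash: a single changed pixel yields a completely different digest."""
    return hashlib.sha256(bytes(p for row in frame for p in row)).hexdigest()

# A tiny 4x4 grayscale "frame" standing in for a video frame (hypothetical values).
original = [[200, 190, 30, 20],
            [210, 185, 25, 15],
            [205, 195, 35, 10],
            [198, 188, 28, 22]]

# The same frame after a slight brightness edit, as in an evasive re-upload.
edited = [[p + 5 for p in row] for row in original]

print(exact_fingerprint(original) == exact_fingerprint(edited))   # False: exact match defeated
distance = hamming_distance(average_hash(original), average_hash(edited))
print(distance <= 2)                                               # True: perceptual match survives
```

Real systems fingerprint many frames and the audio track and match them against a shared industry database, but sufficiently aggressive cropping, re-framing or overlays can still push the distance past any threshold, which is why so many distinct variants of the Christchurch video had to be hashed and shared separately.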
Facebook content moderation system
Developer: Facebook
Country: New Zealand
Sector: Religion; Politics
Purpose: Moderate content
Technology: Content moderation system; Machine learning
Issue: Accountability; Accuracy/reliability; Mis/disinformation; Safety; Transparency
March 15, 2019. Shooter posts links to his manifesto and Facebook profile on 8chan and Twitter, and then attacks the Al Noor Mosque.
March 15, 2019 (13:57). The 17-minute livestream ends.
March 15, 2019 (14:09). Facebook receives its first user report on the video.
March 15, 2019 (post-attack). New Zealand Police alert Facebook, and the video is removed.
March 17, 2019. Facebook announces it removed 1.5 million versions of the video in 24 hours.
May 15, 2019. Adoption of the "Christchurch Call to Action" by world leaders and tech giants.
October 2021. Leaked documents reveal Facebook significantly retrained its AI content moderation systems, reducing detection time for similar events from minutes to seconds.
https://www.nzherald.co.nz/business/news/article.cfm?c_id=3&objectid=12217454
https://www.wired.com/story/christchurch-shooter-youtube-radicalization-extremism/
https://www.nytimes.com/2019/03/15/technology/facebook-youtube-christchurch-shooting.html
https://time.com/5589478/facebook-livestream-rules-new-zealand-christchurch-attack/
AIAAIC Repository ID: AIAAIC0217