Facebook algorithms accused of fueling anti-Rohingya hatred, violence
Occurred: 2012-
Page published: September 2024
Facebook's engagement-driven recommendation algorithms amplified anti-Rohingya hate speech and disinformation in Myanmar for years before and during the 2017 military genocide, contributing to mass atrocities against a vulnerable ethnic minority while the company ignored repeated warnings and prioritised profit over human safety.
Between 2012 and 2017, Facebook became the primary source of information in Myanmar, effectively becoming "the internet" for millions.
During this period, the Myanmar military and nationalist groups used the platform to flood the public sphere with virulent anti-Rohingya content, portraying the ethnic minority as "invaders" and "pests."
This campaign peaked in 2017, coinciding with a brutal military "clearance operation" that resulted in over 10,000 deaths and forced more than 700,000 Rohingya to flee to Bangladesh.
Investigations by the UN and Amnesty International found that Facebook’s algorithms did not just host this content but actively promoted it to users to maximise engagement.
The root cause was, and remains, a business model that prioritises user engagement and profit over human rights. Facebook’s ranking and recommendation algorithms were designed to amplify inflammatory and sensationalist content because it keeps users on the platform longer.
This was compounded by a severe lack of investment in local moderation; at the height of the crisis, the company had almost no Burmese-speaking staff or automated tools capable of detecting hate speech in the local language, despite years of warnings from civil society.
The incident highlights the "real-world" lethality of algorithmic amplification and the risks of tech companies operating in fragile, conflict-affected regions without adequate safeguards.
For society, it marks a turning point in the debate over corporate accountability, suggesting that platforms can be held responsible for contributing to international crimes.
For policymakers, it underlines the need for strict regulations on algorithmic transparency and mandatory human rights due diligence for Big Tech.
Facebook content moderation system
Developer: Facebook
Country: Myanmar
Sector: Religion; Politics
Purpose: Moderate content
Technology: Content moderation system; Machine learning
Issue: Accountability; Accuracy/reliability; Alignment; Fairness; Human/civil rights; Safety; Transparency
2012–2017. Civil society groups repeatedly warn Facebook executives about anti-Rohingya hate speech.
August 2017. Myanmar military launches a "clearance operation" against the Rohingya; violence is coordinated and fueled via Facebook.
March 2018. UN investigators state that Facebook played a "determining role" in the atrocities.
August 2018. Facebook admits it was "too slow" and removes accounts belonging to the Myanmar military.
November 2018. A United Nations report recommends Facebook and other social media platforms allow for “an independent and thorough examination” of how their networks were used to spread hatred in Myanmar, and notes that the company had refused to provide country-specific data about hate speech on its platform. It also said that Facebook should conduct a human-rights assessment before it enters a new market.
December 2021. A group of Rohingya refugees initiates a USD 50 billion lawsuit against Facebook, claiming that its negligence facilitated the genocide: by allowing hate speech to proliferate and prioritising engagement over user safety, the company contributed to an environment in which violence against the Rohingya was normalised.
March 2022. Human rights advocacy group Global Witness finds that Facebook approved adverts containing hate speech and inciting violence against the Rohingya.
September 2022. Amnesty International publishes a report finding that Facebook's systems "proactively amplified and promoted content" inciting hatred against the Rohingya from as early as 2012.
January 2025. A whistleblower complaint is filed with the SEC alleging Meta misled shareholders about its role in the Myanmar crisis.
https://www.cjr.org/the_media_today/facebook-un-myanmar-genocide.php
https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/
https://www.reuters.com/article/us-facebook-myanmar-idUSKCN1NB06Z
https://www.wired.com/story/how-facebooks-rise-fueled-chaos-and-confusion-in-myanmar/
https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
AIAAIC Repository ID: AIAAIC0127