Study: Facebook fails to label 42 percent of debunked political misinformation
Occurred: October 2020
Page published: September 2024
Meta’s automated content moderation and labeling systems failed to identify and tag 42 percent of debunked political misinformation during critical election periods, allowing demonstrably false claims to spread without context, eroding voter trust and undermining democratic integrity.
Researchers at online advocacy group Avaaz discovered that Facebook failed to label 42 percent of debunked political misinformation, thereby helping the spread of false narratives during the 2020 election period.
The researchers calculated that Facebook could have prevented approximately 10.1 billion views on pages that disseminated misinformation had it implemented algorithm changes earlier.
The top 100 pages identified as "repeat misinformers" shared an average of eight confirmed pieces of misinformation each, with many refusing to correct their posts after being flagged by fact-checkers.
Facebook only intensified its efforts to combat misinformation in the weeks leading up to the election. Before then, the platform allowed substantial engagement with misleading content, including posts later found to be false that its AI-powered content moderation system had failed to address adequately.
Facebook disputed Avaaz's methodology, claiming it misrepresented the company's actions against misinformation. A spokesperson argued that the report failed to consider the context of individual pages and their content comprehensively.
Nonetheless, the report was widely seen as highlighting broader concerns about Facebook's role in shaping public discourse and its responsibility to mitigate the spread of false information that can influence democratic processes.
Facebook content moderation system
Developer: Facebook
Country: Global
Sector: Politics
Purpose: Identify, label and reduce the spread of misinformation
Technology: Content moderation system; Machine learning
Issue: Accountability; Accuracy/reliability; Human/civil rights; Transparency
https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html
https://edition.cnn.com/2020/10/09/tech/facebook-misinformation-loophole-avaaz/index.html
https://www.techtimes.com/articles/253228/20201011/how-to-get-past-facebooks-fact-checking-ai.htm
https://www.cjr.org/the_media_today/disinformation-facebook.php
https://www.nytimes.com/2020/10/14/technology/four-election-related-falsehoods.html
https://www.voanews.com/usa/us-politics/group-says-misinformation-rise-facebook
https://popular.info/p/an-explosion-of-fake-news-on-facebook
AIAAIC Repository ID: AIAAIC0332