Investigation: X algorithm amplifies right-wing, extreme content in the UK
Occurred: May 2025
Page published: October 2025
A nine-month probe shows X’s (formerly Twitter) algorithm disproportionately surfaces right-wing and immigration-focused extremist material to UK users, raising concerns about political distortion and the normalisation of hateful language, and heightening the risk of real-world harms.
Sky News ran a nine-month experiment in the UK using nearly 90,000 posts and simulated British accounts to test what X’s recommendation engine surfaces for new and existing users.
They report that the algorithm delivered a majority of right-wing political content (more than 60 percent in some comparisons), including material containing hateful or extreme language, and heavily amplified immigration-focused narratives and certain right-leaning figures.
It also boosted posts from politicians favoured by X owner Elon Musk.
The investigation documents that even accounts seeded with left-leaning signals were steered toward right-wing posts, suggesting the amplification reflects systemic bias rather than popularity alone.
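As a rough illustration of the kind of measurement such an experiment involves, the sketch below computes the share of recommended posts per political lean from a labelled sample. The field names, labels, and figures are assumptions for illustration only, not Sky News' methodology or X's data schema.

```python
from collections import Counter

def lean_share(recommended_posts):
    """Return the fraction of recommended posts per political lean.

    `recommended_posts` is assumed to be a list of dicts with a
    hypothetical "lean" label ("right", "left", "neutral") assigned
    by human coders or a classifier -- an illustrative schema only.
    """
    counts = Counter(post["lean"] for post in recommended_posts)
    total = sum(counts.values())
    return {lean: count / total for lean, count in counts.items()}

# Toy feed sample in which right-leaning posts dominate.
sample = ([{"lean": "right"}] * 62
          + [{"lean": "left"}] * 25
          + [{"lean": "neutral"}] * 13)
print(lean_share(sample))  # {'right': 0.62, 'left': 0.25, 'neutral': 0.13}
```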
Sky News identified several factors that contribute to this amplification:
Algorithmic opacity: Key parts of X's algorithm are undisclosed, preventing independent verification and public transparency.
Restricted access for researchers: X has limited academic access to its data API, hampering external oversight and accountability.
Owner influence: Since Elon Musk’s takeover, policy shifts and his personal preferences appear to shape algorithmic outcomes, with more left-wing voices leaving X or seeing reduced reach compared with their right-wing counterparts.
Algorithmic design: The algorithm weights engagement and content type heavily, but the investigation shows it favours “extreme language” and certain political orientations beyond what engagement alone explains (see the sketch after this list).
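The sketch below shows, under invented weights, how an engagement-weighted scorer that also boosts content flagged as inflammatory would rank such posts higher. It illustrates the general mechanism described above; it is not X's undisclosed algorithm, and every weight and field name is an assumption.

```python
# A minimal, hypothetical ranking sketch -- NOT X's undisclosed algorithm.
# It shows how engagement-weighted scoring, plus any extra weight on posts
# flagged for extreme language, pushes such posts to the top of a feed.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    reposts: int
    replies: int
    extreme_language: bool  # assumed label from a separate classifier

def score(post: Post,
          w_like: float = 1.0,
          w_repost: float = 2.0,
          w_reply: float = 1.5,
          extreme_boost: float = 1.3) -> float:
    """Engagement-weighted score with an illustrative multiplier for
    posts flagged as using extreme language. All weights are invented."""
    engagement = (w_like * post.likes
                  + w_repost * post.reposts
                  + w_reply * post.replies)
    return engagement * (extreme_boost if post.extreme_language else 1.0)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts by descending score, as a feed would."""
    return sorted(posts, key=score, reverse=True)
```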
For direct users, this means the information they see on X is slanted toward right-wing and extreme viewpoints regardless of their political interests, potentially reshaping their perceptions and public discourse.
Indirectly, such algorithmic bias risks turning the platform into a partisan propaganda tool rather than a space for diverse and balanced dialogue, heightening polarisation and undermining public trust and civic debate.
For society, the situation raises urgent questions about the accountability of social media platforms in democratic societies, the impacts of “black box” algorithms on public opinion, and the consequences of concentrated ownership over major online communication channels.
Recommendation system
A recommender system (RecSys), also called a recommendation system, engine, or platform, and sometimes simply “the algorithm”, is a subclass of information filtering system that suggests the items most pertinent to a particular user.
Source: Wikipedia
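To make the definition concrete, here is a minimal content-based recommender sketch over toy data: items whose tags overlap most with a user's history are suggested first. The item ids, tags, and overlap heuristic are assumptions for illustration and bear no relation to X's system.

```python
# Minimal content-based recommender sketch over toy data (illustrative only).

def recommend(user_history: set[str], catalogue: dict[str, set[str]], k: int = 3):
    """Rank catalogue items by tag overlap with the user's history.

    `catalogue` maps item id -> set of descriptive tags; `user_history`
    is the set of tags from items the user has already engaged with.
    """
    def overlap(item_id: str) -> int:
        return len(catalogue[item_id] & user_history)
    return sorted(catalogue, key=overlap, reverse=True)[:k]

catalogue = {
    "post_a": {"politics", "immigration"},
    "post_b": {"sport", "football"},
    "post_c": {"politics", "economy"},
}
print(recommend({"politics"}, catalogue, k=2))  # ['post_a', 'post_c']
```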
Developer: xAI
Country: UK
Sector: Politics
Purpose: Manipulate public opinion
Technology: Recommendation algorithm; Machine learning
Issue: Accountability; Mis/disinformation; Transparency
AIAAIC Repository ID: AIAAIC2113