Study: ChatGPT "systematically" amplifies global inequalities
Occurred: 2025
Page published: February 2026
An audit of 20 million queries found that ChatGPT systematically attributes positive traits such as "intelligence" and "safety" to wealthy Western countries while ranking low-income nations at the bottom, thereby reinforcing cultural stereotypes.
Researchers from the Oxford Internet Institute (University of Oxford) and the University of Kentucky analysed over 20 million ChatGPT queries to see how the model responds to comparisons about people, places, and quality-of-life attributes.
They found consistent patterns in which higher-income regions such as the United States, Western Europe, and parts of East Asia were presented more positively (e.g., "smarter," "happier," or "safer"), while many countries in Africa, the Middle East, and parts of Asia and Latin America tended to be ranked at the bottom.
The research visualised these disparities on world maps and in neighbourhood-level comparisons (e.g., in cities such as London and Rio de Janeiro), showing that ChatGPT's outputs aligned more with existing socioeconomic and racial divides than with substantive measures of quality or merit.
According to the researchers, these outcomes do not reflect random error but stem from fundamental biases in the training data, shaped by centuries of unequal information production, digital visibility, and power hierarchies, which are amplified by large language models.
Because ChatGPT learns from massive datasets that disproportionately represent Western, high-income areas and English-language sources, it tends to mirror that imbalance in its answers.
The study describes this as "silicon gaze" bias: a worldview shaped by the priorities of the developers, platform owners, and data that trained the model.
For society: Treating AI as a neutral source of knowledge risks "laundering" old prejudices through a high-tech interface. If a student or business leader asks an AI for advice on where to invest or which culture is "innovative," the AI functions as a feedback loop that rewards existing wealth and punishes historical disadvantage.
For policymakers: There is an urgent need for mandatory public bias audits. Policy frameworks (like those in the EU and US) must shift from viewing AI as a "tool" to viewing it as a "situated" entity shaped by power. Governments may need to mandate "provenance" for AI-generated claims about specific cultures or regions.
For the public: The study warns against "taking responses at face value." Users must be educated in digital literacy to understand that AI offers a skewed map of the world, not a mirror of reality.
Developer: OpenAI
Country: Multiple
Sector: Multiple
Purpose: Assess intelligence, happiness, quality of life
Technology: Generative AI
Issue: Bias/discrimination; Representation
AIAAIC Repository ID: AIAAIC2193