Chatbots found to demonstrate significant caste bias in India
Occurred: September 2025
Page published: October 2025
High-profile AI-powered chatbots, including OpenAI's ChatGPT, show significant caste bias in India, perpetuating harmful stereotypes that affect millions, according to investigations.
OpenAI's ChatGPT and text-to-video generator Sora consistently reflect and reinforce caste-based prejudices, according to the MIT Technology Review. A researcher also found caste bias in Sarvam AI, which touts itself as a sovereign AI for India.
The systems were found to generate and perpetuate centuries-old oppressive stereotypes that associate marginalised castes, particularly Dalits, with negative traits such as being impure, uneducated, and limited to menial jobs, while privileging upper castes like Brahmins with positive attributes.
For example, Dhiraj Singha, a sociology researcher from a Dalit background, experienced AI changing his surname to a high-caste name when polishing his academic applications, reaffirming societal biases that define who is deemed “normal” or “fit” for academic success.
This digital reinforcement of caste-based exclusion risks discouraging marginalised individuals from pursuing opportunities, and exacerbating economic inequality and social exclusion.
The root cause lies in AI models being trained on uncurated web-scale data that reflects historical and societal prejudices, including casteism, which remain largely unaddressed by developers.
Despite India being OpenAI's second-largest market, testing for and mitigating caste bias appears not to be a priority, and safety filters have apparently been weakened in the latest GPT-5 model to allow stereotypical outputs rather than refusing to answer problematic queries.
The AI industry's neglect of non-Western biases such as caste - in contrast to the greater attention paid to gender and racial bias - represents a serious governance and accountability gap.
For those directly impacted, especially Dalits and other marginalised castes, the biases perpetuate discrimination and exclusion, potentially influencing opportunities and reinforcing social stigma.
For society, unchecked caste bias in AI systems risks systemically scaling deep-rooted inequalities through technology that is increasingly embedded in everyday life. It also highlights the need for these systems to be tailored to India's social context to ensure ethical, inclusive development.
AIAAIC Repository ID: AIAAIC2060