Study: Social work AI transcription tools wrongly indicate suicidal ideation
Occurred: 2024-
Page published: February 2026
AI transcription tools used by UK local authorities have been found to "hallucinate" false reports of suicidal ideation and "gibberish" in official social care records, potentially leading to life-altering incorrect decisions about child protection and adult care.
In an eight-month study of local councils in England and Scotland, Ada Lovelace Institute researchers found that AI-powered transcription tools deployed in social work settings across 58 UK councils regularly generate "hallucinations" (fabricated or incorrect text) when transcribing and summarising conversations between social workers and clients.
One social worker reported that an AI tool falsely recorded suicidal ideation that the client never actually expressed. Other errors included nonsensical words or phrases in place of actual spoken content, such as "fishfingers" for "parents fighting".
If left uncorrected, these AI-generated transcripts could influence future decision-making about care plans, safeguarding, or risk assessments, and unduly harm clients and their families.
The incident stems from the systemic prioritisation of efficiency over accuracy by local councils, combined with the technical limitations of Large Language Models (LLMs).
LLMs are designed to predict the next word in a sequence, not to verify facts. In high-stakes environments like social work, the AI often "fills in the gaps" with stereotypes or formal, "academic" language that misrepresents the person-centred nature of the actual conversation.
Many local authorities implemented these tools without rigorous evaluation of their impact on social care service users, focusing instead on return on investment and efficiency gains, such as increasing the number of home visits social workers could perform per day.
Some practitioners received as little as one hour of training. This led to inconsistent review practices, with some workers spending only minutes proofreading AI outputs before finalising legal documents.
For individuals and families, inaccurate records can lead to wrongful risk labelling (for example, being documented as suicidal), which may trigger intrusive interventions, stigma, or mistrust in social services, while also risking that real concerns are missed when transcripts are nonsensical or misleading.
For social workers, over‑reliance on AI threatens core reflective practices in writing and reviewing notes, and exposes them to disciplinary or legal risk for errors generated by systems they do not control or fully understand.
At a societal level, deploying fallible AI into highly sensitive child and adult protection work without robust testing, transparency, and oversight undermines public confidence in both AI and welfare institutions.
For policymakers, the incident highlights the need for regulation and guidance that treats AI transcription in social care as safety‑critical: mandating human review, transparency, auditable use, bias and error monitoring, and clear lines of accountability for vendors and public bodies when automated documentation harms people.
Developer: Beam; Microsoft
Country: UK
Sector: Govt - welfare
Purpose: Transcribe and summarise meetings and conversations
Technology: Generative AI; Speech-to-text; Speech recognition
Issue: Accountability; Accuracy/reliability; Automation bias; Bias/discrimination; Transparency
Ada Lovelace Institute. Scribe and prejudice? Exploring the use of AI transcription tools in social care
AIAAIC Repository ID: AIAAIC2195