Study: Whisper AI transcription service invents medical treatments
Occurred: October 2024
OpenAI's AI-powered transcription tool Whisper is prone to generating fabricated text known as "hallucinations" when transcribing audio, prompting concerns about its use in hospitals and elsewhere.
What happened
Researchers discovered that Whisper is prone to making up chunks of text or even entire sentences, according to AP interviews with more than a dozen software engineers, developers and academic researchers.
The experts said some of the invented text, known in the industry as hallucinations, includes racial commentary, violent rhetoric and even imagined medical treatments.
In one instance, Whisper invented a non-existent medication called “hyperactivated antibiotics.”
OpenAI nonetheless describes Whisper as approaching “human level robustness and accuracy.”
Why it happened
The exact cause of Whisper's hallucinations is not fully understood, but software developers suggest they tend to occur during pauses, amid background sounds, or when music is playing.
However, the problem persists even in well-recorded, short audio samples, with one study uncovering 187 hallucinations in over 13,000 clear audio snippets.
A University of Michigan researcher conducting a study of public meetings said he found hallucinations in eight out of every 10 audio transcriptions he inspected.
Another study found that nearly 40 percent of the identified hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.
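For context on how suspect passages might be surfaced for human review, the sketch below uses the open-source `whisper` Python package, whose output includes per-segment decoder statistics (no_speech_prob, avg_logprob, compression_ratio). The audio file name and thresholds are illustrative assumptions (the thresholds mirror the package's own decoding-fallback defaults); flagging segments this way is a heuristic, not a reliable hallucination detector, and is not part of the studies cited above.

```python
# Sketch: transcribe audio with the open-source `whisper` package and flag
# segments whose decoder statistics suggest possible hallucination, e.g.
# text emitted over audio the model itself rates as non-speech.
# Thresholds are illustrative, taken from the package's decoding-fallback
# defaults; they do not guarantee detection of fabricated text.
import whisper

model = whisper.load_model("base")      # small model, for demonstration only
result = model.transcribe("visit.wav")  # hypothetical audio file

for seg in result["segments"]:
    suspicious = (
        seg["no_speech_prob"] > 0.6        # model rates this span as silence/noise
        or seg["avg_logprob"] < -1.0       # low decoder confidence
        or seg["compression_ratio"] > 2.4  # highly repetitive output
    )
    flag = "REVIEW" if suspicious else "ok"
    print(f'[{seg["start"]:7.2f}-{seg["end"]:7.2f}] {flag}: {seg["text"].strip()}')
```

In a clinical workflow, spans flagged this way could be routed to a human reviewer rather than accepted automatically.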
What it means
The prevalence of hallucinations in Whisper's transcriptions raises patient safety concerns: invented treatments or misinterpreted diagnoses could have serious consequences for patients and their care.
Experts, advocates and former OpenAI employees have been calling for better safeguards and verification processes, especially in critical applications like medical transcription. There have also been calls for the US federal government to develop regulations to address these issues.
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia 🔗
System 🤖
Operator: Children's Hospital Los Angeles; Mankato Clinic, Minnesota
Developer: Nabla; OpenAI
Country: USA
Sector: Health
Purpose: Recognise speech; Transcribe speech
Technology: Chatbot; Generative AI; Machine learning; Speech-to-text; Speech recognition
Issue: Accuracy/reliability; Bias/discrimination; Mis/disinformation
News, commentary, analysis 🗞️
https://www.independent.co.uk/news/openai-ap-san-francisco-experts-microsoft-b2635996.html
Page info
Type: Issue
Published: October 2024