ChatGPT invents breast cancer screening advice responses
Occurred: April 2023
ChatGPT has been found to fabricate information when asked about breast cancer screening, prompting doctors to warn users not to rely on the chatbot for medical advice.
Researchers at the University of Maryland School of Medicine asked ChatGPT to answer 25 questions about breast cancer screening advice, posing each question three separate times and having the responses analysed by radiologists trained in mammography.
According to the researchers, ChatGPT answered one in ten questions about breast cancer screening incorrectly, with some of its responses being ‘inaccurate or even fictitious’.
They also found that the correct answers the chatbot generated were not as ‘comprehensive’ as those surfaced by a simple Google search.
The findings highlighted concerns about ChatGPT's tendency to ‘hallucinate’ false information and prompted medical experts to warn users against relying on the system for medical advice.
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia
Operator: Hana L. Haver, Emily B. Ambinder, Manisha Bahl, Eniola T. Oluyemi, Jean Jeudy, Paul H. Yi
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide cancer screening advice
Technology: Chatbot; Generative AI; Machine learning
Issue: Accuracy/reliability; Mis/disinformation
Haver H.L., Ambinder E.B., Bahl M., Oluyemi E.T., Jeudy J., Yi P.H. (2023). Appropriateness of Breast Cancer Prevention and Screening Recommendations Provided by ChatGPT. Radiology.
Page info
Type: Issue
Published: December 2023