Koko AI mental health counselling 

Occurred: January 2023


Mental health non-profit Koko is in the spotlight for running a Discord-based 'experiment' that used GPT-3 to provide support to people seeking counselling, and for failing to obtain the informed consent of the 4,000 people using the system.

Users send direct messages to the Discord 'Kokobot', which asks several multiple-choice questions and then shares a person's concerns anonymously with someone else on the server, who can reply anonymously with a short message - either one of their own, or one automatically generated by GPT-3.

According to Koko CEO Rob Morris, 'Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute … [but] once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.'

During the backlash that ensued, critics asked whether an Institutional Review Board (IRB) had approved the experiment. In the US, it is illegal to conduct research on human subjects without their informed consent unless an IRB finds that the consent requirement can be waived.

In response, Morris said the experiment was exempt because participants opted in, their identities were anonymised, and an intermediary evaluated the responses before they were shared with the people seeking help.

Morris told Vice: 'We pulled the feature anyway and I wanted to unravel the concern as a thought piece, to help reign in enthusiasm about gpt3 replacing therapists.'

Operator: Koko
Developer: Koko
Country: USA
Sector: Health
Purpose: Provide mental health support
Technology: Large language model (LLM); NLP/text analysis; Neural networks; Deep learning; Machine learning
Issue: Ethics; Privacy
Transparency: Governance

Page info
Type: Incident
Published: January 2023