Crisis Text Line shares users' mental health data with AI company

Occurred: January 2022


US-based mental health non-profit Crisis Text Line (CTL) shared users' confidential mental health data with an AI customer service company, sparking controversy.

Politico revealed that customer service company Loris.ai had been using CTL data and insights to develop, market and sell customer service optimisation software, prompting concerns about the non-profit's governance, ethics and transparency, and about the commercial nature of the two organisations' relationship.

Volunteers had earlier expressed concerns about CTL's handling of mental health conversation data, with one volunteer starting a Change.org petition pushing CTL "to reform its data ethics." It also transpired that CTL was a shareholder in Loris.ai and, according to Politico, had at one point shared a CEO with the company.

Under pressure from politicians, regulators, privacy experts and mental health practitioners, CTL ended the practice of sharing conversation data with Loris.ai.

Operator: Crisis Text Line

Developer: Crisis Text Line

Country: USA

Sector: NGO/non-profit/social enterprise

Purpose: Provide mental health support

Technology: Chatbot; NLP/text analysis

Issue: Privacy; Confidentiality; Security; Ethics; Transparency; Governance; Marketing

Page info
Type: Incident
Published: February 2022