Crisis Text Line shares users' mental health data with AI company
Occurred: January 2022
US-based mental health non-profit Crisis Text Line (CTL) shared users' confidential mental health data with an AI customer service company, sparking controversy.
Politico discovered that customer service company Loris.ai had been using CTL data and insights to develop, market and sell customer service optimisation software, prompting concerns about the non-profit's governance, ethics and transparency, and about the commercial nature of the two organisations' relationship.
Volunteers had earlier expressed concerns about CTL's handling of mental health conversation data, with one volunteer starting a Change.org petition pushing CTL “to reform its data ethics.” It also transpired that CTL was a shareholder in Loris.ai and, according to Politico, at one point shared a CEO with the company.
Under pressure from politicians, regulators, privacy experts and mental health practitioners, CTL stopped sharing conversation data with Loris.ai.
System 🤖
Operator: Crisis Text Line
Developer: Crisis Text Line
Country: USA
Sector: NGO/non-profit/social enterprise
Purpose: Provide mental health support
Technology: Chatbot; NLP/text analysis
Issue: Privacy; Confidentiality; Security; Ethics
Transparency: Governance; Marketing
Legal, regulatory 👩🏼‍⚖️
Research, advocacy 🧮
Change.org petition (2022). Ask Crisis Text Line to Reform Its Data Ethics
Documents 📃
Crisis Text Line (2022). On Mental Health, Data, and Why Your Privacy Matters
Crisis Text Line (2022). An Update on Data Privacy, Our Community and Our Service
Danah Boyd (2022). Crisis Text Line, from my perspective