Crisis Text Line shares users' mental health data with AI company
Occurred: January 2022
Page published: February 2022
Crisis Text Line (CTL), a non-profit crisis support text service, shared anonymised conversation data with an affiliated for-profit AI company to help train machine learning models, drawing criticism over transparency, consent, and the ethical handling of highly sensitive mental health information.
Politico reported that customer service software company Loris.ai had been using CTL data and insights to develop, market and sell customer service optimisation software.
It also transpired that CTL was a shareholder in Loris.ai and, according to Politico, the non-profit and for-profit entities had overlapping leadership and ownership ties. CTL's terms of service included a lengthy disclosure stating that users "consented" to data sharing by continuing to use the service.
Volunteers had earlier expressed concerns about CTL's handling of mental health conversation data, with one civil engineer volunteer starting a Change.org petition pushing CTL "to reform its data ethics."
Under pressure from politicians, regulators, privacy experts and mental health practitioners, CTL ended the practice of sharing conversation data with Loris.ai.
The incident stemmed from an attempt both to leverage large-scale, unique crisis data for social benefit (improving empathy in conversations generally) and to secure a revenue stream for the non-profit by commercialising the insights through a for-profit entity.
CTL justified the arrangement as a way to leverage what it called "the largest mental health data set in the world": millions of anonymised conversation transcripts between people in crisis (experiencing suicidal ideation, abuse, etc.) and volunteer counsellors. Training machine learning tools on this data, it argued, could extend its de-escalation and empathy capabilities beyond the service and potentially generate support for its mission.
However, the shared history, leadership, and financial relationship between the non-profit and its for-profit spinoff created a fundamental conflict of interest, in which the ethical obligation to protect vulnerable users was compromised by commercial objectives.
For the individuals directly impacted, notably people sharing highly sensitive mental health information, the incident highlights concerns about how non-profit crisis data can be repurposed for the development of commercial AI, and whether users truly understand or agree to such use.
For society, the incident underscores the need for stronger ethical standards, transparency and clear consent practices around mental health data, especially when it is used in machine learning and AI training contexts. Critics argue that even anonymised datasets derived from deeply personal conversations demand higher privacy protections and accountability from non-profit and private entities alike.
Crisis Text Line (2022). On Mental Health, Data, and Why Your Privacy Matters
Crisis Text Line (2022). An Update on Data Privacy, Our Community and Our Service
Danah Boyd (2022). Crisis Text Line, from my perspective
Developer: Crisis Text Line
Country: USA
Sector: Health; NGO/non-profit/social enterprise
Purpose: Provide mental health support
Technology: Chatbot; NLP/text analysis
Issue: Accountability; Confidentiality; Privacy; Transparency
https://www.politico.com/news/2022/01/28/suicide-hotline-silicon-valley-privacy-debates-00002617
https://www.politico.com/news/2022/01/31/crisis-text-line-ends-data-sharing-00004001
https://www.theverge.com/2022/1/31/22906979/crisis-text-line-loris-ai-epic-privacy-mental-health
https://www.komando.com/security-privacy/suicide-hotline-caught-selling-caller-data/824248/
https://www.popsci.com/technology/crisis-text-line-stops-sharing-data-loris-ai/
AIAAIC Repository ID: AIAAIC0821