Google DeepMind, Royal Free London rapped for patient data sharing
Occurred: April 2016
Page published: November 2021 | Page last updated: December 2025
The Royal Free London NHS Foundation Trust improperly shared the personal medical records of approximately 1.6 million patients with Google subsidiary DeepMind without adequate legal basis or transparent consent, prompting regulatory action and broader debate over healthcare data governance.
In 2015, the Royal Free London NHS Foundation Trust entered a partnership with Google’s DeepMind to develop and deploy the Streams mobile application, designed to help clinicians detect acute kidney injury more rapidly. As part of this collaboration, the trust transmitted identifiable patient data, including names, NHS numbers, dates of birth and clinical information for roughly 1.6 million patients, to DeepMind for testing and development of the app.
The UK Information Commissioner’s Office (ICO) investigated following public complaints and media scrutiny, ruling in July 2017 that the data sharing “failed to comply with the Data Protection Act”: patients were not adequately informed, and the trust lacked a proper legal basis for processing their records, which were used for the app’s development and testing rather than solely for direct patient care.
While no fine was imposed, the ICO required the trust to sign an undertaking committing to corrective actions, including conducting a privacy impact assessment, commissioning an independent audit of the trial, and establishing a proper legal basis for future data processing.
The incident was driven by a "rush to innovate" that bypassed established regulatory safeguards:
Transparency failures: Neither the trust nor DeepMind was open about the arrangement; the deal only came to light after a New Scientist investigation in 2016.
Accountability gaps: DeepMind admitted it had "underestimated the complexity of the NHS" and had focused almost exclusively on building tools for clinicians while neglecting its accountability to the public.
Corporate ambition: Critics argued that DeepMind sought to "prove" its general AI capabilities and secure a foothold in the lucrative healthcare market through a "one-way mirror" approach: gaining access to public data without allowing the public to track how that data shaped corporate decision-making or commercial products.
For the patients whose data was shared, the ruling underscored the importance of clear notice, consent and legal frameworks governing the use of sensitive health information. It also raised concerns about patient trust and autonomy in digital health innovations.
For healthcare providers and technology partners, the case served as a cautionary lesson about strict compliance with data protection laws and the necessity of transparent engagement with patients and regulators when integrating AI and analytics into clinical settings.
More broadly, the incident contributed to intensified scrutiny of how tech firms access and use health data globally, influencing debates on data governance, privacy rights, and ethical frameworks for AI in healthcare. It highlighted that even well‑intentioned innovation must operate within robust legal and ethical boundaries to maintain public confidence and protect individual rights.
Streams
Royal Free London (2019). Information Commissioner’s Office (ICO) investigation
Royal Free London (2018). Royal Free London publishes audit into Streams app
Google DeepMind (2018). Scaling Streams with Google
Google DeepMind (2017). The Information Commissioner, the Royal Free, and what we’ve learned
Developer: Google; NHS
Country: UK
Sector: Health
Technology: Prediction algorithm
Purpose: Detect & predict acute kidney injury
Issue: Accountability; Alignment; Privacy; Security; Transparency
May 2016. New Scientist revealed that DeepMind had failed to secure approval from the Health Research Authority's Confidentiality Advisory Group or to register Streams with the Medicines and Healthcare products Regulatory Agency (MHRA).
July 2017. The UK Information Commissioner's Office ruled that the Royal Free hospital had failed to comply with the UK Data Protection Act when it shared the data, though it did not issue a fine on the basis that there was a lack of guidance for the sector.
September 2021. Law firm Mishcon de Reya announced it would bring a class action-style lawsuit against Google on behalf of the 1.6 million individuals whose medical records were shared.
October 2021. DeepMind apologised and stated that it had focused on building tools for clinicians, rather than considering how the project should have been shaped by the needs of patients.
May 2022. The original action, having been discontinued, was refiled as a new claim against Google and DeepMind for using the NHS data of 1.6 million Britons 'without their knowledge or consent'.
May 2023. Mishcon de Reya's refiled lawsuit was dismissed (pdf).
Prismall v Google (pdf)
Mishcon (2022). New claim against Google and DeepMind Technologies for unauthorised use of confidential medical records
Litigation Capital Management (2022). Litigation Finance Agreement for new representative claim against Google and DeepMind Technologies
UK Information Commissioner's Office (2017). Royal Free London undertaking (pdf)
UK Information Commissioner's Office (2017). ICO letter to Royal Free London (pdf)
Linklaters (2018). Audit of the acute kidney injury detection system known as Streams (pdf)
Shaping AI - University of Warwick (2023). Shifting AI controversies (pdf)
Powles J., Hodson H. (2017). Google DeepMind and healthcare in an age of algorithms
https://www.digitalhealth.net/2016/11/google-deepmind-and-royal-free-in-five-year-deal/
https://www.insider.com/nhs-discloses-how-much-its-paying-google-deepmind-2017-6
https://uk.news.yahoo.com/uk-class-action-style-suit-150419945.html
https://techcrunch.com/2022/05/16/google-deepmind-nhs-misuse-of-private-data-lawsuit
AIAAIC Repository ID: AIAAIC0105