LinkedIn trains AI models without user consent

Occurred: September 2024


LinkedIn appears to have scraped its users' data to train its own AI models and those of its "affiliates" without informing users or obtaining their consent.

Revealed by 404 Media, the move follows an updated company privacy policy which explicitly states that user data from the platform will be used to train generative AI and other models developed by LinkedIn, Microsoft and other unnamed "affiliates". The policy adds that opting out will not affect any training that has already occurred using a user's personal data or content.

Alongside the update, the company provided an opt-out toggle and form. LinkedIn says it removes personal data from training sets and does not use data from residents of the EU, EEA, or Switzerland.

The update resulted in a public backlash, with critics accusing the company of being opaque, deceptive and unethical.

Some expressed concern that LinkedIn's claimed use of privacy-enhancing technologies may prove inadequate from a security perspective, whilst others argued that users should be compensated for their data being used to train AI models that benefit LinkedIn.

System πŸ€–

Documents πŸ“ƒ

Operator: LinkedIn users
Developer: LinkedIn
Country: Global
Sector: Multiple
Purpose: Train AI models
Technology: Machine learning
Issue: Confidentiality; Ethics/values; Privacy