Cambridge Analytica uses AI political manipulation to build Donald Trump support
Occurred: 2013–2025
Page published: May 2024
Cambridge Analytica exploited improperly obtained Facebook user data and AI-based psychological targeting tools to manipulate American voters with tailored political ads, significantly aiding Donald Trump's 2016 presidential campaign.
What happened
Cambridge Analytica harvested data from tens of millions of Facebook users, largely without their knowledge, between 2014 and 2016, and used machine-learning models to build detailed psychographic profiles of U.S. voters with its OCEAN personality profiling system, named for the "Big Five" traits: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
These profiles enabled highly personalised political advertising during the 2016 presidential campaign, including targeted messages intended to shape emotions, amplify polarisation, and increase support for Donald Trump.
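As background on how this kind of profiling works mechanically, the sketch below shows a minimal likes-to-personality regression of the sort described in the academic research this approach drew on (e.g. Kosinski, Stillwell and Graepel, 2013). Everything in it is an assumption for illustration: the data is synthetic, and the model, feature counts, and scoring are not Cambridge Analytica's actual pipeline.

```python
# Minimal, hypothetical sketch of OCEAN trait prediction from page "likes".
# Synthetic data throughout; not Cambridge Analytica's actual code.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5_000, 300

# Binary user-by-page matrix: 1 if the user "liked" the page.
likes = rng.binomial(1, 0.05, size=(n_users, n_pages)).astype(float)

# Simulated ground-truth OCEAN scores (in practice these came from
# personality-quiz respondents): a sparse linear signal plus noise.
weights = rng.normal(size=(n_pages, 5)) * (rng.random((n_pages, 5)) < 0.1)
ocean = likes @ weights + rng.normal(scale=1.0, size=(n_users, 5))

X_tr, X_te, y_tr, y_te = train_test_split(likes, ocean, random_state=0)

# One ridge regression per trait (Openness, Conscientiousness,
# Extraversion, Agreeableness, Neuroticism), fitted jointly.
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
print(f"Held-out R^2 (average over 5 traits): {model.score(X_te, y_te):.2f}")
```

The point of the sketch is only that, once quiz-takers supply labelled trait scores, a simple linear model can extrapolate those traits to everyone whose likes are observed, which is why harvesting data far beyond the quiz-takers themselves mattered.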
The incident occurred primarily in the United States but relied on data collection practices conducted globally.
Potential harms included the manipulation of democratic processes, the erosion of privacy, voter-suppression strategies directed at specific demographic groups, and long-term damage to trust in digital platforms and elections.
Why it happened
The incident stemmed from weak data-protection safeguards at Facebook, opaque third-party app permissions, and Cambridge Analytica’s deliberate misuse of personal data.
A lack of transparency in how AI-based targeting systems work, limited oversight of political advertising technologies, and the absence of meaningful accountability mechanisms allowed the company to weaponise behavioural prediction models without public scrutiny.
Corporate incentives to maximise engagement and political campaign effectiveness outweighed commitments to user privacy and democratic integrity.
What it means
For those directly affected, the incident represented a profound loss of control over personal data and exposure to manipulative political messaging tailored to psychological vulnerabilities.
Indirectly, it signalled how AI-driven political micro-targeting can distort public debate, entrench polarisation, and undermine confidence in electoral outcomes.
For society, it highlighted the urgent need for stronger regulation of data collection, algorithmic transparency, and political advertising, as well as broader safeguards to ensure AI technologies support, rather than subvert, democratic governance.
2013–2014. Cambridge University researcher Aleksandr Kogan’s Facebook app thisisyourdigitallife collects data from millions without consent; data is passed to Cambridge Analytica, despite Facebook rules prohibiting such sharing.
2014–2015. Cambridge Analytica creates machine learning models to classify voters by personality, fears, and persuasion vulnerabilities.
2016. Psychographic targeting used to tailor pro-Trump messaging, test thousands of ad variants, and identify "deterrence" groups (a simplified segmentation sketch follows this timeline).
Late 2016. Highly personalised ads and emotionally manipulative content flood key voter groups.
2017. Journalists uncover inconsistencies; Facebook quietly requests the deletion of Cambridge Analytica's data but does not verify that the deletion takes place.
March 2018. Former Cambridge Analytica employee Christopher Wylie reveals the scale of unauthorised data harvesting; mainstream media expose the operation.
March–May 2018. Facebook suspends Cambridge Analytica; regulators launch investigations; Zuckerberg testifies before the US Congress and the European Parliament.
May 2018. Cambridge Analytica and its parent SCL Group shut down amidst scandal and legal pressure.
2019–2022. Facebook/Meta is fined heavily, including a USD 5 billion penalty from the US Federal Trade Commission; new data protection, platform transparency, and political advertising rules are introduced globally.
November 2025. Meta settles Cambridge Analytica-related claims for USD 190 million.
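To make the targeting step in the 2016 entries concrete, here is a deliberately simplified, hypothetical segmentation rule of the kind the reporting describes. The class, function, thresholds, and bucket names are all invented for illustration and are not Cambridge Analytica's actual system.

```python
# Hypothetical voter-segmentation sketch; all names and thresholds invented.
from dataclasses import dataclass

@dataclass
class VoterProfile:
    neuroticism: float    # predicted OCEAN trait, scaled to 0-1 (assumed)
    support_score: float  # modelled probability of supporting the candidate

def segment(v: VoterProfile) -> str:
    """Assign a voter to a messaging bucket (illustrative logic only)."""
    if v.support_score > 0.7:
        return "mobilise"      # likely supporters: turnout reinforcement
    if v.support_score < 0.3:
        return "deterrence"    # reportedly targeted to depress turnout
    # Persuadable middle: reporting suggests ad creative was matched to
    # traits, e.g. fear-framed messaging aimed at high-neuroticism profiles.
    return "persuade-fear" if v.neuroticism > 0.6 else "persuade-rational"

print(segment(VoterProfile(neuroticism=0.8, support_score=0.5)))  # "persuade-fear"
```

However crude, a rule of this shape is enough to route each profile to one of thousands of pre-tested ad variants, which is what made the profiling consequential at campaign scale.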
OCEAN
Developer: Cambridge Analytica
Country: USA
Sector: Politics
Purpose: Manipulate public opinion
Technology: Machine learning
Issue: Accountability; Privacy; Transparency
AIAAIC Repository ID: AIAAIC0128