Whitebridge AI accused of producing inaccurate, invasive reputation reports
Occurred: September 2025
Page published: October 2025
A Lithuania-based company was criticised and subjected to a formal complaint for producing and selling AI-powered "reputation reports" on individuals that include invasive and fabricated personal information, without proper consent or transparency.
Whitebridge AI marketed and sold AI-generated reputation reports that compile data scraped from social media, news, and other online resources about anyone with an online presence.
These reports contain alleged personality traits, sensitive content flags such as "sexual nudity" or "dangerous political content," and photos; they are often of low factual quality and contain misinformation.
The company targeted both the affected individuals and third parties, selling these reports while denying people free access to their own data despite their legal rights under the European Union's GDPR. The result has been anxiety, the exposure of sensitive and potentially inaccurate data, and reputational damage.
Whitebridge AI's business model relies on scraping substantial amounts of personal information and generating reports in order to profit from users seeking to review or correct their own data.
Critically, Whitebridge AI claims compliance with data protection laws by appealing to its "freedom to conduct a business," but was found to disregard multiple GDPR provisions: it lacked a valid legal basis for the data processing, ignored data access requests, and required arbitrary "qualified electronic signatures" for data corrections, a requirement not mandated by law.
Individuals impacted by Whitebridge's reputation reports face emotional distress, invasion of privacy and reputational damage due to false or sensitive claims being made about them.
More broadly, the case highlights the urgent need for stronger transparency, accountability and legal enforcement in AI-driven data brokering.
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia
Whitebridge AI
Developer: Whitebridge AI
Country: Lithuania
Sector: Business/professional services
Purpose: Develop reputation reports
Technology: Agentic AI; Bot/intelligent agent; Machine learning
Issue: Accountability; Accuracy/reliability; Business model; Mis/disinformation; Privacy/surveillance; Transparency
AIAAIC Repository ID: AIAAIC2057