Meta automated ad systems enable massive fraud
Occurred: January 2024-
Page published: December 2025
Meta’s automated advertising infrastructure generates up to 10 percent of the company's annual revenue from fraudulent and prohibited ads, effectively enabling and monetising a global "scam economy" through deliberate policy choices and algorithmic feedback loops.
Internal Meta documents reviewed by journalists and regulators indicate that Meta projected around 10.1 percent of its 2024 annual revenue, roughly USD 16 billion, would come from ads for scams and banned goods, with about USD 7 billion a year attributed specifically to “higher risk” scam ads.
These documents describe an ecosystem in which Facebook, Instagram, and other Meta properties serve an estimated 15 billion high‑risk scam ads to users every day, many of which promote fraudulent e-commerce schemes, "pig butchering" investment scams, illegal gambling, deepfake celebrity endorsements, and the sale of banned medical products.
Much of this activity is mediated by Meta’s automated ad products (including Advantage+ campaigns) and AI‑driven optimisation, which decide who sees which ads and at what price.
Rather than automatically blocking all suspicious behaviour, Meta’s systems reportedly place suspected fraudsters into a “higher risk” bucket, keep their campaigns live, and charge them penalty rates, turning ongoing fraudulent and invalid traffic into recurring revenue. Meta’s ad-targeting systems also over-serve these ads to users who have previously clicked on scams.
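The pricing behaviour described above can be illustrated with a minimal sketch. This is not Meta’s code: the risk threshold, the penalty multiplier, and all names are illustrative assumptions, used only to show how a “higher risk” bucket combined with penalty pricing converts suspected fraud into revenue rather than removing it.

```python
# Illustrative sketch only -- not Meta's implementation.
# Threshold, multiplier, and names are hypothetical assumptions.
from dataclasses import dataclass

RISK_THRESHOLD = 0.8      # assumed cut-off for the "higher risk" bucket
PENALTY_MULTIPLIER = 1.5  # assumed surcharge applied instead of a ban

@dataclass
class Advertiser:
    advertiser_id: str
    fraud_score: float    # output of an upstream fraud classifier, 0..1
    base_cpm: float       # standard price per 1,000 impressions (USD)

def price_campaign(advertiser: Advertiser) -> tuple[str, float]:
    """Return (bucket, effective CPM) for an advertiser.

    Mirrors the behaviour described in the reporting: suspected
    fraudsters are not blocked outright; they are moved to a
    "higher risk" bucket and charged a penalty rate, so their
    spend continues to flow as revenue.
    """
    if advertiser.fraud_score >= RISK_THRESHOLD:
        return "higher_risk", advertiser.base_cpm * PENALTY_MULTIPLIER
    return "standard", advertiser.base_cpm

# Example: a suspected scammer keeps serving ads, at a higher price.
suspect = Advertiser("adv_123", fraud_score=0.92, base_cpm=4.00)
print(price_campaign(suspect))  # ('higher_risk', 6.0)
```

The point the sketch makes is that the system’s output is a price adjustment rather than a removal decision, which is why the mechanism monetises suspect traffic instead of stopping it.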
Internal documents suggest Meta executives anticipated regulatory fines of up to USD 1 billion but viewed them as negligible compared to the USD 16 billion in "violating revenue."
The incident reflects structural incentives and accountability gaps in Meta’s ad infrastructure.
Ad review and enforcement rely heavily on automated AI classifiers trained to detect policy violations, with limited proactive human review for high-risk categories. Fraudulent advertisers exploit this by iterating rapidly, using synthetic identities, AI-generated creatives, and subtle variations that evade detection.
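The review flow described here can be pictured as an automated classifier gating almost all decisions, with only a narrow slice routed to human review. The sketch below is again purely illustrative: the category list, score thresholds, and function names are assumptions, not Meta’s actual pipeline.

```python
# Illustrative sketch only -- categories, thresholds, and names are assumptions.
from typing import Literal

Decision = Literal["approve", "reject", "human_review"]

HIGH_RISK_CATEGORIES = {"financial", "health", "gambling"}  # assumed
AUTO_REJECT_SCORE = 0.95   # assumed: auto-reject only when very confident
REVIEW_QUEUE_SCORE = 0.80  # assumed: send borderline high-risk ads to humans

def review_ad(violation_score: float, category: str) -> Decision:
    """Automated-first review: most ads never reach a human.

    A classifier score drives the decision; only high-risk categories
    with borderline scores are queued for (limited) human review.
    Rapidly iterated creatives that score just below the thresholds
    are approved automatically, which is the gap scam campaigns exploit.
    """
    if violation_score >= AUTO_REJECT_SCORE:
        return "reject"
    if violation_score >= REVIEW_QUEUE_SCORE and category in HIGH_RISK_CATEGORIES:
        return "human_review"
    return "approve"

print(review_ad(0.78, "financial"))  # 'approve' -- evades both thresholds
```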
Under Meta’s revenue model, ads are approved and distributed automatically at massive scale, reducing friction for advertisers while externalising risk to users.
Limited transparency about ad approval processes, enforcement effectiveness, and advertiser verification has constrained independent scrutiny and regulatory oversight.
For those directly impacted, Meta's ad system creates and normalises an environment in which scams appear highly personalised, credible, and persistent, increasing the likelihood of financial and psychological harm with little or no recourse. Scam ads often appear indistinguishable from legitimate promotions and benefit from the platform’s implicit credibility.
For society, the incident illustrates how an opaque, profit-optimised AI infrastructure can function as a global fraud-enabling and fraud-benefitting system. It highlights the need for stronger governance of high-risk automated decision-making, including stricter advertiser verification, human oversight for sensitive categories, enforceable transparency obligations for AI-driven advertising platforms, and stronger liability regimes.
Developer: Meta
Country: Global
Sector: Multiple
Purpose: Manage advertising process
Technology: Advertising management system; Machine learning
Issue: Accountability; Alignment; Normalisation; Transparency
https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/
https://www.marketingbrew.com/stories/2025/11/10/meta-scammy-ads-reuters-report
AIAAIC Repository ID: AIAAIC2163