Grok, Google AI claim fake imagery shows Huntingdon train attack
Occurred: November 2025
Page published: November 2025
xAI’s Grok chatbot and Google Lens’s image-overview AI mistakenly validated fake imagery and video as genuine footage of the Huntingdon train stabbing, prompting concerns about misinformation during public incidents and crises.
Following the stabbing incident on 1 November on a train from Doncaster to London that was stopped at Huntingdon, Cambridgeshire, Grok told users that a widely shared image, showing a wounded man lying in a train carriage surrounded by paramedics and police, “appears to be a genuine photo.”
However, multiple red flags suggest the image is AI-generated: the text on the officers’ clothing is garbled, a stylised filter appears to have been applied, and the carriage seating does not match that of the actual train involved.
An X (formerly Twitter) account that circulated the photo even appeared to confirm it was AI-generated.
Google Lens’s AI overview likewise mischaracterised the image. In one search it claimed the image was “a still from a BBC News report” and linked to a BBC article, but that story does not actually include the image.
On separate occasions, Google Lens’s AI linked a second video, showing a confrontation on a train, to the Huntingdon attack, even though Full Fact argues the video is almost certainly not real footage of the incident.
The issue stems from limitations in how these AI systems detect and classify generated content. Grok, for example, has a documented history of misidentifying AI-generated images or videos as real.
There may also be a systemic vulnerability: an AI trained on broad internet data in which disinformation and manipulated media proliferate can reinforce and repeat false narratives. Full Fact notes that such errors are not new.
On the corporate side, accountability is limited: while users can flag Google Lens AI errors (e.g., via a “thumbs down”), more robust validation or oversight mechanisms do not appear to be in place or working effectively.
For the public/victims: The spread of AI-generated imagery misrepresented as real could distort public understanding of what actually happened during the attack, potentially leading to misinformation about who was harmed, how, and in what circumstances.
Broader societal impact: This incident highlights a growing “arms race” in which malicious actors may intentionally generate AI content tailored to fool other AI, especially as public reliance on AI for verification grows.
Misclassifying fakes as genuine increases the risk that conspiracy theories, propaganda, or manipulative narratives gain traction, especially around a traumatic and high-profile incident.
Errors like these undermine confidence in AI tools for fact-checking and verifying media. If widely used tools such as Grok or Google Lens produce false assurances, people may be misled or, conversely, lose trust in legitimate AI-driven fact-checks.
Developer: Google; xAI
Country: UK
Sector: Transport/logistics
Purpose: Validate incident footage
Technology: Generative AI
Issue: Mis/disinformation
AIAAIC Repository ID: AIAAIC2127