AI or Not misidentifies Hamas baby victim image as deepfake

Occurred: October 2023

AI image detector AI or Not has come under fire for wrongly labeling as AI-generated a photograph of a baby apparently burned alive by Hamas, raising questions about the tool's accuracy and marketing claims and fueling doubts about the integrity of Israel's communications.


California-based Optic's AI or Not tool identified a photograph of what Israel said was the burnt corpse of a baby killed in Hamas's 2023 attack on Israel as AI-generated. The image had been tweeted as genuine by US Jewish conservative pundit Ben Shapiro to his hundreds of thousands of followers, and the tool's verdict prompted suggestions that the Israeli government had deliberately been spreading disinformation.


Optic claims '95% accuracy' on its website. But deepfake experts, including UC Berkeley professor Hany Farid, said the image showed no signs of being a deepfake. Earlier in 2023, a test carried out by Bellingcat concluded that AI or Not performed 'impressively well' on high-quality, large, and watermarked AI images, but was 'unimpressive' when it came to compressed AI images.

Databank

Operator: Ben Shapiro
Developer: Optic
Country: Israel; Palestine; USA
Sector: Govt - military; Politics
Technology: Machine learning
Issue: Accuracy/reliability; Mis/disinformation
Transparency: Governance; Marketing

Page info
Type: Incident
Published: October 2023
Last updated: November 2023