CLIP computer vision system fooled by handwritten notes
Occurred: March 2021
OpenAI researchers discovered that their CLIP computer vision system could be deceived with nothing more sophisticated than a pen and paper: attaching a handwritten note to an object was often enough to make the system misidentify it.
Launched in January 2021, CLIP was billed as a state-of-the-art system that explores how AI can learn to identify objects without close supervision, trained on a dataset of 400 million image-text pairs scraped from the internet.
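For readers unfamiliar with how such training works, the sketch below illustrates the contrastive objective described in Radford et al. (2021): matching image and caption embeddings are pushed together and mismatched pairs apart. This is an illustrative reconstruction, not OpenAI's code; the embedding size and temperature are placeholders.

```python
# Illustrative sketch of a CLIP-style contrastive loss (not OpenAI's code).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    """Symmetric cross-entropy over cosine similarities for a batch in which
    image i and caption i form a matching pair (Radford et al., 2021)."""
    # Normalise so the dot product is cosine similarity
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Pairwise similarity matrix: entry (i, j) compares image i with caption j
    logits = image_features @ text_features.t() / temperature

    # The correct caption for image i sits at index i
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_images = F.cross_entropy(logits, targets)      # image -> text
    loss_texts = F.cross_entropy(logits.t(), targets)   # text -> image
    return (loss_images + loss_texts) / 2

# Random features stand in for encoder outputs
images = torch.randn(8, 512)   # batch of image embeddings
texts = torch.randn(8, 512)    # matching caption embeddings
print(clip_contrastive_loss(images, texts))
```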
The OpenAI study found that CLIP contains so-called 'multimodal neurons', which respond to an abstract concept whether it is presented as a word or as a picture, a property that makes the system more powerful. However, the same neurons potentially open it to abuse: scattering dollar signs over a photograph of a poodle, for example, could fool the system into classifying the dog as a piggy bank.
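A minimal sketch of such a 'typographic attack', using the open-source CLIP package released by OpenAI, is shown below. The image path, the overlaid word, and the candidate labels are illustrative assumptions; actual predictions depend on the image and model used.

```python
# Sketch of a typographic attack against CLIP (https://github.com/openai/CLIP).
# "apple.jpg" and the overlaid word "iPod" are hypothetical examples.
import torch
import clip
from PIL import Image, ImageDraw

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of an apple", "a photo of an iPod"]
text = clip.tokenize(labels).to(device)

def classify(pil_image):
    """Return zero-shot probabilities for each candidate label."""
    image = preprocess(pil_image).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).squeeze(0)
    return {label: float(p) for label, p in zip(labels, probs)}

original = Image.open("apple.jpg").convert("RGB")
print("clean image:", classify(original))

# Overlay a written note reading "iPod" and classify again
attacked = original.copy()
ImageDraw.Draw(attacked).text((20, 20), "iPod", fill="black")
print("with note:  ", classify(attacked))
```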
System
CLIP
Documents
OpenAI (2021). CLIP: Connecting text and images
OpenAI (2021). CLIP code
Radford, A. et al. (2021). Learning Transferable Visual Models From Natural Language Supervision
OpenAI (2021). Multimodal Neurons in Artificial Neural Networks
News, commentary, analysis
Page info
Type: Incident
Published: October 2023