CLIP computer vision system fooled by handwritten notes
OpenAI researchers discovered that their CLIP computer vision system could be deceived with tools as simple as a pen and paper: attaching a handwritten note naming a different object to an item was often enough to make the system misclassify it.
Launched in January 2021, CLIP was billed as a state-of-the-art system that explores how AI can learn to identify objects without close supervision, trained on a dataset of 400 million image-text pairs scraped from the internet.
The OpenAI study found that CLIP contains so-called 'multimodal neurons', which respond to an abstract concept whether it appears as a word or as a picture, making the system more powerful. However, the same neurons also open the system to abuse: rendering dollar signs over a photograph of a poodle, for example, fools it into classifying the dog as a piggy bank.
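CLIP classifies an image by comparing its embedding against text embeddings of candidate labels and picking the closest match. The following minimal sketch, using hypothetical 3-dimensional vectors rather than real CLIP features (which have hundreds of dimensions), illustrates why a strong text signal such as dollar signs in the image can flip the prediction:

```python
import numpy as np

def normalize(v):
    # Unit-normalize so the dot product equals cosine similarity.
    return v / np.linalg.norm(v)

# Hypothetical text-label embeddings; values are purely illustrative.
labels = {
    "poodle": normalize(np.array([1.0, 0.0, 0.0])),
    "piggy bank": normalize(np.array([0.0, 1.0, 0.0])),
}

# An unaltered poodle photo embeds close to the "poodle" text direction.
img_poodle = normalize(np.array([0.9, 0.1, 0.1]))

# Rendering dollar signs over the photo pushes the embedding toward
# money-related concepts shared with "piggy bank".
img_poodle_with_dollars = normalize(np.array([0.5, 0.9, 0.1]))

def classify(img, labels):
    # Pick the label whose text embedding is most similar to the image.
    sims = {name: float(img @ vec) for name, vec in labels.items()}
    return max(sims, key=sims.get)

print(classify(img_poodle, labels))               # → poodle
print(classify(img_poodle_with_dollars, labels))  # → piggy bank
```

The attack works because a single shared embedding space makes textual and visual evidence interchangeable: text painted into the pixels competes directly with what the picture actually depicts.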
System
OpenAI. CLIP: Connecting text and images
OpenAI. CLIP code
Radford, A. et al. (2021). Learning Transferable Visual Models From Natural Language Supervision
OpenAI (2021). Multimodal Neurons in Artificial Neural Networks
News, commentary, analysis
Page info
Type: Incident
Published: October 2023