MIT scientists' psychopathic AI paints dark future

Occurred: April 2018-June 2018

Researchers at MIT created an AI system nicknamed "Norman" that demonstrated how AI can develop biased and disturbing behaviours based on the data it is trained on.

Norman, an image captioning AI trained exclusively on violent and disturbing content from a corner of Reddit dedicated to death and gore, had its responses to Rorschach inkblot tests compared with those of a standard AI trained on more typical data.

While the standard AI described benign scenes such as "a group of birds sitting on a tree branch" or "a black and white photo of a baseball glove," Norman interpreted the same inkblots as "a man is electrocuted and catches to death" or "man is murdered by machine gun in broad daylight".

The MIT team's goal was to demonstrate that AI bias stems from the data used to train it, not inherently from the algorithms themselves. They wanted to highlight the importance of carefully selecting training data to avoid unintended biases in AI systems.
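To illustrate the point in the simplest possible terms, the toy sketch below (not the MIT team's code; the corpora, captions, and functions `train_bigram` and `generate` are invented for illustration) runs the same trivial bigram "captioner" on two different training sets and gets very different descriptions of the same ambiguous input, purely because of the data it was fed.

```python
# Illustrative sketch only: identical code, two training corpora, divergent outputs.
import random
from collections import defaultdict

def train_bigram(corpus):
    """Build a simple bigram table: word -> list of possible next words."""
    table = defaultdict(list)
    for caption in corpus:
        words = caption.lower().split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def generate(table, seed, max_words=8):
    """Random walk through the bigram table, starting from a seed word."""
    words = [seed]
    for _ in range(max_words - 1):
        options = table.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Two corpora describing similarly ambiguous shapes in very different terms.
benign_corpus = [
    "a figure of birds sitting on a branch",
    "a figure of a vase holding flowers",
]
dark_corpus = [
    "a figure of a man pulled into a machine",
    "a figure of a shadow looming over a victim",
]

random.seed(0)
print("standard:", generate(train_bigram(benign_corpus), "a"))
print("norman:  ", generate(train_bigram(dark_corpus), "a"))
```

The model and code are identical in both cases; only the training data differs, which is the core of the point the MIT researchers were making.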

The experiment raised important questions about the potential dangers of AI systems trained on biased or inappropriate data. It also underscored the need for ethical considerations in AI development and for diverse, representative datasets to create fair and unbiased AI systems.