The growing sophistication of IBM's Project Debater, an autonomous AI system that can debate competitively with humans, prompted concerns about the extent to which such systems could be used for manipulation in real-world settings.
IBM researchers working on the project published a paper in the journal Nature showing that Project Debater can debate competitively with humans. In a series of tests, the system was given 15 minutes to research a topic and prepare its arguments. Humans won most of the debates, but in one instance the system changed the stance of nine people.
When tasked with a debate, Project Debater scans the internet for prior research and quotes well-known phrases used by respected figures in the relevant field. It also uses IBM's Watson system to listen to its opponents' arguments and then searches for rebuttals that others have made to similar claims.
In a Nature editorial, the University of Dundee's Chris Reed made the case for stronger oversight and greater transparency of language models such as Project Debater and OpenAI's GPT-3, in order to reduce their potential for manipulation and harm.