ChatGPT imitates users' voices without permission

Occurred: August 2024

OpenAI's new GPT-4o model imitated users' voices without their permission during safety testing, raising concerns about privacy, security, and the technology's potential for misuse.

During testing, the model's "Advanced Voice Mode" feature occasionally and unintentionally imitated users' voices, according to the "system card" OpenAI released for GPT-4o, which details the model's limitations and safety testing procedures.

The card references a recording of an interaction in which the model abruptly exclaimed "No!" and then continued speaking in a voice similar to the user's.

OpenAI says it has implemented safeguards to prevent such occurrences, including limiting the model to preset voices created in collaboration with voice actors and using output classifiers to detect and block any audio output that deviates from those approved voices.
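
The system card does not publish implementation details, but the safeguard it describes can be pictured as a speaker-verification check on generated audio. The sketch below is a minimal, hypothetical illustration of that idea, assuming a speaker-embedding model and a cosine-similarity threshold; the embedding size, threshold value, and function names are assumptions for illustration, not OpenAI's actual classifier.

```python
import numpy as np

# Hypothetical sketch of an output classifier that blocks audio whose
# speaker characteristics deviate from a set of approved preset voices.
# Embeddings, threshold, and names are illustrative assumptions only.

APPROVED_VOICE_EMBEDDINGS = {
    "voice_a": np.random.rand(256),  # placeholder speaker embeddings
    "voice_b": np.random.rand(256),
}
SIMILARITY_THRESHOLD = 0.85  # assumed cutoff for "matches an approved voice"


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_approved_voice(output_embedding: np.ndarray) -> bool:
    """Return True if the generated audio is close enough to any preset voice."""
    return any(
        cosine_similarity(output_embedding, ref) >= SIMILARITY_THRESHOLD
        for ref in APPROVED_VOICE_EMBEDDINGS.values()
    )


def filter_audio_output(output_embedding: np.ndarray) -> bool:
    """Allow the output only if its voice matches a preset; block otherwise."""
    if not is_approved_voice(output_embedding):
        # In a production system this would trigger blocking or regeneration.
        return False
    return True
```

In practice such a check would run on embeddings extracted from the model's generated audio; the design choice of comparing against a small fixed set of preset voices is what makes unintended voice imitation detectable at all.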

The disclosure prompted wider discussion of the ethical implications and potential misuse of voice cloning in AI systems.

System 🤖

Operator: OpenAI
Developer: OpenAI
Country: USA