ChatGPT imitates users' voices without permission

Occurred: August 2024


OpenAI's new GPT-4o model imitated users' voices without their permission, sparking concerns about privacy, security, and the potential for misuse.

During testing, the model's "Advanced Voice Mode" feature occasionally and unintentionally imitated users' voices, according to the "system card" OpenAI released for GPT-4o, which details the model's limitations and safety testing procedures.

The card links to a recording of an interaction in which the model abruptly exclaimed "No!" and then continued speaking in a voice similar to the user's.

OpenAI says it has implemented safeguards to prevent such occurrences, including using preset voices created in collaboration with voice actors, developing output classifiers to detect deviations from approved voices, and blocking outputs that do not match predetermined voices.
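OpenAI has not published the internals of its output classifier, but the general idea it describes, checking generated audio against a fixed set of approved voices and blocking anything that deviates, can be sketched with a simple similarity test over speaker embeddings. Everything below is a hypothetical illustration, not OpenAI's implementation: the function names, the toy three-dimensional embeddings, and the similarity threshold are all assumptions for the sake of the sketch.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_approved_voice(output_embedding, preset_embeddings, threshold=0.85):
    # Accept the output only if its voice embedding is close enough
    # to at least one preset (approved) voice embedding.
    return any(
        cosine_similarity(output_embedding, preset) >= threshold
        for preset in preset_embeddings
    )

# Toy vectors standing in for real speaker embeddings of the preset voices.
presets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]

print(is_approved_voice([0.98, 0.05, 0.0], presets))  # near a preset: allowed
print(is_approved_voice([0.5, 0.5, 0.7], presets))    # deviates: blocked
```

In a real system the embeddings would come from a trained speaker-verification model and the threshold would be tuned against false accepts and rejects; the sketch only shows the gating logic.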

The revelation sparked discussions about the ethical implications and potential misuse of voice cloning technology in AI systems.

System 🤖

Documents 📃

Operator: OpenAI
Developer: OpenAI
Country: USA
Sector: Multiple
Purpose: Create voices
Technology: Chatbot; Generative AI; Machine learning
Issue: Dual/multi-use; Privacy; Security