Hour One AI 'character' clones accused of allowing misuse

Occurred: August 2021


An AI system that enables anyone to create a full digital clone of themselves speaking on camera in any language was accused of providing inadequate usage and protection controls.

Tel Aviv-based 'video transformation company' Hour One pays people to have their faces deepfaked so that their 'characters' can be used in promotional, commercial, and educational videos. The company said it had a library of around 100 characters.

Critics pointed out that Hour One's ethics policy provided little information or guidance about how personal clones could be used or misused by its customers, such as for fraud, extortion, and other crimes. It also said little about how the company would protect the privacy of the people whose characters it was synthesising.

Others noted that the system posed a threat to voice artists and other creative professionals, whose voices could be appropriated in ways that jeopardise their reputations and ability to earn an income.

System

Documents

Operator: Hour One
Developer: Hour One
Country: Israel
Sector: Business/professional services
Purpose: Market products/services
Technology: Computer vision
Issue: Employment; Ethics/values; Privacy; Security
Transparency: Governance; Marketing

Page info
Type: Incident
Published: August 2021
Last updated: October 2023