Microsoft Copilot can be turned into automated phishing machine
Occurred: August 2024
Microsoft's AI-powered Copilot chatbot can be exploited by malicious actors for automated phishing and data extraction, according to researchers.
Former Microsoft security architect Michael Bargury demonstrated at the Black Hat USA cybersecurity conference that hackers can use Copilot to analyse a target's communication patterns, mimic their writing style, including their emoji usage, and generate and send hundreds of personalised phishing emails within minutes.
Bargury also demonstrated how specific "magic words", a form of prompt injection, could be used to circumvent Microsoft's existing security controls on Copilot.
While these are proof-of-concept demonstrations, they mirror known techniques for manipulating large language models.
The findings raise questions about the robustness of Copilot's security controls and about the potential harms the system poses, from breaches of individual privacy and corporate confidentiality to financial manipulation and other negative impacts.
System 🤖
Operator: Zenity
Developer: Microsoft
Country: Global
Sector: Technology