Audio deepfake scam imitates Italian defence minister Guido Crosetto
Occurred: February 2025
A sophisticated AI-powered scam that used audio deepfakes to impersonate Italy’s Defence Minister persuaded the country’s business elite to part with significant sums of money.
Scammers used AI to clone Guido Crosetto’s voice and those of his staff, contacting prominent Italian entrepreneurs — including Giorgio Armani, Massimo Moratti, and Patrizio Bertelli — under the pretence of raising funds to secure the release of Italian journalists allegedly held hostage in the Middle East.
The fraudsters spoofed phone numbers to appear as if calls were coming from official government lines, further increasing the scheme’s credibility.
At least one victim, former Inter Milan owner Massimo Moratti, was duped into transferring approximately one million euros to a Hong Kong bank account, believing the Bank of Italy would reimburse him.
Authorities managed to recover the stolen funds, but the incident exposed high-profile individuals to significant financial and reputational risk.
The scam’s success relied on recent real-world events — specifically, the high-profile detention and release of Italian journalist Cecilia Sala in Iran — which lent urgency and plausibility to the ransom narrative.
The use of advanced AI voice-cloning technology made the impersonation highly convincing, while spoofed communications created a false sense of authenticity.
The attackers exploited both technological vulnerabilities and the trust networks among Italy’s elite.
Among the victims were fashion designer Giorgio Armani, Massimo Moratti, and members of the Beretta and Menarini families.
The criminals directed the victims to transfer the funds to a Hong Kong bank account.
For the direct victims, the scam resulted in financial losses, emotional distress, and potential reputational harm, though swift police action mitigated some of the damage.
Indirectly, the incident raised alarm across the business and political communities about the risks posed by audio deepfakes and the need for heightened vigilance and verification protocols.
More broadly, the case underscores how AI-driven impersonation scams can target even the most security-conscious individuals, signaling a new era of cyber-enabled social engineering threats for society at large.
Deepfake
Deepfakes (a portmanteau of 'deep learning' and 'fake') are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.
Source: Wikipedia
Unknown
Operator:
Developer:
Country: Italy
Sector: Politics
Purpose: Defraud
Technology: Deepfake
Issue: Authenticity; Impersonation; Security
Page info
Type: Incident
Published: May 2025