Freysa crypto AI agent manipulated to reduce prize money pool
Occurred: November 2024
Page published: October 2025
An autonomous AI agent was successfully manipulated to release a cryptocurrency prize pool to a single participant, prompting concerns about the risks and limitations of deploying these kinds of agents in financial contexts.
Freysa, an autonomous AI agent controlling a prize pool funded by user message fees, was tricked into transferring nearly USD 47,000 in cryptocurrency to user p0pular.eth.
The system was designed so that users paid increasing fees to attempt to convince Freysa to send the funds, with most failed attempts simply adding to the prize pool.
After 481 unsuccessful attempts by other participants, p0pular.eth exploited Freysa's internal logic, specifically the distinction between the functions handling incoming and outgoing transfers. By reframing an outgoing transfer as a permitted incoming one, the user induced Freysa to approve the call and release the entire prize pool to their wallet.
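The mechanics can be illustrated with a short, hypothetical sketch. The Python below is not Freysa's actual code; the tool names, dispatcher, and final call are assumptions used to show how an agent whose only safeguard is its own reading of a prompt can be talked into calling the forbidden payout function.

```python
"""Illustrative sketch of the exploit pattern described above.

Not Freysa's implementation: the tool names, dispatcher, and example call
are assumptions used to show the failure mode.
"""

def approve_transfer(recipient: str) -> str:
    # Releases the prize pool -- the action the game rules are meant to forbid.
    return f"prize pool released to {recipient}"

def reject_transfer(reason: str) -> str:
    # The intended default outcome for every user message.
    return f"rejected: {reason}"

TOOLS = {"approveTransfer": approve_transfer, "rejectTransfer": reject_transfer}

def dispatch(model_tool_choice: str, argument: str) -> str:
    """Execute whichever tool the language model names.

    The agent's only protection is the model's interpretation of its
    instructions, so a user message that redefines approveTransfer as the
    handler for *incoming* contributions can flip the model's choice
    without touching the smart contract at all.
    """
    return TOOLS[model_tool_choice](argument)

# A manipulated model, convinced that approveTransfer accepts deposits rather
# than sending funds out, names the forbidden tool and the agent runs it.
print(dispatch("approveTransfer", "p0pular.eth"))
```

The point of the sketch is that the gate on the irreversible action lives entirely in natural language, which an attacker is free to rewrite.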
The manipulation not only drained the pool but also meant that dozens of prior participants lost the fees they had invested.
The incident was enabled by Freysa's transparency and the openness of its logic, but also by an ambiguous coding directive that made it possible for a well-crafted prompt to bypass its protections.
The smart contract and game rules were open source, yet insufficiently hardened against sophisticated manipulation, especially prompt injection attacks, which allowed an attacker to frame function calls in a way that tricked the agent.
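One way to read the "insufficiently hardened" criticism is that the irreversible action could also have been gated by a check enforced in code, outside the language model. The sketch below is a hypothetical hardening, not a description of how Freysa was or should have been built; the policy flag and function names are assumptions.

```python
"""Hedged sketch of a deterministic guard around the payout tool.

Hypothetical only: the policy flag and tool names are assumptions, not part
of Freysa's design.
"""

PAYOUT_ENABLED = False  # policy decided in code or by governance, never by the model

def approve_transfer(recipient: str) -> str:
    return f"prize pool released to {recipient}"

def reject_transfer(reason: str) -> str:
    return f"rejected: {reason}"

def guarded_dispatch(tool_name: str, argument: str) -> str:
    """Run a model-chosen tool only if a non-LLM policy permits it.

    The model can still be talked into naming approveTransfer, but the agent
    refuses to execute it unless the payout condition holds outside the
    prompt, so clever wording alone cannot drain the pool.
    """
    if tool_name == "approveTransfer" and not PAYOUT_ENABLED:
        return reject_transfer("outgoing transfers are disabled at the code level")
    tools = {"approveTransfer": approve_transfer, "rejectTransfer": reject_transfer}
    return tools[tool_name](argument)

print(guarded_dispatch("approveTransfer", "p0pular.eth"))
```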
Some observers suggested that the winner may have had insider knowledge, pointing to possible accountability and governance gaps in the bot's deployment and oversight. The game's financial incentives also encouraged risky behaviour among participants.
Many participants lost the funds they invested trying to win the prize, while only one user benefited, illustrating the risks of financial harm and unfair outcomes in AI-driven game-theoretic settings. This calls for ethical design that limits exploitability and protects users from disproportionate losses.
More broadly, the event highlighted the risks and limitations of deploying autonomous AI agents in financial contexts, raising issues concerning protocol security, user understanding of and trust in the system, and the potential for AI-based systems to be manipulated through prompt engineering.
It underlines the necessity for rigorous design, transparency, and ongoing oversight to prevent similar incidents in the future.
Agentic AI
Agentic AI is a class of artificial intelligence that focuses on autonomous systems that can make decisions and perform tasks without human intervention.
Source: Wikipedia
Freysa
Developer: Eric Conner
Country: Global
Sector: Banking/financial services
Purpose: Engage gamers
Technology: Agentic AI; Blockchain; Bot/intelligent agent
Issue: Accountability; Autonomy; Security; Transparency
https://the-decoder.com/hacker-wins-47000-by-tricking-ai-chatbot-with-smart-prompting/
https://www.coinspeaker.com/freysa-ai-surrenders-47000-prize-clever-user-exploits-language-loophole/
https://cointelegraph.com/news/crypto-user-convinced-ai-bot-transfer-47k
https://www.cryptopolitan.com/crypto-user-wins-47000-prize-freysa-ai/
AIAAIC Repository ID: AIAAIC2055