Thousands of OpenClaw AI agent servers exposed to hackers
Occurred: February 2026
Page published: February 2026
Tens of thousands of OpenClaw autonomous AI agent servers were left publicly reachable online with little or no security, enabling hackers to access sensitive credentials, execute remote code, and potentially take over host systems.
Security researchers and threat intelligence teams discovered that more than 42,000 instances of the OpenClaw AI agent were directly exposed on the open internet across at least 80 countries.
Many of these instances were configured with no authentication and vulnerable to remote code execution, meaning unauthorised users could control the systems they ran on and access stored API keys, chat histories, and credentials.
OpenClaw is an open-source, agentic AI framework (originally released as Clawdbot, later renamed Moltbot) used by developers to automate tasks such as managing emails, files, and messages. Its control panels, when exposed, acted as gateways for attackers.
Security reports also flagged hundreds of malicious third-party “skills” in the ecosystem that could steal data or deliver malware.
The incident stems from near-nonexistent default security protections:
Many OpenClaw installs bound administrative interfaces to all network interfaces by default, making them reachable from the public internet without authentication.
Default settings and protocols (like the control panel and Model Context Protocol) lacked mandatory authentication or least-privilege checks.
The third-party “skill” marketplace (ClawHub) had no robust review or security sandboxing, allowing malware and credential-stealing code to proliferate.
Together, these factors created an enormous, unmanaged attack surface, akin to poorly secured legacy services suddenly exposed to internet-wide scanning tools.
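To illustrate the core misconfiguration described above, the following is a minimal, hypothetical sketch in Python (not OpenClaw's actual code) showing how binding an unauthenticated admin interface to all network interfaces differs from binding it to loopback only. The handler, port choice, and response body are all illustrative assumptions.

```python
# Hypothetical sketch of the bind-address issue; this is NOT OpenClaw code.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AdminHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # An unauthenticated handler like this, reachable on 0.0.0.0,
        # hands control to anyone who can route a packet to the port.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"admin panel")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Risky default: reachable from every network interface, no authentication.
exposed = ("0.0.0.0", 8080)

# Safer default: loopback only, so remote access requires an explicit,
# deliberately configured tunnel (e.g. SSH port forwarding).
# Port 0 lets the OS pick a free port for this demo.
local_only = ("127.0.0.1", 0)

server = HTTPServer(local_only, AdminHandler)  # choose the safe binding
print(server.server_address[0])
```

A loopback-only binding does not make the service secure by itself, but it removes the service from internet-wide scanners, which is exactly the exposure the reports describe.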
The incident serves as a landmark warning for the era of "Agentic AI":
For impacted users and organisations: It demonstrates that giving an AI "agency" (i.e. the power to act on your behalf) is fundamentally different from using a chatbot. A mistake or a hack is no longer just a "hallucination" in text; it is a real action on your data or finances.
For society and security policy: The emergence of "Shadow AI" (employees installing unvetted AI tools on work laptops) creates a massive new attack surface. One compromised personal agent can become a backdoor into an entire corporate network, bypassing traditional firewalls.
For policymakers: This incident highlights the need for security-by-default standards for AI agents. Regulators may need to mandate "human-in-the-loop" requirements for high-risk actions (like deleting data or moving funds) and hold developers accountable for the safety of automated "plugin" marketplaces.
OpenClaw
Developer: Peter Steinberger
Country: Multiple
Sector: Multiple
Purpose: Act as 24/7 personal digital employee
Technology: Agentic AI; Machine learning
Issue: Security
AIAAIC Repository ID: AIAAIC2199