A recently patched vulnerability in OpenClaw, a rapidly growing open-source AI agent tool, demonstrates the necessity of structural security controls when deploying agentic AI. The issue, responsibly disclosed by Oasis Security, allowed unauthorized websites to gain control of a developer's AI agent without requiring browser extensions or direct interaction. The vulnerability originated from OpenClaw's local gateway implicitly trusting all connections from the host machine, including unauthorized WebSocket requests initiated by browser sessions.
High-severity vulnerability and remediation
The OpenClaw maintainers classified the vulnerability as high severity and released a patch within 24 hours of receiving the disclosure from Oasis Security. Oasis researchers advised organizations to apply the fix immediately: "The fix for this vulnerability is included in version 2026.2.25 and later. Ensure all instances are updated — treat this with the same urgency as any critical security patch."
OpenClaw, previously known as MoltBot and Clawdbot, operates locally as a personal AI assistant. It integrates with messaging applications and developer tools, enabling users to automate workflows, manage files, execute shell commands, and perform various autonomous actions. Developers can extend its capabilities through community-built plugins, called "skills," available on the ClawHub marketplace. This flexibility and local execution model have driven significant adoption; within three months of its launch, OpenClaw became the most starred project on GitHub, surpassing the React JavaScript library.
Expanded security considerations
The rapid adoption of OpenClaw has prompted increased security research, identifying vulnerabilities such as CVE-2026-25253, which allowed unauthorized access to authentication tokens. Researchers have also documented issues involving command formatting and prompt processing, including CVE-2026-24763, CVE-2026-25157, and CVE-2026-25475.
The community marketplace presents additional supply chain considerations. Researchers at Koi Security identified that over 820 of the 10,700 skills available on ClawHub contained unsafe or unauthorized code, up significantly from early February. Trend Micro also observed threat actors using 39 skills across ClawHub and SkillsMP to distribute the Atomic macOS information stealer.
The vulnerability identified by Oasis Security centered on the gateway's trust model. OpenClaw assumed that any connection originating from the local host machine was inherently safe. However, standard browser configurations allow web pages to initiate connections to localhost. Oasis researchers demonstrated that if a user visited a malicious or compromised website, JavaScript on that page could silently open a WebSocket connection directly to the OpenClaw gateway.
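The handshake itself offers a signal a gateway can use to distinguish these connections: browsers attach an Origin header to every WebSocket handshake identifying the page that opened it. A minimal sketch of such a check, with hypothetical names (this is not OpenClaw's actual code):

```python
# Hypothetical sketch: validating the Origin header during a WebSocket
# handshake so a local gateway can reject browser-initiated connections.
# The function name and allowlist are illustrative, not OpenClaw's API.

ALLOWED_ORIGINS = {"app://openclaw"}  # hypothetical trusted origin


def is_handshake_origin_ok(headers: dict) -> bool:
    """Refuse upgrade requests whose Origin belongs to a foreign web page.

    A connection opened by JavaScript on an external site arrives carrying
    that site's origin even though the TCP connection comes from localhost.
    Requests with no Origin header (native, non-browser clients) pass this
    check but must still authenticate normally.
    """
    origin = headers.get("Origin")
    if origin is None:
        return True
    return origin in ALLOWED_ORIGINS
```

Under this check, a script on an attacker-controlled page presents its own page's origin and is refused before any password guessing can begin.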
Because OpenClaw did not implement rate limiting or failure thresholds for incorrect passwords on these local connections, an unauthorized script could rapidly guess the gateway password. Once authenticated, the external session could register as a trusted device, granting an unauthorized party full administrative access to the affected developer's system and connected accounts.
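The missing brute-force control can be sketched as a per-client failure threshold with a lockout window. The limits and names below are illustrative, not the patched gateway's actual implementation:

```python
import time
from collections import defaultdict

# Hypothetical sketch: lock out a client after repeated failed password
# attempts within a rolling window. Thresholds are illustrative.
MAX_FAILURES = 5
LOCKOUT_SECONDS = 300


class AuthThrottle:
    def __init__(self):
        # client identifier -> timestamps of recent failed attempts
        self._failures = defaultdict(list)

    def record_failure(self, client: str, now: float = None) -> None:
        now = time.monotonic() if now is None else now
        self._failures[client].append(now)

    def is_locked_out(self, client: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Keep only failures inside the lockout window, then compare count.
        recent = [t for t in self._failures[client]
                  if now - t < LOCKOUT_SECONDS]
        self._failures[client] = recent
        return len(recent) >= MAX_FAILURES
```

Even a crude threshold like this turns a password-guessing loop that succeeds in seconds into one that stalls after a handful of attempts.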
Securing AI agent deployments
This finding aligns with broader industry discussions regarding the governance of agentic AI tools. Randolph Barr, chief information security officer at Cequence Security, notes that these integrations require dedicated security boundaries between AI agents and the applications, APIs, and credentials they access. While OpenClaw includes baseline safeguards such as device limits and sandboxing options, Barr cautions that local execution with broad file and system access inherently carries elevated risk.
Barr recommends a defense-in-depth approach. "The real protection comes from layered defenses, MDM enforcement, removing admin rights, scoped credentials, API monitoring, rate limiting, and sandboxing," he says. "Those measures won't stop every exploit, but they significantly reduce blast radius and limit what an attacker can do if an agent is compromised."
To safeguard these environments, Barr suggests that organizations with mature identity and logging programs shift security controls to the execution layer, transitioning from initial authentication to continuous behavioral verification for non-human identities.
Jason Soroko, senior fellow at Sectigo, advises organizations to treat any browser-reachable local AI gateway with the same rigor as an external-facing service. "Remove the browser's path to it where possible by using Unix domain sockets or named pipes, or by interposing a native companion that owns the connection," he says.
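One way to remove the browser's path, per Soroko's suggestion, is to bind the gateway to a Unix domain socket rather than a TCP port: browser JavaScript can only open WebSockets to host-and-port addresses, so a filesystem socket is unreachable from any web page. A minimal sketch under that assumption (the path and permissions are illustrative):

```python
import os
import socket

# Hypothetical sketch: a gateway listening on a Unix domain socket.
# The socket path is an illustrative choice, not an OpenClaw default.
SOCKET_PATH = "/tmp/agent-gateway.sock"


def serve_on_unix_socket(path: str = SOCKET_PATH) -> socket.socket:
    """Bind a listening socket that only local processes can reach."""
    if os.path.exists(path):
        os.unlink(path)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(path)
    os.chmod(path, 0o600)  # restrict connections to the owning user
    server.listen()
    return server
```

Filesystem permissions then do double duty: the socket is invisible to web content and restricted to the user account that owns the agent.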
Soroko also recommends enforcing strict origin allowlisting, requiring cryptographic client identity such as mTLS, and disabling automatic approval based solely on source IP addresses. "Then shrink the blast radius even when a session is established," Soroko adds. "Adopt a capability model that scopes what the agent can do by verb, directory, destination, and time, with step-up consent for high-risk sinks like shell execution, credential access, and large data reads."
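The capability model Soroko describes might be sketched as a grant that scopes which verbs a session may use and which directories it may touch, with step-up consent gating high-risk verbs. All names here are hypothetical, not an OpenClaw API:

```python
from dataclasses import dataclass, field
from pathlib import Path

# Hypothetical sketch of a per-session capability grant. Verb names and
# the high-risk set are illustrative.
HIGH_RISK_VERBS = {"shell.exec", "credentials.read"}


@dataclass
class Capability:
    verbs: set           # verbs this session may invoke
    roots: set           # directories the session may act under
    consented: set = field(default_factory=set)  # step-up approvals

    def allows(self, verb: str, target: Path) -> bool:
        if verb not in self.verbs:
            return False
        # High-risk sinks require explicit user consent per session.
        if verb in HIGH_RISK_VERBS and verb not in self.consented:
            return False
        resolved = target.resolve()
        return any(resolved.is_relative_to(root) for root in self.roots)
```

Scoping by destination and time, which Soroko also mentions, would extend the same structure with allowed network endpoints and grant expiry.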