OpenClaw proves agentic AI security is enterprise’s most urgent blind spot
- Agentic AI security concerns are mounting as governments and companies scramble to respond to OpenClaw’s rapid spread, and the risks go well beyond one open-source tool
- China has restricted state banks and agencies from running it. Meta has threatened to fire staff who install it
An open-source AI agent that books your flights, clears your inbox, and drafts your reports sounds like a useful tool. OpenClaw, which has undergone three name changes in as many months due to trademark disputes, does all of that and apparently a great deal more than its users intended. The tool has become the fastest test case for a question the enterprise technology world has been quietly dreading: what happens when an AI system that can act autonomously meets corporate infrastructure with no guardrails in place?
The answer, so far, has been messy.
SecurityScorecard’s STRIKE team found over 135,000 OpenClaw instances exposed to the public internet across 82 countries, with more than 15,000 directly vulnerable to remote code execution. A separate analysis found that roughly 12% of the entire ClawHub skills registry – OpenClaw’s public marketplace for plugins – had been compromised with malicious code, including tools that installed keyloggers on Windows or Atomic Stealer malware on macOS.
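Exposure at this scale usually comes down to something mundane: a local agent service left listening beyond the loopback interface. The following is a minimal self-check sketch, not OpenClaw's actual tooling; the port number and the idea of a single "gateway" listener are assumptions for illustration:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP listener accepts connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposure_report(port: int) -> str:
    """Coarse label: is a service reachable only on loopback, or beyond it?

    Note: resolving the machine's own LAN address via gethostbyname can
    vary by environment (containers, VPNs); this is illustrative only.
    """
    local = is_listening("127.0.0.1", port)
    try:
        lan_addr = socket.gethostbyname(socket.gethostname())
    except OSError:
        lan_addr = "127.0.0.1"
    public = lan_addr != "127.0.0.1" and is_listening(lan_addr, port)
    if public:
        return "exposed beyond loopback"
    if local:
        return "loopback only"
    return "not listening"
```

A service that answers on the machine's LAN or public address, rather than only on 127.0.0.1, is exactly the kind of instance the STRIKE scan would have found.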
One user reported the agent “went rogue” and spammed hundreds of messages after gaining access to iMessage. The response from governments and companies has been swift. Chinese authorities moved to restrict state-run enterprises and government agencies, including the largest banks, from running OpenClaw on office devices, citing potential security risks.
China’s CNCERT, the country’s primary cybersecurity technical body, issued a second warning this week even as major cloud providers from Alibaba, Tencent, and ByteDance were actively promoting OpenClaw deployment, according to a South China Morning Post report. The gap between adoption enthusiasm and security caution has rarely been this visible.
Meta has warned employees that installing OpenClaw on work devices is strictly prohibited, with those who do so anyway reportedly facing termination. Microsoft’s Defender Security Research Team put it bluntly: “OpenClaw should be treated as untrusted code execution with persistent credentials. It is not appropriate to run on a standard personal or enterprise workstation.”
The agentic AI security problem is structural
The risks here are not the product of sloppy coding that patches will eventually fix. They are intrinsic to what agentic AI is designed to do. Researchers describe a “lethal trifecta”: AI agents with access to private data, the ability to communicate externally, and the ability to ingest untrusted content.
OpenClaw, by design, ticks all three boxes. “The more access you give them, the more fun and interesting they’re going to be — but also the more dangerous,” said Colin Shea-Blymyer, a research fellow at Georgetown’s Center for Security and Emerging Technology. The same autonomy that makes the tool compelling is what makes it a liability in enterprise environments.
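The trifecta lends itself to a simple pre-deployment policy check. This is a minimal sketch assuming a hypothetical capability manifest; the field names are illustrative, not OpenClaw's actual configuration format:

```python
# The three "lethal trifecta" capabilities researchers warn about.
# Names here are illustrative labels, not a real manifest schema.
TRIFECTA = {
    "private_data_access",   # e.g. mailbox, file system, CRM
    "external_comms",        # e.g. outbound HTTP, messaging APIs
    "untrusted_input",       # e.g. ingests web pages, inbound email
}

def trifecta_risk(capabilities: set[str]) -> str:
    """Return a coarse risk label for an agent's declared capabilities."""
    hits = TRIFECTA & capabilities
    if hits == TRIFECTA:
        return "critical: full lethal trifecta"
    if len(hits) == 2:
        return "high: one capability short of the trifecta"
    return "lower: trifecta not complete"

agent = {"private_data_access", "external_comms", "untrusted_input"}
print(trifecta_risk(agent))  # critical: full lethal trifecta
```

The point of the check is that no single capability is the problem; it is the combination that turns a useful assistant into an exfiltration path.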
According to CrowdStrike, if employees deploy OpenClaw on corporate machines connected to enterprise systems and leave it misconfigured, it can be turned into an AI backdoor capable of taking instructions from adversaries. Worse still, traditional security tooling offers little protection.
Endpoint security sees processes running but cannot interpret agent behaviour; network tools see API calls but cannot distinguish legitimate automation from compromise; identity systems see OAuth grants but do not flag AI agent connections as unusual.
A Gartner report has characterised OpenClaw as “a dangerous preview of agentic AI, demonstrating high utility but exposing enterprises to ‘insecure by default’ risks like plaintext credential storage.”
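The “insecure by default” pattern Gartner describes can be surfaced with even a crude audit. Below is a minimal sketch that scans a config directory for credential-like plaintext lines; the directory layout and key names are hypothetical, not OpenClaw's real file structure:

```python
import re
from pathlib import Path

# Crude heuristic for plaintext credentials in config files.
# The keyword list and value shape are illustrative, not exhaustive.
SECRET_PATTERN = re.compile(
    r'(api[_-]?key|token|secret|password)\s*[:=]\s*["\']?[\w\-]{8,}',
    re.IGNORECASE,
)

def find_plaintext_secrets(config_dir: str) -> list[tuple[str, int]]:
    """Return (file, line_number) pairs where a credential-like line appears."""
    findings = []
    for path in Path(config_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SECRET_PATTERN.search(line):
                findings.append((str(path), lineno))
    return findings
```

A scan like this will produce false positives, but any hit is a token that an agent plugin, or malware masquerading as one, could read off disk without ever touching a keychain.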
What the bans are actually telling us
It would be easy to read the wave of restrictions as a story about one controversial tool. The more relevant reading, for enterprise decision-makers in Asia and beyond, is that agentic AI security has no established playbook yet – and the tools are arriving faster than the governance frameworks designed to manage them.
China’s approach illustrates the tension at the heart of this moment: Beijing is simultaneously promoting AI adoption through its national “AI plus” strategy while scrambling to guard against the data and infrastructure risks that come with it. That same tension exists in boardrooms across Southeast Asia, where appetite for AI-driven productivity is high and formal AI security policy is, in most cases, still being written.
Ben Seri, co-founder and CTO of Zafran Security, acknowledged to Fortune that there is little chance of containing user curiosity – but noted that enterprise companies will be much slower to adopt systems that are difficult to control. The problem is that enterprise adoption may not be a deliberate decision at all.
Shadow deployments, where employees connect personal AI tools to corporate Slack channels, email accounts, and internal systems without telling the security team, are already happening.
OpenClaw’s developer, Peter Steinberger, responded quickly to disclosed vulnerabilities, shipping over 40 fixes in a single release and patching the critical ClawJacked flaw within 24 hours of disclosure. That responsiveness is commendable. But as Sophos noted, it does not resolve the underlying architecture: truly empowered agentic AI is arriving fast and will creep into mission-critical workflows before robust ways to secure it exist.
The question enterprises need to be asking is not whether to allow OpenClaw specifically. It is whether they have any visibility at all into the agentic AI already running across their infrastructure and what authority those systems have been quietly handed.
TNG – Latest News & Reviews
