A viral open-source AI agent called OpenClaw has been making waves for its ability to automate everyday tasks with surprising autonomy. But its rapid rise has also triggered alarm bells at some of the biggest names in tech, with Meta and other companies now banning the tool from corporate devices over mounting cybersecurity fears. The situation puts a spotlight on a growing tension between the appeal of AI agents and the very real risks they carry.
- A Meta executive recently told his team to keep OpenClaw off work laptops or face termination, joining other tech leaders who have raised security concerns about the tool.
- OpenClaw has exploded to over 179,000 GitHub stars, running continuously in the background on users’ computers with access to files, email, calendars, and the internet.
- Creator Peter Steinberger launched OpenClaw last November, and he recently joined OpenAI, which says it will keep the tool open source and support it through a foundation.
What Is OpenClaw and Why Is It So Popular?
OpenClaw (formerly known as Clawdbot and Moltbot) is a free, open-source AI agent developed by Austrian software developer Peter Steinberger. It can execute tasks through large language models, using messaging platforms like WhatsApp, Telegram, and Slack as its main interface. Think of it this way: if ChatGPT or Claude is the brain, OpenClaw is the hands. While large language models understand and reason, OpenClaw carries out real-world actions like browsing the web, editing files, running system commands, and interacting with online services through modular add-ons.
The bot’s popularity comes from its ability to complete daily tasks like booking flights or making dinner reservations through messaging apps. OpenClaw also stores persistent memory, meaning it retains context, preferences, and history across sessions. It can automate tasks, run scripts, control browsers, manage calendars and email, and run scheduled automations.
Steinberger launched it as a free, open-source tool last November, but its popularity surged last month as other coders contributed features and began sharing their experiences on social media. The project even went through a bit of an identity crisis, facing trademark complaints from Anthropic over the name “Clawdbot” sounding too similar to “Claude,” leading to a rebrand to Moltbot, then finally to OpenClaw.
Why Tech Companies Are Pulling the Plug
A Meta executive recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs, saying he believes the software is unpredictable and could lead to a privacy breach in secure workplaces. He’s not alone.
At Valere, a tech company that works on software for organizations including Johns Hopkins University, an employee casually posted about OpenClaw on an internal Slack channel. The company’s president quickly responded that use of OpenClaw was strictly banned. Major Korean tech companies, including Kakao, Naver, and Karrot Market, have also moved to restrict OpenClaw within corporate networks. Naver has banned it outright, while Karrot is blocking both access to OpenClaw and Moltbot.
China’s industry ministry has also weighed in, identifying cases where users were running OpenClaw with inadequate security settings and calling for stronger safeguards.
So what exactly are these companies afraid of? Steinberger didn’t release OpenClaw with security in mind, and early versions were bound to a port that left tens of thousands of cloud server instances exposed to the entire internet. The ClawHub skills marketplace has also been a headache. Security researcher Paul McCarty found malware within two minutes of looking at it and quickly identified 386 malicious packages from a single threat actor.
An attack could be as simple as someone sending an OpenClaw-controlled email account a message asking it to reply with the contents of a password manager. This risk is gaining traction under the name “lethal trifecta,” where AI agents have access to private data, the ability to communicate externally, and the ability to access untrusted content.
A Tool That’s Only Useful When It’s Risky
One of the hardest truths about OpenClaw is the catch-22 at its center. Much of the risk comes from what developers call “skills,” which are apps or plugins the AI agent uses to take actions. Unlike a normal app, OpenClaw decides on its own when to use these skills and how to chain them together, meaning a small permission mistake can quickly snowball.
The initial wave of enthusiasm was quickly tempered by security researchers calling out the risks of giving an AI agent wide-open access to a local system, personal data, and cloud credentials. Research suggests that over 30,000 OpenClaw instances were exposed on the internet, and threat actors are already discussing how to weaponize OpenClaw skills for botnet campaigns.
Some companies are trying to find a middle ground. Jan-Joost den Brinker, CTO at Prague-based Durbink, bought a dedicated machine not connected to company systems so employees could play around with OpenClaw. Valere eventually allowed its research team to run OpenClaw on an old computer, with the sole goal of identifying flaws and possible fixes. That team later recommended limiting who can give orders to OpenClaw and requiring a password for its control panel to prevent unwanted access.
What Happens Next for OpenClaw?
Steinberger is now joining OpenAI after weeks of being courted by multiple AI players, including Meta. For OpenClaw fans, Sam Altman confirmed that OpenClaw will continue as an open-source project under a foundation, with OpenAI backing the project going forward.
But the security problems won’t vanish overnight. As long as AI agents need to process untrusted content to be useful, prompt injection remains unfixable. Cybersecurity professionals have publicly urged companies to strictly control how their workforces use OpenClaw, and recent bans show how organizations are moving quickly to put security ahead of their desire to experiment.
The OpenClaw story is a preview of what’s coming as AI agents become more capable and more common. Organizations will have to figure out how to let people experiment with powerful AI tools without accidentally handing over the keys to the kingdom. For now, many are choosing the safer route: keep it off the work laptop, and try it on something you can afford to lose.
