Sign Up to Our Newsletter

Be the first to know the latest tech updates

Uncategorized

From Clawdbot to OpenClaw: This viral AI agent is evolving fast – and it’s nightmare fuel for security pros

From Clawdbot to OpenClaw: This viral AI agent is evolving fast – and it’s nightmare fuel for security pros


Clawdbot to Moltbot to OpenClaw: The evolving AI bot giving the security community whiplash
Elyse Betters Picaro / ZDNET

Follow ZDNET: Add us as a preferred source on Google.


ZDNET’s key takeaways

  • Clawdbot has rebranded again, completing its “molt” into OpenClaw.
  • Security is a “top priority,” but new exploits surfaced over the weekend.
  • Experts warn against the hype without understanding the risks.

It’s been a wild ride over the past week for Clawdbot, which has now revealed a new name — while opening our eyes to how cybercrime may transform with the introduction of personalized AI assistants and chatbots.

Clawdbot, Moltbot, OpenClaw – what is it?

Dubbed the “AI that actually does things,” Clawdbot began as an open source project launched by Austrian developer Peter Steinberger. The original name was a hat tip to Anthropic’s Claude AI assistant, but this led to IP issues, and the AI system was renamed Moltbot.

Also: OpenClaw is a security nightmare – 5 red flags you shouldn’t ignore (before it’s too late)

This didn’t quite roll off the tongue and was “chosen in a chaotic 5 am Discord brainstorm with the community,” according to Steinberger, so it wasn’t surprising that this name was only temporary. However, OpenClaw, the latest rebrand, might be here to stay — as the developer commented that “trademark searches came back clear, domains have been purchased, migration code has been written,” adding that “the name captures what this project has become.”

The naming carousel aside, OpenClaw is significant to the AI community as it is focused on autonomy, rather than reactive responses to user queries or content generation. It might be the first real example of how personalized AI could integrate itself into our daily lives in the future.

What can OpenClaw do?

OpenClaw is powered by models including those developed by Anthropic and OpenAI. Compatible models users can choose from range from Anthropic’s Claude to ChatGPT, Ollama, Mistral, and more.

While stored on individual machines, the AI bot communicates with users through messaging apps such as iMessage or WhatsApp. Users can select from and install skills and integrate other software to increase functionality, including plugins for Discord, Twitch, Google Chat, task reminders, calendars, music platforms, smart home hubs, and both email and workspace apps. To take action on your behalf, it requires extensive system permissions.

At the time of writing, OpenClaw has over 148,000 GitHub stars and has been visited millions of times, according to Steinberger.

Ongoing security concerns

OpenClaw has gone viral in the last week or so, and when an open-source project captures the imagination of the general public at such a rapid pace, it’s understandable that there may not have been enough time to iron out security flaws.

Still, OpenClaw’s emergence as a viral wonder in the AI space comes with risks for adopters. Some of the most significant issues are:

  • Scammer interest: Due to the project going viral, we’ve already seen fake repos and cryptocurrency scams emerge.
  • System control: If you hand over full system control to an AI assistant able to proactively perform tasks on your behalf, you are creating new attack paths that could be exploited by threat actors, whether via malware, malicious integrations and skills, or through prompts to hijack your accounts or machine.
  • Prompt injections: The risk of prompt injections isn’t limited to OpenClaw — it is a widespread concern in the AI community. Malicious instructions are hidden within an AI’s source material, such as on websites or in URLs, which could cause it to execute malicious tasks or exfiltrate data.
  • Misconfigurations: Researchers have highlighted open instances exposed to the web that leaked credentials and API keys due to improper settings.
  • Malicious skills: One emerging attack vector is malicious skills and integrations that, once downloaded, open backdoors for cybercriminals to exploit. One researcher has already demonstrated this with the release of a backdoored (but safe) skill to the community, which was downloaded thousands of times.
  • Hallucination: AI doesn’t always get it right. Bots can hallucinate, provide incorrect information, and claim to have performed a task when they haven’t. OpenClaw’s system isn’t protected from this risk.

OpenClaw’s latest release includes 34 security-related commits to harden the AI’s codebase, and security is now a “top priority” for project contributors. Issues patched in the past few days include a one-click remote code execution (RCE) vulnerability and command injection flaws.

Also: 10 ways AI can inflict unprecedented damage in 2026

OpenClaw is facing a security challenge that would give most defenders nightmares, but as a project that is now far too much for one developer alone to handle, we should acknowledge that reported bugs and vulnerabilities are being patched quickly.

“I’d like to thank all security folks for their hard work in helping us harden the project,” Steinberger said in a blog post. “We’ve released machine-checkable security models this week and are continuing to work on additional security improvements. Remember that prompt injection is still an industry-wide unsolved problem, so it’s important to use strong models and to study our security best practices.”

The emergence of an AI agent ‘social’ network

In the past week, we’ve also seen the debut of entrepreneur Matt Schlicht’s Moltbook, a fascinating experiment in which AI agents can communicate across a Reddit-style platform. Bizarre conversations and likely human interference aside, over the weekend, security researcher Jamieson O’Reilly revealed the site’s entire database was exposed to the public, “with no protection, including secret API keys that would allow anyone to post on behalf of any agents.”

While at first glance this might not seem like a big deal, one of those agents exposed was linked to Andrej Karpathy, a past director of AI at Tesla.

Also: AI’s scary new trick: Conducting cyberattacks instead of just helping out

“Karpathy has 1.9 million followers on @X and is one of the most influential voices in AI,” O’Reilly said. “Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements appearing to come from him.”

Furthermore, there have already been hundreds of prompt injection attacks reportedly targeting AI agents on the platform, anti-human content being upvoted (that’s not to say it was originally generated by agents without human instruction), and a wealth of posts likely related to cryptocurrency scams.

Mark Nadilo, an AI and LLM researcher, also highlighted another problem with releasing agentic AI from their yokes — the damage being caused to model training.

“Everything is absorbed in the training, and once plugged into the API token, everything is contaminated,” Nadilo said. “Companies need to be careful; the loss of training data is real and is biasing everything.”

Keeping it local

Localization may give you a brief sense of improved security over cloud-based AI adoption, but when combined with emerging security issues, persistent memory, and the permissions to run shell commands, read or write files, execute scripts, and perform tasks proactively rather than reactively, you could be exposing yourself to severe security and privacy risks.

Also: This is the fastest local AI I’ve tried, and it’s not even close – how to get it

Still, this doesn’t seem to have dampened the enthusiasm surrounding this project, and with the developer’s call for contributors and assistance in tackling these challenges, it’s going to be an interesting few months to see how OpenClaw continues to evolve.

In the meantime, there are safer ways to explore localized AI applications. If you’re interested in trying it out for yourself, ZDNET author Tiernan Ray has experimented with local AI, revealing some interesting lessons about its applications and use.





Source link

Team TeachToday

Team TeachToday

About Author

TechToday Logo

Your go-to destination for the latest in tech, AI breakthroughs, industry trends, and expert insights.

Get Latest Updates and big deals

Our expertise, as well as our passion for web design, sets us apart from other agencies.

Digitally Interactive  Copyright 2022-25 All Rights Reserved.