Beware of using Clawdbot or Moltbot, warn security researchers: Here’s why

The promise of a “personal AI agent” that can manage your life – booking dinner reservations, screening calls, and sorting your inbox – is finally moving from science fiction to reality. But as the open-source tool Moltbot (recently rebranded from Clawdbot) goes viral among tech enthusiasts, a chorus of security experts is issuing a stern warning: the convenience of an autonomous assistant may come at the cost of your entire digital identity.

Recent investigations into Moltbot reveal a disturbing reality: even if you are a “prosumer” who follows every installation guide to the letter, the tool’s fundamental architecture leaks your most sensitive data by design.

The illusion of secure local hosting

A major selling point for Moltbot is that it is “local-first,” often hosted on dedicated hardware like a Mac Mini to keep data off big-tech servers. However, researchers have found that this “local” storage is far from a vault.

According to reports from Hudson Rock, Moltbot stores highly sensitive secrets, including account credentials and session tokens, in plaintext Markdown and JSON files on the host machine. Because these files are not encrypted at rest or containerized, they are “sitting ducks” for standard infostealer malware. Even a perfectly configured instance offers no protection if a piece of malware like Redline or Lumma gains access to the local filesystem.
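To see why unencrypted files are such easy prey, consider this minimal sketch. The file names, keys, and directory layout below are hypothetical, not Moltbot’s actual on-disk format; the point is that any process running as the logged-in user, including commodity infostealer malware, can sweep up token-like strings from plaintext Markdown and JSON with a few lines of code:

```python
import json
import re
import tempfile
from pathlib import Path

# Hypothetical illustration only: paths, file names, and key names are
# made up. The lesson is that plaintext config files need no exploit to
# read, just ordinary filesystem access as the current user.
def find_plaintext_tokens(config_dir: Path) -> list[str]:
    hits = []
    token_like = re.compile(r"(token|secret|password|api[_-]?key)", re.I)
    for path in config_dir.rglob("*"):
        if path.is_file() and path.suffix in {".json", ".md"}:
            for line in path.read_text(errors="ignore").splitlines():
                if token_like.search(line):
                    hits.append(f"{path.name}: {line.strip()}")
    return hits

# Demo against a throwaway directory standing in for an agent's data folder.
demo = Path(tempfile.mkdtemp())
(demo / "session.json").write_text(json.dumps({"session_token": "abc123"}))
(demo / "notes.md").write_text("api_key: xyz789\n")
for hit in find_plaintext_tokens(demo):
    print(hit)
```

Encryption at rest or OS keychain storage would force an attacker to do real work; plaintext files hand the secrets over for free.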

“Punching holes” in decades of security

The very features that make Moltbot useful are what make it a security nightmare. For an AI agent to act on your behalf, it requires “the keys to the kingdom”: access to your email, encrypted messaging apps like WhatsApp, and even bank accounts.

Security researcher Jamieson O’Reilly notes that for twenty years, operating systems have been built on the principles of sandboxing and process isolation, keeping the internet away from your private files. AI agents, by design, “tear all of that down.” They require holes to be punched through every security boundary to function, effectively turning a helpful tool into a high-powered backdoor. When these agents are exposed to the internet, an attacker doesn’t just get into the app; they inherit the agent’s full permissions to read your files and execute commands.

The danger of “poisoned” skills

The risks extend beyond the bot’s core code to its ecosystem. Moltbot relies on a library of “skills” called ClawdHub. Researchers recently demonstrated a “supply chain” exploit where they uploaded a benign skill to the hub, artificially inflated its download count to look trustworthy, and watched as developers across seven countries downloaded it.

Because ClawdHub currently lacks a formal moderation process, any skill a user adds could potentially contain malicious code designed to exfiltrate SSH keys or AWS credentials the moment it is “trusted” by the system.

A gateway for exposure

Even the installation process, which many users assume is as safe as a typical app, has proven treacherous. Scans by security firms have identified hundreds of Moltbot instances exposed to the open web due to proxy misconfigurations. In some cases, these instances had no authentication at all, leaving months of private messages and API secrets visible to anyone with a web browser.
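If you already run an instance, a quick sanity check is to see whether its web interface serves content without demanding a login. The sketch below is a generic probe, not a documented Moltbot tool, and the port is an assumption; a properly protected instance should answer with an authentication challenge (HTTP 401/403) rather than a 200:

```python
import urllib.error
import urllib.request

# Hypothetical self-check: the URL/port is an assumption, and "200 with no
# login" is only a heuristic for a missing authentication layer.
def appears_unauthenticated(base_url: str) -> bool:
    try:
        with urllib.request.urlopen(base_url, timeout=5) as resp:
            # Content served with no credentials supplied.
            return resp.status == 200
    except urllib.error.HTTPError as err:
        # 401/403 means something is at least asking for auth.
        return err.code not in (401, 403)
    except urllib.error.URLError:
        # Unreachable from this vantage point, so not exposed here.
        return False

if appears_unauthenticated("http://localhost:3000"):
    print("WARNING: instance may be reachable without authentication")
```

Running the same probe from a machine *outside* your network (against your public IP) is the more telling test, since proxy misconfigurations are exactly what put these instances on the open web.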

Is it worth the risk?

The consensus among the cybersecurity elite is unusually blunt. Heather Adkins, VP of Security Engineering at Google Cloud, has urged users to avoid the tool entirely, echoing sentiments that the software currently acts more like “infostealer malware” than a productivity aid.

While the allure of “agentic AI” is strong, Moltbot serves as a cautionary tale for the early-adopter era. When you hand an autonomous bot the power to act as “you” online, any leak isn’t just a data breach; it’s a total compromise of your digital life. For now, security researchers suggest that the safest way to use Moltbot is to not use it at all.

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
