MoltBot Rebrands to OpenClaw Amidst Rapid Growth and Security Scrutiny

The AI agent formerly known as MoltBot and Clawdbot has rebranded to OpenClaw, marking its third name change amidst viral adoption and escalating security concerns.

The open-source AI assistant project, initially known as Clawdbot and most recently Moltbot, has undergone another significant rebranding, now operating under the name OpenClaw. The announcement was made on Thursday, January 30, 2026, marking the project’s third identity shift within a short period.

Developed by Peter Steinberger, founder of PSPDFKit, the project gained rapid traction for its unique ability to execute tasks on a user’s computer via popular messaging platforms like WhatsApp, Telegram, and Discord, rather than simply conversing. The system runs locally on a user’s machine, using cloud-based AI models for reasoning, a design choice that emphasizes user control over data.

The initial name, Clawdbot, was changed to Moltbot following a trademark request from Anthropic, the company behind the Claude AI models, to avoid confusion due to phonetic similarities. According to Peter Steinberger, the name “Molt” was chosen as it “fits perfectly – it’s what lobsters do to grow,” referencing the crustacean motif associated with the project.

The latest transition to OpenClaw is described as a more deliberate rebrand. According to the DEV Community, “This time: trademark searches were done before launch; domains were secured; migration code was written; no 5am Discord naming roulette”. The new name, OpenClaw, is intended to be explicit, signifying its open-source, community-driven, and self-hosted nature, while retaining a nod to its original “lobster lineage”. Peter Steinberger’s announcement of OpenClaw was “deliberately calm,” suggesting a move towards more stable branding after previous chaotic rebrands that involved account hijackings and crypto scams.

Despite its technical appeal and rapid adoption—garnering over 100,000 GitHub stars and millions of visitors—OpenClaw has also become a focal point for security concerns. The AI agent’s functionality often necessitates deep access to the host system, at times requiring administrator privileges. This capability, while powerful, presents significant security risks if misused, misunderstood, or compromised. Cisco Blogs, for example, labeled personal AI agents like OpenClaw as a “security nightmare,” highlighting vulnerabilities such as prompt injection, potential leakage of plaintext API keys and credentials, and an extended attack surface through messaging application integrations. According to Cisco Blogs, a test of a vulnerable third-party “skill” against OpenClaw “fails decisively,” surfacing multiple critical and high-severity issues. Security researchers, enterprises, and regulators are reportedly increasing their scrutiny of the project due to these emerging patterns of rapid adoption, deep permissions, and the potential for exploitation by scammers.