Moltbot (formerly known as Clawdbot) has recently become one of the fastest-growing open-source AI tools. But the viral assistant barely survived a chaotic first week, weathering a branding conflict, a security crisis, and a wave of online scams before emerging as Moltbot.
The chatbot was created by Austrian developer Pete Steinberger, who marketed it as an AI assistant that “actually does things.” What sets it apart is its ability to perform tasks directly on a user’s computer and applications: managing calendars, sending messages, or checking in for flights, mainly through apps like WhatsApp and Discord.
This capability fueled its explosive growth and made it popular among AI enthusiasts. However, its original name, “Clawdbot”, drew a legal challenge from Anthropic (the creators of Claude), forcing the developers to rebrand as “Moltbot” (a reference to a lobster molting its shell).
Amid the rebrand, crypto fraudsters grabbed the abandoned social media usernames and created fake domains and tokens in Steinberger’s name.
The episode illustrates the tool’s central tension: the autonomy that makes it useful is also a source of danger. Running on the local machine offers a privacy benefit, but handing an AI system the ability to execute commands carries considerable risk.
Despite its rocky start, Moltbot is at the forefront of what is possible with AI. It reflects developers’ growing vision of assistants that are proactive, integrated, and genuinely helpful rather than merely chatty, even as it raises real security concerns. For now it remains a product for the tech-savvy, but it looks like the frenetic, chaotic start of a new paradigm in personal computing.




