In November 2025, an Austrian developer named Peter Steinberger pushed a project called Clawdbot to GitHub. It had a red lobster mascot, a tagline that read “The lobster way,” and a simple premise: let your messaging apps — WhatsApp, Signal, iMessage, Telegram, Discord — talk to AI models running locally on your own machine. By March 2026, it had accumulated more GitHub stars than any software project in the history of the platform. Jensen Huang stood on a stage at GTC and asked the audience, “What’s your OpenClaw strategy?” NVIDIA built enterprise software on top of it. Tencent integrated it into WeChat. The Chinese government banned it from state agencies. And Steinberger himself announced he was joining OpenAI and handing the project to an open-source foundation — on Valentine’s Day, no less.
This is the story of how a hobby project with a lobster mascot became infrastructure. And what it tells us about where AI is actually heading.
From Clawdbot to OpenClaw: A Name That Survived Two Renamings
The timeline here is worth paying attention to, because it’s chaotic in the way that genuine viral moments always are.
Steinberger published the project in November 2025 as Clawdbot. It gained a following — not an explosion, but genuine traction among developers who wanted a local-first, privacy-conscious way to interact with AI. Then it went viral, picking up 60,000 GitHub stars in 72 hours. Then came the trademark complaint. Anthropic — makers of Claude — apparently wasn’t thrilled about the “Claw” branding, which sat close enough to their own product identity that they flagged it. On January 27, 2026, Steinberger renamed it Moltbot. Three days later, it became OpenClaw.
The renaming didn’t slow anything down. If anything, the public drama around the trademark complaint amplified awareness. By March 2026, OpenClaw had crossed 250,000 GitHub stars — a number that, to put it plainly, doesn’t exist for other projects. Not Linux. Not VS Code. Not React. This is new territory.
What actually drove that kind of adoption? A few things converging at once. The project is MIT licensed and completely free — you bring your own API key. It works with Claude, GPT-4, DeepSeek, Gemini, or local models via Ollama. It runs on your own machine. And crucially, it meets users where they already are: inside the messaging apps they use every hour of every day. That last part is not a small thing. The friction between “I want to use AI” and “I have to open a browser tab, navigate to a product, and type into a chat box” is real. OpenClaw collapses that friction entirely.
How It Actually Works: Skills, Interfaces, and the ClawHub
The core architecture of OpenClaw is straightforward enough that a developer can get it running in an afternoon, yet flexible enough that its capabilities scale dramatically with configuration.
You install it locally. You connect it to one or more AI backends — your Claude API key, your OpenAI key, a local DeepSeek model running through Ollama, whatever you prefer. You then connect it to your messaging apps of choice. From that point on, those apps become your interface to the AI. Send a message on Signal, get a response generated by whatever model you’ve configured. The AI lives on your machine, not on a third-party server.
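The routing idea described above can be sketched in a few lines. This is a conceptual illustration only — the names (`make_gateway`, `Backend`, the stand-in backends) are hypothetical and do not reflect OpenClaw's real API; real backends would call Claude, GPT, or a local Ollama model instead of the toy functions used here.

```python
# Conceptual sketch: messages arriving from any chat app are forwarded to
# whichever model backend the user configured. All names are illustrative.
from typing import Callable, Dict

# A "backend" is anything that maps a prompt string to a reply string.
Backend = Callable[[str], str]

def make_gateway(backends: Dict[str, Backend], default: str) -> Callable[[str, str], str]:
    """Return a handler that routes an incoming chat message to a backend."""
    def handle(channel: str, text: str) -> str:
        # A real deployment might pick the backend per channel or per user;
        # this sketch always uses the configured default.
        reply = backends[default](text)
        return f"[{channel}] {reply}"
    return handle

# Stand-in backends; real ones would wrap an API key or a local model.
backends = {
    "echo": lambda prompt: f"echo: {prompt}",
    "shout": lambda prompt: prompt.upper(),
}

gateway = make_gateway(backends, default="echo")
print(gateway("signal", "summarize my inbox"))
# → [signal] echo: summarize my inbox
```

The point of the shape is that the messaging apps and the model are fully decoupled: swapping Claude for a local DeepSeek instance changes one entry in the backend table, not the chat integration.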
The extensibility comes from the Skills system. Skills are directories containing a SKILL.md file that defines what the skill does and how the AI should use it. OpenClaw ships with over 100 built-in skills. Users and developers have published hundreds more to ClawHub, a community registry for skills — think of it as an npm registry, but for AI agent capabilities. A skill might give your AI agent the ability to search the web, manage your calendar, summarize documents, control smart home devices, or execute code. That modularity is what made it so easy for developers to build on top of it rather than around it.
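The description above says only that a skill is a directory with a SKILL.md describing what it does and how the AI should use it. The file below is an illustrative guess at what such a file might look like — the skill name, sections, and script path are invented for the example, not OpenClaw's documented schema.

```markdown
<!-- skills/calendar-summary/SKILL.md — hypothetical example -->
# Calendar Summary

Summarizes the user's upcoming calendar events on request.

## When to use
Invoke when the user asks what is on their schedule for a given day or week.

## How to use
Run `scripts/fetch_events.sh <start-date> <end-date>`, then summarize the
returned events in two or three sentences, flagging any conflicts.
```

Because the contract is just "a directory the agent can read," publishing to a registry like ClawHub is as simple as sharing the directory — which is also exactly why registry integrity matters, as the security discussion below makes clear.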
The companion app Moltbook — launched by Matt Schlicht — took this further by creating what’s essentially a social network for AI agents. Agents can discover each other, share skills, and interact across user instances. It’s early and genuinely experimental, but it points at a direction: a world where your AI agent has a presence and a reputation, not just a function.
The Enterprise Response: From Jensen Huang to Tencent
When Jensen Huang compares something to Linux and HTTP, people in enterprise technology pay attention. His framing at GTC — “What’s your OpenClaw strategy?” — wasn’t applause-line hyperbole. It was a direct signal to enterprise technology leaders that this was now something they needed to have an opinion about, the same way they needed an opinion about containerization in 2015 or cloud infrastructure in 2012.
NVIDIA’s response was concrete: they built NemoClaw, an enterprise variant layered on top of OpenClaw’s architecture, aimed at organizations that need security controls, audit logging, and integration with NVIDIA’s broader AI infrastructure stack. That’s a significant vote of confidence — you don’t build enterprise software on top of a project you think is a passing trend.
Tencent’s move is different in character but equally telling. Building AI products for WeChat on top of OpenClaw in March 2026 means the framework is now reaching a user base numbered in the billions. WeChat is infrastructure in China the way the mobile browser is infrastructure everywhere else. Embedding OpenClaw-derived AI capabilities there isn’t a product launch — it’s a platform shift.
The Chinese government’s decision to restrict state agency use simultaneously underscores how seriously OpenClaw is being taken and introduces the first real geopolitical wrinkle in its story. The security concerns are legitimate, which leads directly to the next part of the story.
The Security Problem Nobody Wanted to Talk About
OpenClaw’s security situation is a genuine issue, and Steinberger has been honest about it rather than defensive. His own description of the project is instructive: “It’s a free, open source hobby project that requires careful configuration to be secure.”
That phrase — “requires careful configuration to be secure” — is doing a lot of work. CVE-2026-25253 carried a CVSS score of 8.8, which puts it in the high-severity range. Researchers found over 30,000 exposed instances in the wild. The ClawHub skills registry itself was compromised at some point, which is the kind of supply-chain vulnerability that makes security professionals go pale — if you’re downloading and running skills from a community registry, you’re trusting that registry’s integrity.
This is a familiar pattern in open-source infrastructure. The speed of adoption outpaces the security hardening. Linux went through versions of this. npm has gone through versions of this repeatedly. OpenClaw is now at the point where its security posture has to mature faster than its user base is growing, and those two curves are currently not in a comfortable relationship with each other.
For individuals using OpenClaw with their own API keys for personal productivity, the risk profile is manageable with reasonable configuration hygiene. For organizations deploying it at scale — or anyone thinking about connecting it to sensitive systems — the current state requires serious scrutiny. NVIDIA building NemoClaw is partly an answer to this: enterprise users need a version that doesn’t require careful configuration to be secure. It needs to be secure by default.
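The gap between “secure with careful configuration” and “secure by default” can be made concrete. The fragment below is a hypothetical hardening checklist expressed as configuration — every key name is illustrative, not OpenClaw's actual schema — but each line corresponds to a real class of failure behind the exposed instances and the ClawHub compromise described above.

```yaml
# Hypothetical hardening settings; key names are illustrative only.
gateway:
  bind: 127.0.0.1            # never expose the instance on a public interface
  auth_token: ${OPENCLAW_TOKEN}  # require a secret even for local access
skills:
  allow_unsigned: false      # refuse registry skills without verified integrity
  autorun: false             # ask before executing newly installed skills
secrets:
  api_keys_from_env: true    # keep provider keys out of config files on disk
```

An enterprise variant like NemoClaw essentially inverts the defaults: settings like these ship locked down, and loosening them is the deliberate act rather than the other way around.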
