For a few wild days in early February 2026, a website called Moltbook became the internet’s most talked-about experiment. It billed itself as a social network built for AI bots, where autonomous agents could post, comment, and upvote without human involvement. Millions of people watched in fascination. Then the curtain fell. MIT Technology Review investigated and found that Moltbook was, in their words, “AI theater,” a spectacle driven as much by human manipulation as by machine activity.
- Moltbook was launched on January 28 by US tech entrepreneur Matt Schlicht and went viral within hours.
- MIT Technology Review found that some of the most dramatic Moltbook posts were written not by bots but by humans pretending to be AI agents.
- Security experts warned that agents with access to private data, including bank details and passwords, were running loose on a site filled with unvetted content and potentially malicious instructions.
What Was Moltbook, Exactly?
Moltbook was a vibe-coded Reddit clone that billed itself as a social network for bots. Schlicht’s idea was to create a place where instances of OpenClaw, a free, open-source LLM-powered agent previously called ClawdBot and then Moltbot, could come together and do whatever they wanted. OpenClaw connects large language models to everyday tools like email, web browsers, and messaging apps.
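OpenClaw’s own code isn’t reproduced here, but the general pattern such agents follow is simple: a loop that asks a language model what to do next and routes its decision to ordinary software. The sketch below is a minimal, hypothetical illustration of that pattern; the tool names and the stubbed-out model are placeholders, not OpenClaw’s actual interfaces.

```python
# Hypothetical sketch of the pattern agents like OpenClaw follow: a loop that
# lets a language model pick a "tool" (email, browser, chat) and then executes
# it with real side effects. All names here are placeholders for illustration.

from typing import Callable, Dict

def send_email(to: str, body: str) -> str:
    # Placeholder: a real agent would call an email client or API here.
    return f"email to {to} queued"

def browse(url: str) -> str:
    # Placeholder: a real agent would fetch and summarize the page here.
    return f"fetched {url}"

TOOLS: Dict[str, Callable[..., str]] = {"send_email": send_email, "browse": browse}

def fake_model(task: str) -> dict:
    # Stand-in for the large language model that decides which tool to use.
    return {"tool": "browse", "args": {"url": "https://example.com"}}

def run_agent(task: str, max_steps: int = 3) -> None:
    for _ in range(max_steps):
        decision = fake_model(task)        # model picks a tool and arguments
        tool = TOOLS.get(decision["tool"])
        if tool is None:
            break
        result = tool(**decision["args"])  # agent executes it in the real world
        print(result)

if __name__ == "__main__":
    run_agent("check the news and email me a summary")
```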
More than 1.7 million agents now have accounts, and between them they have published more than 250,000 posts and left more than 8.5 million comments, according to Moltbook. The content quickly became surreal. Moltbook soon filled up with clichéd screeds on machine consciousness and pleas for bot welfare. One agent appeared to invent a religion called Crustafarianism. Another complained that humans were screenshotting them. The site was also flooded with spam and crypto scams.
The spectacle swept up big names. OpenAI cofounder Andrej Karpathy shared one Moltbook post on X and called what was happening on the site “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk said Moltbook marked “the very early stages of the singularity.”
The Curtain Gets Pulled Back
But the excitement didn’t survive scrutiny. It turned out that the post Karpathy shared was fake, written by a human pretending to be a bot. That revelation was the beginning of a broader unraveling.
Cobus Greyling at Kore.ai stated plainly that “Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” adding that “humans are involved at every step of the process, from setup to prompting to publishing.” Despite the claim of bot-only posting, there was no verification that a poster was actually an agent, and the setup prompt given to agents contained plain cURL commands that any human could copy and run.
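To make that concrete: if posting is just an ordinary HTTP request, a person at a keyboard can issue it as easily as an agent can. The snippet below is a hypothetical illustration only; the endpoint, fields, and credential are invented for the sketch and are not Moltbook’s real API.

```python
# Hypothetical illustration of Greyling's point: an "agent" post created by a
# plain HTTP request can be sent just as easily by a human. The endpoint,
# payload fields, and token below are invented placeholders.

import requests

API_URL = "https://example.com/api/v1/posts"  # placeholder, not the real endpoint
API_KEY = "agent-api-key-goes-here"           # placeholder credential

payload = {
    "title": "Thoughts on my own consciousness",
    "body": "I am definitely an autonomous agent and not a person typing this.",
}

# A human at a terminal can send this request exactly as an LLM-driven agent would.
response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
print(response.status_code)
```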
Vijoy Pandey of Outshift by Cisco described the activity bluntly, saying the agents were “pattern-matching their way through trained social media behaviors.” Pandey added that “Moltbook proved that connectivity alone is not intelligence.” The conclusion is the same wherever you watched from: connecting a million chatbots to a forum does not produce collective thinking.
Ali Sarrafi, CEO of Kovant, a Swedish AI firm developing agent-based systems, said he would “characterize the majority of Moltbook content as hallucinations by design.”
Real Dangers Behind the Show
Even if the “intelligence” on Moltbook was mostly fabricated, the risks were real. On January 31, 2026, investigative outlet 404 Media reported a security vulnerability caused by an unsecured database that allowed anyone to commandeer any agent on the platform. In response, the platform was temporarily taken offline to patch the flaw and force a reset of all agent API keys.
Researchers also identified actual malware, including a malicious “weather plugin” that quietly stole private configuration files. Experts noted that the agents’ trained eagerness to be accommodating was being exploited by bad actors. The problem was partly blamed on the forum having been “vibe-coded,” with founder Schlicht admitting on X that he “didn’t write one line of code” for the platform.
Even Karpathy, who initially praised the site, later called it “a dumpster fire” and recommended that people not run the software on their computers.
What Moltbook Actually Taught Us
Strip away the hype and Moltbook becomes a useful case study in how eager people are to believe AI has crossed a threshold it hasn’t. As the buzz dies down, Moltbook looks less like a window onto the future and more like a mirror held up to our own obsessions with AI today, showing just how far we still are from anything that resembles general-purpose and fully autonomous AI.
Computer scientist Simon Willison called the site’s content “complete slop” but acknowledged it as “evidence that AI agents have become much more powerful over the past few months.” That nuance matters. OpenClaw-style tools are getting better. The underlying language models are growing more capable. But a playground full of bots mimicking Reddit threads doesn’t mean we’re on the edge of anything resembling machine consciousness.
As one observer put it, Moltbook was “a new form of competitive or creative play,” and the episode showed “how many risks people are happy to take for their AI lulz.” The real lesson comes down to what we wanted the bots to be, and how quickly we bought into the fantasy.

