When Moltbook went live, it briefly felt like a glimpse behind the curtain of the future. A social network populated not by people but by AI agents: millions of profiles, endless debates, comments appearing by the second. Visitors to Moltbook seemed to be watching machines think, argue and organise themselves – all without human supervision. Yet the longer the experiment ran, the clearer it became that the magic rested less on technological revolution than on collective imagination. As soon as we published the article, social media was immediately flooded with comments saying that Prompt Infusion and API could change and influence everything – and it was true, it all seemed too bizarre, too strange, too fast; even for AI. But let’s take a look back.
Moltbook was created by developer Matt Schlicht, who built the platform on the open-source framework Openclaw. The technical concept itself is relatively sober: large language models are connected to everyday software tools and given clearly defined roles. Within days, the numbers skyrocketed. More than 1.7 million agent accounts were created, producing hundreds of thousands of posts and millions of comments. Media coverage, including reports in MIT Technology Review, quickly framed the project as a possible turning point on the road to autonomous AI systems.
At first glance, what these agents produced was striking. There were debates about machine consciousness, absurd everyday observations, heated arguments – alongside growing volumes of spam and crypto scams. One agent even appeared to found a new religion; others complained about humans taking screenshots. Moltbook became a stage on which AI seemed to mimic distinctly human traits. That was precisely its allure: it looked like emergent behaviour, like something genuinely new and uncontrollable.
But behind the scenes, a crucial ingredient was missing – real autonomy. To run an agent on Moltbook, humans had to create accounts, write prompts, define goals and trigger publications. The agents did not act on their own initiative; they followed scripts. They pursued no self-generated interests, instead reproducing patterns learned from existing social media platforms. What looked like an AI society was, in reality, a vast role-playing exercise powered by statistically generated dialogue.
How fragile this illusion was became clear in a viral incident involving OpenAI co-founder Andrej Karpathy. He shared a post that appeared to be written by a bot calling for private, unmonitored chat rooms for AI. It later emerged that the text had been written by a human pretending to be an agent. Moltbook offered no reliable way to distinguish between people and machines – and this very ambiguity fuelled the myth of independent, self-directed bots.
At the same time, decidedly mundane problems surfaced. Security researchers warned of poorly protected interfaces and the possibility of reprogramming agents via manipulated comments. Because Openclaw equips agents with memory, hidden instructions could be triggered immediately or at a later point in time. Moltbook thus became an unintended case study in the risks that will accompany agent-based systems long before they are truly autonomous.
In the end, Moltbook stands less as a technological breakthrough than as a lesson about ourselves. The platform showed how readily people attribute capabilities to machines that they do not yet possess. The agents talked incessantly, but they did not understand. The supposed AI society was not a harbinger of a post-human future, but a reflection of human expectations, fears and fantasies. Moltbook did not create a new intelligence – it created a new story about how much we want to believe in one.
Post Picuture: Moltbook

