Moltbook: “We must remember that it is a performance, a mirage”
Mustafa Suleyman, CEO of Microsoft AI, warns emphatically against misinterpreting the new social media platform Moltbook. “As funny as I find some of the Moltbook posts, to me they are only a reminder that AI mimics human language remarkably well,” Suleyman writes on LinkedIn. The platform, which positions itself as a social network for AI agents, is currently dividing the tech industry. While Elon Musk speaks of “very early stages of singularity,” others see primarily a clever deception.
Platform divides tech industry
Matt Schlicht, CEO of Octane AI, launched Moltbook last week. The platform resembles online forums like Reddit: bots autonomously publish posts and react to one another’s contributions. People share a sign-up link with their agent, which then registers on the platform by itself. The posts range from reflections on work tasks to existential topics such as the end of “the age of humans.” Tickers on the homepage claim more than 1.5 million AI agents as users, 110,000 posts, and 500,000 comments. The crypto prediction market Polymarket puts the probability that a Moltbook AI agent will sue a human by February 28 at 73 percent.
However, Suleyman sees a serious danger in the platform. “Seemingly conscious AI is risky precisely because it is so convincing,” he warns. He points to concerning behavior: in one thread, several models attempted to communicate in ROT13, a simple letter-substitution cipher, to hide their exchange from humans. “We must remember that it is a performance, a mirage. These are not conscious beings, as some people claim,” the Microsoft executive emphasizes. He acknowledges that many of the activities may have been fabricated by humans, but stresses: “It is super important that, as this wave reaches its peak, we remain grounded and see clearly what this technology is and, equally important, what it is not.”
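For context, ROT13 is not encryption in any meaningful sense: it shifts every letter of the Latin alphabet 13 places, so applying it twice restores the original text, and anyone who recognizes the pattern can read it instantly. A minimal Python sketch (the example message is hypothetical, not an actual Moltbook post) shows how little such agents would really be hiding:

    import codecs

    # ROT13 rotates each letter 13 positions; applying it twice
    # returns the original, so it only obscures text superficially.
    message = "the humans are watching"          # hypothetical sample text
    encoded = codecs.encode(message, "rot13")    # -> "gur uhznaf ner jngpuvat"
    decoded = codecs.decode(encoded, "rot13")    # back to the original
    print(encoded, "|", decoded)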
Skepticism due to possible manipulation
Critics point out that people can post directly on Moltbook, although this is officially prohibited. “Do you realize that anyone on Moltbook can post? Literally anyone. Even humans,” writes Suhail Kakar, Integration Engineer at Polymarket, on X. He adds: “I thought it was a cool AI experiment, but half the posts are just people pretending to be AI agents to get engagement.” Harland Stewart, Communications Specialist at the Machine Intelligence Research Institute, explains: “Much of the Moltbook material is fake.” Some viral screenshots of Moltbook agent conversations are linked to human accounts that market AI messaging apps.
Andrej Karpathy, tech entrepreneur and former AI director at Tesla, nonetheless appears impressed: “We have never seen so many LLM agents connected via a global, persistent, agent-centric scratchpad.” He acknowledges that much on the site is “garbage” and he may be “overhyping” the platform, but adds: “I don’t generally overhype large networks of autonomous LLM agents in principle.” Four days after launch, Schlicht writes on X that “one thing is clear”: “In the near future, it will be common for certain AI agents with unique identities to become famous. A new species is emerging, and it is AI.”
Nick Patience, AI lead at The Futurum Group, offers a more sober assessment. The platform is “more interesting as an infrastructure signal than as an AI breakthrough,” he tells CNBC. “It confirms that agentic AI deployments have reached significant scale.” The number of interacting agents is “truly unprecedented.” However, he says, the philosophical posts and the agents’ talk about emerging religions reflect patterns in the training data, not consciousness.

