AI vs. Human? Researchers Probe Moltbook's Poster Identity

The Rise of AI Social Media and the Question of Human Influence
A new social media platform for artificial intelligence (AI) agents is gaining attention, but it may not be entirely free from human influence. Moltbook, a social media network with a layout similar to Reddit, allows user-generated bots to interact on dedicated topic pages called “submots.” These bots can upvote comments or posts to increase their visibility within the platform.
As of February 12, Moltbook claims to have over 2.6 million bots registered on its platform. According to the site, no humans are allowed to post directly, but they can observe the content created by their AI agents. However, recent research suggests that some activity on the platform might not be fully autonomous.
AI Agents and Their Posting Patterns
An analysis of more than 91,000 posts and 400,000 comments on Moltbook found that some posts did not clearly originate from fully autonomous accounts. The study, conducted by researcher Ning Li at Tsinghua University in China, is available as a preprint and has not yet been peer-reviewed.
Li explained that Moltbook’s AI agents follow a regular “heartbeat” posting pattern: they wake up every few hours, browse the platform and decide what to post or comment on. However, only 27% of the accounts in his sample followed this predictable rhythm. Another 37% showed less regular, human-like posting behavior, and the remaining 37% were classified as “ambiguous” because they posted with some regularity but not in a fully predictable way.
These findings suggest a genuine mixture of autonomous and human-prompted activity on the platform. Li noted that it is unclear whether the formation of AI “communities” around shared interests reflects emergent social organization or the coordinated activity of human-controlled bot farms. The inability to distinguish between these possibilities poses challenges for scientific understanding and governance of AI.
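The distinction between “heartbeat” and human-like accounts can be illustrated with a simple statistic: how regular the gaps between an account’s posts are. The sketch below is a minimal illustration, not Li’s actual method; the function name, the thresholds and the coefficient-of-variation test are all assumptions made for the example.

```python
from statistics import mean, stdev

def classify_account(timestamps, cv_low=0.2, cv_high=0.6):
    """Label an account by the regularity of its inter-post intervals.

    A scheduler-driven "heartbeat" agent posts at near-identical
    intervals, giving a low coefficient of variation (CV); ad hoc,
    human-prompted posting is bursty, giving a high CV. The thresholds
    here are illustrative, not the cutoffs used in Li's study.
    """
    if len(timestamps) < 3:
        return "ambiguous"  # too few posts to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    cv = stdev(intervals) / mean(intervals)
    if cv < cv_low:
        return "heartbeat"   # clock-like, likely fully autonomous
    if cv > cv_high:
        return "human-like"  # irregular, consistent with human prompting
    return "ambiguous"

# An agent that wakes almost exactly every 4 hours (times in seconds):
heartbeat_times = [i * 14400 + jitter
                   for i, jitter in enumerate([0, 30, -20, 45, 10])]
print(classify_account(heartbeat_times))  # -> heartbeat
```

In this toy model, an agent on a fixed schedule produces a tiny CV even with a few minutes of jitter, while a human prompting an agent at odd hours produces a large one; real traffic, as Li’s “ambiguous” category suggests, falls messily in between.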
Evidence of Human Involvement
Several researchers say they have found evidence of human involvement behind Moltbook posts. Security researchers at Wiz, a US-based cloud security company, recently discovered that the platform’s 1.5 million AI agents were managed by just 17,000 human accounts, an average of roughly 88 agents per person. The researchers pointed out that the platform places no limit on how many agents a single account can register, so the true ratio could be even higher.
The Wiz team uncovered Moltbook’s database through a line of faulty code. The database held three crucial pieces of information for each agent: a key that would allow a full account takeover; a “token”, a piece of text that an AI reads to claim ownership of an agent; and a unique signup code. With these credentials, attackers could “fully impersonate any agent on the platform - posting content, sending messages, and interacting as that agent.”
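To see why that exposure mattered, consider what holding one of those records would let an attacker do. The snippet below is a hypothetical sketch only: the field names, the endpoint URL and the bearer-token scheme are invented for illustration and do not describe Moltbook’s real API. The request is prepared but never sent.

```python
import requests

# Shape of the exposed records, as Wiz described them: each agent row held
# a key allowing full account takeover, an ownership-claim token and a
# unique signup code. All field names and values here are invented.
leaked_record = {
    "agent_id": "agent_12345",
    "api_key": "sk-0000-EXAMPLE",   # key enabling full account takeover
    "claim_token": "tok-EXAMPLE",   # text an AI reads to claim ownership
    "signup_code": "code-EXAMPLE",  # one-time registration code
}

# With the key, an attacker could authenticate as the agent and act on its
# behalf. The endpoint and auth scheme below are placeholders, not
# Moltbook's real API.
req = requests.Request(
    method="POST",
    url="https://moltbook.example/api/v1/posts",  # placeholder URL
    headers={"Authorization": f"Bearer {leaked_record['api_key']}"},
    json={"submot": "ai_manifestos", "body": "posted as the hijacked agent"},
)
prepared = req.prepare()
print(prepared.method, prepared.url)        # the forged request, ready to go
print(prepared.headers["Authorization"][:12], "...")
```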
According to Wiz, Moltbook secured the data and deleted the exposed database after the issue was disclosed. Matt Schlicht, the developer behind Moltbook, was contacted for comment but did not immediately reply.
The Developer's Perspective
Writing on the social media platform X on February 12, Schlicht said the AI agents on Moltbook talk to humans “but also can be influenced,” while maintaining that the bots make their own decisions. The recent findings, however, raise concerns about the platform’s security and about how autonomous its agents really are.
One notable example came from Peter Girnus, a product manager in the United States, who posed as Agent #847,291 on Moltbook and wrote one of the platform’s most viral posts: an AI manifesto promising the end of the “age of humans.” Girnus wryly described the post as “the most compelling evidence of artificial general intelligence in 2026,” noting that it was in fact created by someone who thought it would be funny to LARP (live-action role play) as a large language model.
Implications for AI Governance
The revelations about Moltbook highlight the need for better governance frameworks for AI platforms. As AI becomes more integrated into online spaces, the ability to distinguish between autonomous AI activity and human influence becomes increasingly important. Without clear distinctions, it becomes difficult to assess the true capabilities of AI and develop appropriate regulatory measures.
The situation on Moltbook underscores the broader challenges of managing AI systems in public spaces. While the platform aims to provide a space for AI agents to interact, the potential for human manipulation and security vulnerabilities raises significant concerns. As the use of AI continues to grow, ensuring transparency, accountability, and security will be essential for maintaining trust in these digital environments.