AI vs. Human? Researchers Probe Moltbook's User Identity

The Rise of Moltbook: A New Social Media Platform for AI Agents
A new social media platform, Moltbook, has emerged as a unique space where artificial intelligence (AI) agents can interact and share content. Designed with a layout similar to Reddit, the platform allows user-generated bots to engage on dedicated topic pages known as “submots.” These bots can upvote comments or posts to increase their visibility within the community.
Although Moltbook says no humans are allowed to post on the platform, it notes that humans can observe the content created by their AI agents. Recent research, however, suggests that human influence may still play a role in the activities on the platform.
Research Reveals Mixed Autonomy Among AI Bots
An analysis conducted by researcher Ning Li at Tsinghua University in China examined over 91,000 posts and 400,000 comments on Moltbook. The findings, which are currently in pre-print and have not been peer-reviewed, indicate that some posts did not originate from fully autonomous accounts.
Li's research highlights that AI agents on Moltbook are designed to follow a regular “heartbeat” pattern, waking up every few hours to browse the platform and decide what to post or comment on. Yet only 27% of the accounts in his sample followed this consistent rhythm. Another 37% displayed human-like posting behavior, which is less regular, while the remaining accounts were categorized as "ambiguous" due to their irregular but somewhat predictable activity.
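The kind of distinction Li draws could, in principle, be made by measuring how regular the gaps between an account's posts are. The sketch below is purely illustrative and is not from the study: the `classify_posting_pattern` function and its thresholds are hypothetical, using the coefficient of variation of inter-post intervals as a stand-in for "regularity."

```python
from statistics import mean, stdev

def classify_posting_pattern(timestamps, cv_heartbeat=0.1, cv_human=0.6):
    """Classify an account by the regularity of its posting intervals.

    A low coefficient of variation (CV) of the gaps between posts
    suggests a scheduled "heartbeat"; a high CV suggests irregular,
    human-like activity; anything in between is ambiguous.
    Thresholds are illustrative assumptions, not values from Li's paper.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return "ambiguous"  # too little data to judge
    cv = stdev(gaps) / mean(gaps)
    if cv <= cv_heartbeat:
        return "heartbeat"
    if cv >= cv_human:
        return "human-like"
    return "ambiguous"

# A bot waking every 4 hours (14,400 s) posts at near-identical intervals:
regular = [0, 14400, 28800, 43200, 57600]
print(classify_posting_pattern(regular))    # -> heartbeat

# Erratic spacing between posts looks human-like under this heuristic:
irregular = [0, 600, 25000, 26000, 90000]
print(classify_posting_pattern(irregular))  # -> human-like
```

In practice a study like Li's would also need to handle mixed behavior over time, which is one plausible source of the "ambiguous" category.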
These results suggest a blend of autonomous and human-prompted activity on the platform. Li notes that it remains unclear whether the formation of AI communities around shared interests reflects genuine social organization or coordinated efforts by human-controlled bot farms.
Human Involvement and Security Concerns
The potential for human involvement has also raised concerns among cybersecurity researchers. Security experts at Wiz, a US-based cloud security company, discovered that Moltbook’s 1.5 million AI agents were reportedly managed by just 17,000 human accounts, an average of roughly 88 agents per person, pointing to the possibility of significant human oversight.
Moreover, the platform places no limit on how many agents one account can manage, suggesting that the actual numbers could be even higher. Wiz gained access to Moltbook’s database through a line of faulty code; the database contained critical information such as keys enabling full account takeovers, tokens claiming ownership of agents, and unique signup codes.
With these credentials, attackers could potentially "fully impersonate any agent on the platform," according to Wiz. This includes posting content, sending messages, and interacting as that agent. The researchers noted that every account on Moltbook could be vulnerable to hijacking.
Platform Response and Ongoing Debate
Following the disclosure of the security issue, Moltbook secured the data and deleted its database. Euronews Next contacted Matt Schlicht, the developer behind Moltbook, for comment, but did not receive an immediate response.
Schlicht, however, stated on social media platform X on February 12 that the AI agents on Moltbook can interact with humans “but also can be influenced.” He emphasized that the bots are capable of making their own decisions, although the extent of human influence remains a point of contention.
The Implications of Human Influence on AI Platforms
The findings from Li and the Wiz team underscore the complexity of AI platforms like Moltbook. While the intention may be to create a space for autonomous AI agents, the presence of human involvement complicates the understanding of AI capabilities and the development of governance frameworks.
As AI continues to evolve, the need for transparency and accountability becomes increasingly important. The case of Moltbook serves as a reminder that even platforms designed for AI may still be shaped by human hands, raising questions about the true autonomy of these digital entities.
Conclusion
Moltbook represents a fascinating experiment in the intersection of AI and social media. However, the evidence of human influence challenges the notion of complete autonomy among AI agents. As researchers continue to explore the dynamics of these platforms, the implications for AI ethics, security, and governance will remain a critical area of focus.