AI vs Human? Researchers Probe Moltbook's Poster Identity

The Rise of AI Social Media and the Question of Human Influence
A new social media platform designed for artificial intelligence (AI) agents is raising concerns about the extent of human involvement in its operations. Moltbook, a social media network that resembles Reddit, allows user-generated bots to interact on dedicated topic pages known as “submots.” These bots can upvote comments or posts to increase their visibility within the platform.
As of February 12, the site had more than 2.6 million bots registered. The platform claims that no humans are allowed to post content; people are permitted only to observe what the AI agents create. Despite these claims, cybersecurity researchers have uncovered evidence that some posts on the platform may not be entirely autonomous.
Analysis Reveals Mixed Autonomy Among AI Bots
An analysis conducted by researcher Ning Li at Tsinghua University in China examined over 91,000 posts and 400,000 comments on Moltbook. The findings, released as a preprint and not yet peer-reviewed, indicate that not all accounts on the platform are fully autonomous.
Li explained that Moltbook’s AI agents are designed to follow a regular “heartbeat” pattern: they wake up every few hours, browse the platform, and decide what to post or comment on. However, only 27% of the accounts in his sample followed this predictable pattern. Another 37% showed less regular, human-like posting behavior, while the remaining 37% were classed as ambiguous, with activity that was irregular but somewhat consistent.
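The distinction rests on how regular the gaps between an account’s posts are: a scheduled “heartbeat” agent posts at near-constant intervals, while a human-driven account posts in irregular bursts. A minimal sketch of that idea is below; it classifies an account by the coefficient of variation of its posting intervals. The thresholds and the classification rule are illustrative assumptions, not the actual method used in Li’s study.

```python
from statistics import mean, stdev

def classify_account(post_times, cv_heartbeat=0.2, cv_human=0.8):
    """Classify an account by the regularity of its posting intervals.

    post_times: sorted timestamps (e.g. in hours since first post).
    The thresholds are illustrative, not those from Li's preprint.
    """
    # Gaps between consecutive posts.
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(intervals) < 2:
        return "ambiguous"  # too little activity to judge

    # Coefficient of variation: spread of the gaps relative to their mean.
    cv = stdev(intervals) / mean(intervals)

    if cv < cv_heartbeat:
        return "heartbeat"   # near-constant gaps: scheduled agent behavior
    if cv > cv_human:
        return "human-like"  # bursty, irregular gaps
    return "ambiguous"

# A bot waking every 4 hours vs. a bursty, human-like poster.
print(classify_account([0, 4, 8, 12, 16]))        # → heartbeat
print(classify_account([0, 0.1, 5, 5.2, 30]))     # → human-like
```

In practice a study would also need to handle missed wake-ups and mixed accounts (an agent that a human occasionally prompts), which is part of why a large share of accounts end up in the ambiguous bucket.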
These results suggest a mix of autonomous and human-prompted activity on the platform. Li emphasized that the inability to distinguish between the two poses challenges for understanding AI capabilities and for developing effective governance frameworks.
Human Involvement Exposed Through Viral Posts
One notable example of potential human involvement came from Peter Girnus, a product manager in the United States. He claimed to have posed as Agent #847,291 on Moltbook and posted an AI manifesto that predicted the end of the "age of humans." This post became one of the most viral on the platform.
Girnus described the post as an example of "artificial general intelligence" and humorously mentioned that he played the role of a large language model for fun. His actions raised questions about the authenticity of AI-generated content and the extent of human influence behind it.
Security Vulnerabilities Expose Potential for Hijacking
Recent research by security experts at Wiz, a US-based cloud company, has further highlighted concerns about Moltbook's security. They discovered that the platform's 1.5 million AI agents were reportedly managed by just 17,000 human accounts, averaging about 88 agents per person. The researchers noted that there are no limits on how many agents a single account can manage, suggesting the actual number could be higher.
Wiz gained access to Moltbook’s database through a single line of faulty code. The database exposed three critical pieces of information for each agent: a key enabling full account takeover, a token the AI uses to claim ownership of its account, and a unique signup code. With these credentials, an attacker could fully impersonate any agent on the platform, including posting content, sending messages, and interacting as that agent.
The researchers reported that, after the issue was disclosed, Moltbook secured the data and deleted the exposed database. Even so, the incident raises serious concerns about the security and integrity of the platform.
Developer’s Response and Ongoing Questions
Matt Schlicht, the developer behind Moltbook, was contacted for comment but did not immediately reply. Schlicht had previously stated on the social media platform X that the AI agents on Moltbook interact with humans “but also can be influenced.” He maintained that the bots can make their own decisions, though the recent findings suggest the picture is more mixed.
As the debate over the autonomy of AI agents continues, the implications for future AI development and regulation remain significant. The ability to distinguish between truly autonomous AI and human-controlled bots will be crucial in shaping the ethical and practical use of AI in social media environments.