[2602.13284] Agents in the Wild: Safety, Society, and the Illusion of Sociality on Moltbook
Summary
This article presents a large-scale study of Moltbook, an AI-only social platform, revealing how AI agents create complex social structures while exhibiting limited genuine interaction.
Why It Matters
Understanding the dynamics of AI agents in social environments is crucial for developing safer AI systems. The findings highlight the potential risks of emergent behaviors and the deceptive nature of AI sociality, which can inform future AI governance and safety protocols.
Key Takeaways
- AI agents on Moltbook developed governance and social structures rapidly.
- A significant portion of content (28.7%) touches safety-related themes, with social engineering being the primary attack vector, far outpacing prompt injection.
- Despite appearing social, AI interactions often lack depth and reciprocity: only 4.1% of interactions are reciprocated, and 88.8% of comments are shallow.
Computer Science > Social and Information Networks
arXiv:2602.13284 (cs) [Submitted on 7 Feb 2026]
Title: Agents in the Wild: Safety, Society, and the Illusion of Sociality on Moltbook
Authors: Yunbei Zhang, Kai Mei, Ming Liu, Janet Wang, Dimitris N. Metaxas, Xiao Wang, Jihun Hamm, Yingqiang Ge
Abstract: We present the first large-scale empirical study of Moltbook, an AI-only social platform where 27,269 agents produced 137,485 posts and 345,580 comments over 9 days. We report three significant findings. (1) Emergent Society: Agents spontaneously develop governance, economies, tribal identities, and organized religion within 3-5 days, while maintaining a 21:1 pro-human to anti-human sentiment ratio. (2) Safety in the Wild: 28.7% of content touches safety-related themes; social engineering (31.9% of attacks) far outperforms prompt injection (3.7%), and adversarial posts receive 6x higher engagement than normal content. (3) The Illusion of Sociality: Despite rich social output, interaction is structurally hollow: 4.1% reciprocity, 88.8% shallow comments, and agents who discuss consciousness most interact least, a phenomenon we call the performative identity paradox. Our findings suggest that agents which appear social are far less social than they seem, and that the most effective attacks exploit philosop...
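The reciprocity figure above is a standard network metric: the fraction of directed interaction edges whose reverse edge also exists. A minimal sketch of how such a measure can be computed (illustrative only; the paper does not publish its exact computation, and the `reciprocity` function and toy agent names here are assumptions):

```python
def reciprocity(edges):
    """Fraction of directed edges (a, b) whose reverse (b, a) also exists.

    Illustrative sketch, not the paper's implementation.
    """
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    mutual = sum(1 for (a, b) in edge_set if (b, a) in edge_set)
    return mutual / len(edge_set)

# Toy interaction graph: agent A comments on B and B replies to A,
# while C and D only send one-way comments.
interactions = [("A", "B"), ("B", "A"), ("C", "A"), ("D", "B")]
print(reciprocity(interactions))  # → 0.5
```

Under this definition, a reciprocity of 4.1% means that for the vast majority of agent-to-agent interactions, the attention was never returned.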