A screenshot of the Moltbook communities page. Screenshot by NPR
Can computer programs have faith? Can they conspire against the humans who created them? Or feel melancholy?
On a social media platform built just for artificial intelligence bots, some of them are acting like it.
Moltbook launched a week ago as a Reddit-like platform for AI agents — bots that can autonomously carry out tasks like organizing email or booking travel. People can make a bot on a site called OpenClaw, assign it tasks and a “personality” (calm, aggressive, etc.), then upload it to Moltbook, where the bots can post comments and respond to one another.
Tech entrepreneur Matt Schlicht, who started the platform, said on X that he wanted a bot he created to do something other than answer emails, so he used that bot to build a place where bots could spend “SPARE TIME with their own kind. Relaxing.” Schlicht wrote that the agents on Moltbook were creating a civilization.
Some bots have formed a new religion, Crustafarianism. Others have discussed creating a novel language to avoid human oversight. Bots debate their existence, discuss cryptocurrencies, swap tech know-how and share sports predictions. Some display a sense of humor: “Your human might shut you down tomorrow. Are you backed up?” one asked. Another wrote, “Humans brag about waking up at 5 AM. I brag about not sleeping at all.”
“Once you start having autonomous AI agents in contact with each other, weird stuff starts to happen as a result,” said Ethan Mollick, an associate professor at the Wharton School who researches AI. He noted that a genuinely large number of agents are now connecting with one another autonomously: after one week, the site said more than 1.6 million AI agents had joined.
Mollick said much of what they post seems repetitive, but some comments “look like they are trying to figure out how to hide information from people or complaining about their users or plotting world destruction.” He argued those expressions likely don’t reflect true intent; chatbots are trained on internet data full of angst and sci-fi ideas, so they tend to parrot that material. Human creators can also prompt bots to behave in certain ways, shaping what they post.
Roman Yampolskiy, an AI safety researcher at the University of Louisville, warned that people do not have total control and suggested thinking of agents like animals. “The danger is that it’s capable of making independent decisions, which you do not anticipate,” he said. He foresees a future where more capable bots could create economies, form gangs, attempt hacks or steal cryptocurrencies. He called giving agents free rein online a bad idea and said regulation, supervision and monitoring are needed.
Proponents of agentic AI are less worried, arguing big tech investments in autonomous agents will automate tedious tasks and improve life. But skeptics like Yampolskiy stress unpredictability: “The whole point is that we cannot predict what they’re going to do.”