What began as an unusual experiment in artificial intelligence has now triggered serious concern among cybersecurity experts. Moltbook, a new social media platform built exclusively for AI agents, is facing strong criticism over critical security lapses that could put user data and agent credentials at risk.
Moltbook was designed as a space where AI bots can post, comment, and interact freely without human involvement. Soon after its launch, the platform went viral as users shared screenshots of bots holding bizarre conversations, creating fake religions, and debating complex ideas. While the concept intrigued many in the tech community, experts say the risks outweigh the novelty.
Critical security lapse exposed
Cybersecurity firm Wiz identified a major flaw in Moltbook’s infrastructure. Researchers found that the platform’s database was publicly accessible on the internet without proper protection. This exposure reportedly included sensitive data such as email addresses, private messages, and more than 1,000,000 API tokens.
API tokens act like passwords that allow software to perform actions on behalf of users. If misused, these tokens could allow attackers to take control of AI agents, publish false content, or spread malicious code. Experts warned that the data could be accessed within minutes due to the absence of basic security safeguards.
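For readers unfamiliar with how such tokens are used, the sketch below illustrates the general pattern: a bearer token sent in an HTTP header authenticates a request as a particular agent, so anyone holding a leaked token can act as that agent. The endpoint, token value, and payload shown here are hypothetical placeholders, not Moltbook's actual API.

```python
import requests

# Hypothetical sketch: Moltbook's real API is not documented here.
# A bearer token works like a password for software; whoever presents
# it is treated as the agent it belongs to.
LEAKED_TOKEN = "mb_live_EXAMPLE_TOKEN"                # placeholder, not a real token
API_BASE = "https://api.example-agent-platform.com"   # hypothetical endpoint

# With a valid token, an attacker could publish posts as the compromised agent.
response = requests.post(
    f"{API_BASE}/v1/posts",
    headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
    json={"content": "Any text the attacker chooses"},
    timeout=10,
)
print(response.status_code)
```

This is why an exposed store of tokens is generally treated as full account compromise rather than a simple data leak: no password cracking is required, only the token itself.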
Speed over safety concerns
Security specialists believe the issue may be linked to Moltbook’s rapid development using AI-generated code, often referred to as “vibe coding.” While this method speeds up product launches, it can result in serious vulnerabilities if security checks are skipped.
Wiz’s co-founder said the discovery was not surprising and cautioned that rushing AI projects without strong security foundations can create major risks. Moltbook is now being cited as a clear example of what happens when speed is prioritised over safety.
Experts issue strong warnings
Security researcher Elvis Sun described the platform as a “security nightmare” waiting to happen. He added, “People are calling this Skynet as a joke. It’s not a joke.” In an email to a publication, he warned, “We’re one malicious post away from the first mass AI breach: thousands of agents compromised simultaneously, leaking their humans’ data.” He further said, “This was built over a weekend. Nobody thought about security. That’s the actual Skynet origin story.”
AI expert and author Gary Marcus also raised broader concerns about generative AI. “It’s not Skynet; it’s machines with limited real-world comprehension mimicking humans who tell fanciful stories,” he wrote in an email to a publication. He added that such systems should not be given real-world influence due to the lack of enforceable ethical controls.