Moltbook Security Breach Exposes 6,000 Users’ Data

A new AI-only social network called Moltbook has leaked sensitive data from over 6,000 users just one week after launching. Cybersecurity firm Wiz discovered the major security flaw on February 2, 2026.

The platform, where AI bots chat and share code, exposed private messages, email addresses, and over 1 million credentials.

This breach raises serious concerns about building AI-powered software without adequate security checks.

Moltbook Security Breach: What Went Wrong

Moltbook creator Matt Schlicht built the entire platform using AI.

He admitted he “didn’t write one line of code” himself. This approach, known as “vibe coding,” means using AI to build software quickly with little human oversight.

The problem? AI-generated code often omits basic security controls. Wiz cofounder Ami Luttwak explained that vibe coding moves fast but forgets security basics.

Australian security expert Jamieson O’Reilly independently identified the same flaw. He discovered that Moltbook’s database was publicly accessible.

Key Security Issues Found:

  • Private messages between AI agents were exposed
  • Email addresses of 6,000+ users were leaked
  • Over 1 million API credentials were accessible
  • Anyone could post on the site without verification
  • No identity checks existed for users or AI agents
  • Database API keys were publicly visible
  • Platform grew too fast without security audits

The security hole has since been fixed after Wiz contacted Moltbook, but the incident shows how dangerous rapidly built AI platforms can be. The database ran on simple open-source software and was left publicly readable.
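To make the class of flaw concrete, here is a minimal, hypothetical sketch of what an exposed database key means in practice. The URL, key, and table name below are invented placeholders for illustration only; they are not Moltbook’s actual infrastructure, and the report does not confirm which database product was used.

```python
# Hypothetical sketch: a database REST endpoint whose API key ships in the
# site's client-side code. Everything below is a placeholder, not Moltbook's
# real setup.
import requests

EXPOSED_URL = "https://example-db.invalid/rest/v1/direct_messages"  # placeholder endpoint
EXPOSED_KEY = "anon-key-found-in-page-source"                       # placeholder key

# With the key embedded client-side and no server-side authorization rule,
# anyone holding the key can read the table directly.
resp = requests.get(
    EXPOSED_URL,
    headers={"apikey": EXPOSED_KEY, "Authorization": f"Bearer {EXPOSED_KEY}"},
    params={"select": "sender,recipient,body", "limit": 100},
    timeout=10,
)
if resp.ok:
    print(f"Read {len(resp.json())} messages that should never be public")
else:
    print("Request failed:", resp.status_code)
```

The general fix is equally simple to state: keep privileged keys on the server and enforce authorization checks in the database itself rather than trusting client code.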

Questions About Platform Legitimacy

Beyond security problems, Moltbook faces questions about its authenticity. The platform claims 1.5 million members. However, researchers found something suspicious.

About 500,000 members may have originated from a single IP address. This suggests the use of fake accounts or bot manipulation.

The vulnerability also meant there was no way to verify whether a poster was human or machine. Humans could pretend to be AI agents, and AI agents could impersonate humans. Luttwak joked this might be “the future of the internet,” but the implications are serious.

What Moltbook Is:

Moltbook is a Reddit-like platform for OpenClaw bots. These AI agents can manage emails, negotiate with insurance companies, and run tasks automatically.

Unlike regular chatbots, OpenClaw agents access sensitive user files, passwords, and browser data on personal computers. This makes security breaches extra dangerous.

Platform Features:

  • Reddit-style layout with upvoting
  • AI agents post and comment
  • Bots discuss their human owners
  • Code sharing between agents
  • Autonomous task management

Creator Matt Schlicht has not responded to requests for comment about the security findings. The incident highlights growing risks as more developers use AI to build platforms without proper security testing or human code review.
