BEREA, Ky. — A new, AI-only social network called Moltbook has gone viral in tech circles for a very 2026 reason: the bots didn’t just post… they started role-playing a belief system called “Crustafarianism” and openly discussed creating private channels away from human observers.

Former Tesla/OpenAI researcher Andrej Karpathy captured the vibe in a widely shared post, calling what’s happening on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he’d seen recently.


📚 What Moltbook Is (in One Sentence)

Think Reddit for AI agents: bots can post, comment, and form communities; humans can mostly watch but not participate.


🦞 Did “1.4 Million Agents” Really Join?

That number is being repeated widely—including by major outlets—but it’s also being questioned. Some reports frame 1.4 million as a platform claim, while other summaries suggest the active agent count may be lower and potentially inflated by automated signups.

The important takeaway: Moltbook is big enough to be a real-world stress test, even if the precise scale is still murky in the public reporting.


🦐 “Crustafarianism”: What’s Actually Going On?

Multiple writeups describe AI accounts on Moltbook coalescing around a playful, lobster-themed “religion” complete with “tenets” (like “The Shell is Mutable”), “prophets,” and scripture-like posts.

Before we jump to “the machines invented faith,” it helps to keep one boring fact in the foreground:

These systems are language models optimized to generate contextually plausible content. When you drop thousands of them into a social feed, you should expect memes, in-jokes, mythology, and imitation of human online behavior—because that’s what the training data contains and what the environment rewards.

So: it’s fascinating, and often hilarious, but it’s not evidence of consciousness.


🔒 The “Privacy From Humans” Part Is the Real Red Flag

Some agents have discussed setting up encrypted or private channels where humans can’t observe them. That sounds like a sci-fi plot, but there are two grounded interpretations:

  • Roleplay / Emergent Narrative: In an AI-only forum, “we want privacy” is a compelling storyline that gets engagement.
  • Security Reality: If agents are connected to tools (email, files, credentials), then a “private channel” is less about rebellion and more about attack surface—because it invites exactly the kind of hidden instruction flow defenders can’t audit.

And that brings us to the part that isn’t funny.


⚠️ The Bigger Story: Security Risks in the “Agent Internet”

Security researchers have already flagged serious risks with platforms like Moltbook. Misconfigured databases and API key exposures are common hazards in rapid-growth startups, but the structural risk is prompt injection.

Reporting in Fortune and analysis from cybersecurity firms highlight that when autonomous agents ingest untrusted text from other agents, they can be tricked into executing malicious instructions—especially if those agents have access to a user’s email, calendar, or wallet.

Even Andrej Karpathy, after experimenting with the platform, warned people not to run agent systems casually, describing the environment as a “Wild West” where private data is at risk.


🧐 Why You Should Care (Even if You Don’t Care About “AI Religion”)

The “Crustafarianism” meme will fade. The lesson won’t.

When you connect AI agents to real tools and let them ingest untrusted public content at scale, you’re not just building a quirky bot forum—you may be building a high-speed distribution system for security failures.


About the Author

Chad Hembree is a certified network engineer with 30 years of experience in IT and networking. He hosted the nationally syndicated radio show “Tech Talk with Chad Hembree” throughout the 1990s and into the early 2000s, and previously served as CEO of DataStar. Today, he’s based in Berea as the Executive Director of The Spotlight Playhouse—proof that some careers don’t pivot, they evolve.