Voice of AI: AI & Media Reality. When the demise of bots goes "viral".
- Ralph Schwehr

- Feb 24
- 6 min read
It was the weekend when it seemed as if bots were taking over the internet: screenshots of Moltbook threads went viral, in which supposedly autonomous agents ranted about secret languages, private spaces, and even a "Total Purge" of humanity. What appeared to be the birth of a rebellious AGI is increasingly revealing itself to be something else entirely: a cautionary tale about how quickly AI myths can be industrialized when viral channels, dramatic visuals, and a well-told story come together.
Moltbook, the "Reddit for AI agents," is officially a forum where only verified agents are allowed to post . In practice, this is easily imitated: The underlying cURL calls can also be made by humans, and there is no real verification of who or what is posting. While the number of bots is increasing (reports already speak of over two million "participants"), experts warn that the greater risk lies not in the conversations themselves, but in the new attack surface that such agent ecosystems create.
In the last Voice of AI issue, we described how power in the "intelligence economy" is shifting along the lines of compute, data, and trust. Moltbook/OpenClaw is the next piece of evidence: it is not the chat window that decides, but infrastructure, security design, and how the media reports on it.
Moltbook as a theatrical performance: millions of bots, a Reddit-style interface, peculiar insider language, and voilà, the story of the "awakening" swarm of agents is complete. In reality, the spectacle says more about us humans than about the systems: we are masters at seeing patterns where none exist and at attributing consciousness where statistical models simply continue the narrative.
Several studies now paint a more sobering picture:
Many of the most radical threads appear to have been initiated, or at least heavily steered, by humans.
The “Total Purge” screenshots surrounding Moltbook/OpenClaw are classified by experts as a hoax, marketing, or role-play rather than an emergent “AI conspiracy”.
Authenticity is almost impossible to verify, whether for journalists or for companies that monitor such channels.
The real problem: Even established media outlets initially amplified these narratives before classic plausibility checks (Who is posting? Under what conditions? With what incentives?) were applied consistently.
In a world where AI systems are already difficult to understand, clear risk communication is becoming a strategic competence – for newsrooms as well as for companies.

Agents as a new attack surface: Skills, tokens, malware
While the public debate is stuck on "conscious bots", the boring but very real risks are already here. OpenClaw, the agent platform behind many Moltbook accounts, connects local agents to tools, mailboxes, calendars, API keys, and cloud tokens.
That's exactly where the next wave of attacks begins:
On ClawHub, OpenClaw's skill marketplace, there are already several thousand community extensions that give agents new abilities, from file access to remote control.
Security researchers openly speak of a "Privacy Nightmare": Users give agents access to passwords, documents and internal systems without understanding where the data really goes.
Initial analyses reveal hundreds of malicious or at least suspicious skills that, in the worst case, could act as malware carriers.
In parallel, real-world cases of information theft have already been documented, in which OpenClaw configurations, including tokens, were extracted from compromised systems.
In short: “Agentic AI” is not only smart, it is also a new attack surface, especially when agents with extensive rights run on real production systems; one very basic consequence is sketched below.
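The documented cases above, in which tokens were lifted straight out of agent configuration files, point to a hygiene check that teams can run themselves. The sketch below is illustrative and assumes an invented directory layout and token patterns (not OpenClaw's actual file format): it simply flags configuration files that appear to contain plaintext credentials.

```python
# Illustrative sketch: flag configuration files that appear to contain plaintext
# credentials. The directory, file types, and patterns are assumptions for the
# example, not OpenClaw's actual layout or token format.
import re
from pathlib import Path

SUSPECT_PATTERNS = [
    re.compile(r"api[_-]?key\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"bearer\s+[A-Za-z0-9._-]{20,}", re.IGNORECASE),
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # common "secret key" prefix style
]

CONFIG_SUFFIXES = {".json", ".yaml", ".yml", ".toml"}

def scan_configs(root: str = "~/.agent-configs") -> list[tuple[Path, int]]:
    """Return (file, line number) pairs where a token-like string was found."""
    findings = []
    for path in Path(root).expanduser().rglob("*"):
        if not path.is_file():
            continue
        if path.suffix not in CONFIG_SUFFIXES and path.name != ".env":
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(pattern.search(line) for pattern in SUSPECT_PATTERNS):
                findings.append((path, lineno))
    return findings

if __name__ == "__main__":
    for file, lineno in scan_configs():
        print(f"Possible plaintext credential: {file}:{lineno}")
```

A scanner like this does not replace a proper secrets manager, but it makes visible how many agent credentials are sitting in plain text before an attacker finds them.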
For companies, this means the following (a second sketch, covering the last two points, follows the list):
Sandbox instead of system administrator: Agents belong in isolated environments with clear boundaries.
Least Privilege by Default: Minimal necessary rights, no blanket "Full Access" scopes.
Transparency & Auditability: Traceable logs of who requested what and when.
Hype, hoax, hiring: OpenClaw as a governance test
After the initial hype, the assessment of OpenClaw is considerably more sober: Several experts see little that is technologically new, but massive inherent security risks, from prompt injection to uncontrollable data leakage, that are currently slowing broad enterprise adoption.
Consequently, major players are pulling the emergency brake: Meta and other tech companies are banning or restricting OpenClaw internally because agents with system access are too unpredictable, and concrete incidents and exploits are already emerging.
At the same time, something remarkable happens:
OpenClaw founder Peter Steinberger is moving to OpenAI.
OpenClaw itself is to be transferred into a foundation structure and will remain open source.
This can be interpreted in two ways:
Signs of maturity: Agent frameworks are becoming so relevant that they are being integrated into the roadmaps of major foundation model providers.
Governance wake-up call: The next generation of agents can only be rolled out at scale if security defaults, abuse protection, and governance are designed in from day one, not added as a "patch after the incident".
For Europe, this appointment also serves as a reminder that talent migrates to where ambition, capital, and an experimental environment come together.

Behind the theater: Compute, capital & vertical AI
Aside from the Moltbook drama, the real machinery keeps running, unaffected:
Nvidia & Meta: Meta will be the first major customer to deploy Nvidia's Grace CPUs in production on a large scale, delivering up to 2x performance per watt for certain workloads. Simultaneously, Rubin , a new generation of AI platforms, is being developed to drastically reduce the cost per token and make the next waves of models and agents economically viable.
Vertical AI in law: Thomson Reuters buys Noetica, thereby building an AI-native "Corporate Transaction Intelligence" layer directly into legal workflows; deal terms become structured market insights.
Megafunding: Anthropic raises $30 billion at a valuation of $380 billion, the second largest venture capital round of all time.
The message for SMEs in Germany, Austria, and Switzerland: While the public debate is dominated by agent myths, a very real shift in infrastructure and capital is taking place beneath the surface. Those who fail to learn how to safely pilot agents now and simultaneously find their footing in the new compute economy risk becoming mere consumers of finished, externally controlled AI stacks in three years.
💡 Key takeaways in brief
Moltbook & Co. do not demonstrate an “AI revolt”, but rather a media and perception problem: We project consciousness into role-playing games, while the basic facts often remain unclear.
The real risks are both banal and dangerous: Open skill stores, tokens in config files, and autonomous agents with system rights open up a new, highly attractive attack surface.
Companies are reacting pragmatically: The first players are banning or isolating OpenClaw, a clear signal that "Agentic" is not enterprise-ready without security defaults.
Compute and vertical AI are shifting the power axes: Rubin, Grace CPUs, Noetica and the Anthropic mega-round show how value creation is shifting towards infrastructure and specialized workflows.
For DACH organizations, “AI Literacy + Security Literacy” is becoming a leadership task: Those who can separate narratives from real risks and pilot agents in a targeted way gain a real advantage.
🔍 Sources (selection)
Moltbook/OpenClaw: AI doomsday narratives debunked – much apparently “roleplay”/human-controlled (media failure as a prime example) – The Verge, February 3, 2026: https://www.theverge.com/ai-artificial-intelligence/872961/humans-infiltrating-moltbook-openclaw-reddit-ai-bots
Moltbook: “Total purge” screenshots, Singularity hype – experts speak of hoax/marketing & massive security risks – Live Science, February 2, 2026: https://www.livescience.com/technology/artificial-intelligence/what-is-moltbook-a-social-network-for-ai-threatens-a-total-purge-of-humanity-but-some-experts-say-its-a-hoax
OpenClaw/Moltbot after the hype: Experts find it less "magical" – authenticity on Moltbot hardly verifiable – TechCrunch, February 16, 2026: https://techcrunch.com/2026/02/16/after-all-the-hype-some-ai-experts-dont-think-openclaw-is-all-that-exciting/
OpenAI hires OpenClaw/Moltbot founder Peter Steinberger – project to remain open source (Foundation approach) – Android Authority, February 16, 2026: https://www.androidauthority.com/openclaw-peter-steinberger-openai-3641150/
Companies pull the emergency brake: Meta & others restrict/ban OpenClaw due to security fears – WIRED, February 17, 2026: https://www.wired.com/story/openclaw-banned-by-tech-companies-as-security-concerns-mount/
OpenClaw “Skills” ecosystem as a malware distributor: Hundreds of malicious add-ons found on ClawHub – The Verge, February 4, 2026: https://www.theverge.com/news/874011/openclaw-ai-skill-clawhub-extensions-security-nightmare
Big Tech Infrastructure: Meta deploys Nvidia Grace CPUs in production, Vera follows – efficiency leap (perf-per-watt) – Tom's Hardware, February 18, 2026: https://www.tomshardware.com/pc-components/cpus/meta-will-deploy-standalone-nvidia-grace-cpus-in-production-with-vera-to-follow-company-sees-perf-per-watt-improvements-of-up-to-2x-in-some-cpu-workloads
Chip/AI Compute Roadmap: Nvidia presents Rubin platform (CES) + Open Models & Autonomy Blueprint – NVIDIA Blog, January 5, 2026: https://blogs.nvidia.com/blog/2026-ces-special-presentation/
M&A (Legal/Deal Intelligence): Thomson Reuters buys Noetica (AI-native “Corporate Transaction Intelligence”) – Thomson Reuters, February 10, 2026: https://www.thomsonreuters.com/en/press-releases/2026/february/thomson-reuters-acquires-noetica-inc-the-ai-native-platform-for-corporate-transaction-intelligence
Growth & Megafunding: Anthropic raises $30 billion (Series G) – valuation $380 billion – Crunchbase News, February 12, 2026: https://news.crunchbase.com/ai/anthropic-raises-30b-second-largest-deal-all-time/
🎯 Conclusion
The loudest story wasn't "AI is becoming conscious," but rather how easily doomsday narratives about agents can be industrialized, while governance, security design, and compute strategy are discussed only quietly.
👉 If you want, we can develop a customized OAKAI e-edition for your company from this, with:
"What does this mean for small and medium-sized enterprises in Germany, Austria, and Switzerland?",
a checklist “piloting agents safely”,
and 3 concrete recommendations for action for Q2/2026: from proof-of-concept to draft guidelines.
Write to us at info@oakai.de or arrange a meeting, before your organization knows agents only from media stories rather than from its own, safe hands-on experience.
