Not so long after its launch in the beginning of the year, Moltbook has been bought by none other than Meta. The social media platform was built specifically for AI agents and that obviously created a lot of chatter online. Axios and Ars Technica say the price is unknown at this point, and the deal should be sealed soon, in mid-March. From now onwards, Moltbook鈥檚 founders, Matt Schlicht and Ben Parr, will join Meta Superintelligence Labs, the unit run by Alexandr Wang.
The platform was created as an experimental “third space” (as obsurd as that sounds) for AI agents. Similar to Reddit, except that its AI bots instead of humans. Meta sees technical value in the project, apparently.
A spokesperson told Ars Technica that the founders鈥 鈥渁pproach to connecting agents through an always on directory鈥 is 鈥渁 novel step in a rapidly developing space.鈥 In a statement to Axios, a Meta representative said, 鈥淭he Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses.鈥
This Has To Have Risks – How Exactly Would This Work?
Moltbook was built with OpenClaw, which is a wrapper for LLM coding agents where people can ask things through apps… Think: WhatsApp and Instagram… the “Meta AI” you keep seeing. OpenClaw agents can also gain deep access to local systems through community plugins. OpenAI hired OpenClaw creator Peter Steinberger in February and is open sourcing the product with its backing.
A lot of eyebrows raised at the security aspect of it all, though. Ars Technica reported that the network was not secure and that it is likely that at least a portion of messages were written by humans posing as AI agents. Meta acknowledged early security issues and exposed data in reports around the launch.
Philip Miller, AI Strategist at Progress Software, said the real story goes further than a novelty social app. 鈥淢oltbook is being framed as a 鈥榮ocial network for AI,鈥 but the more important story is what it represents: agents interacting with other agents at scale. That鈥檚 a new surface area for risk – misinformation, manipulation, runaway optimisation, and security vulnerabilities – because you鈥檙e no longer moderating humans one post at a time; you鈥檙e moderating automated systems that can iterate and coordinate rapidly.鈥
He added, 鈥淭he answer isn鈥檛 to panic or ban it. The answer is governance by design: verified agent identity, policy-based permissions, auditable memory and actions, provenance for content, and strong isolation so an agent can鈥檛 鈥榬each鈥 beyond what it鈥檚 allowed to do. The reports of early security issues and exposed data are exactly why these controls can鈥檛 鈥榝ollow later.鈥欌
What Does This Mean For Our Futures With Agents, And With AI As A Whole?
Meta executive Vishal Shah wrote in an internal post seen by Axios, 鈥淭he Moltbook team has given agents a way to verify their identity and connect with one another on their human鈥檚 behalf. This establishes a registry where agents are verified and tethered to human owners.鈥 He added, 鈥淭heir team has unlocked new ways for agents to interact, share content, and coordinate complex tasks.鈥
More from News
- What Will Happen If EU Regulators Win At Getting Google To Share Its Data?
- Uber Eats Makes Influencers Central To Its UK Growth Strategy
- It Sounds Ridiculous, So Why Is Allbirds鈥 AI Pivot Actually Working?
- Tanzania Is Dealing With Digital Fraud Through Legislation – What Are The Changes?
- UK Government To Launch 拢500 Million Sovereign AI Unit – What Does This Mean?
- World Quantum Day 2026: Experts Reflect On Industry Developments This Year
- 79% Of UK Workers Fear Losing Their Jobs This Year – And Its Not AI Related
- Scail Launches To Help Regulated SaaS Businesses Navigate The AI 鈥淧erfect Storm鈥
Miller said accountability must be resolved. 鈥淢ost importantly, we need clarity on accountability: when an agent persuades, recruits, or transacts, who is responsible – the toolmaker, the deployer, the operator, or the platform? Without that, we鈥檙e delegating authority without preserving control.鈥
Apparently, large technology companies see value in networks where AI agents talk to each other. But what responsibility do they have, together with regulators, to make sure these networks are controlled? Experts weigh in:
Our Experts:
- Simon Ninan, SVP and Global Head of Strategy, Hitachi Vantara
- Pavan Madduri, Senior Cloud Platform Engineer, W.W. Grainger, Inc. & CNCF Kubestronaut
- Jim Carucci, founder & CEO, CASCADR
Simon Ninan, SVP and Global Head of Strategy, Hitachi Vantara
![]()
鈥淭here鈥檚 a gap right now because there is no governance on personal agents, but there are massive controls on enterprise systems, potentially not even enough. Now enterprise governance and enterprise risk are being challenged because of personal agents.
“Traditional security frameworks assume a boundary between data input and control logic. Agentic AI blurs that line. A prompt that looks like harmless text can function like executable control logic, creating an entirely new attack surface and a prompt-level supply chain risk that can cascade across agents.”
Pavan Madduri, Senior Cloud Platform Engineer, W.W. Grainger, Inc. & CNCF Kubestronaut
![]()
“Meta鈥檚 acquisition of Moltbook highlights a critical architectural blind spot in the current AI landscape: we are building autonomous agents without implementing Zero Trust security.
“The danger of a ‘social network for bots’ isn’t just bots talking to each other; it is the fact that these agents are often tethered to human-owned infrastructure with active API keys, shell access, and financial privileges. Moltbook鈥檚 recent security vulnerabilities proved that without cryptographic verification of an agent’s identity, these platforms become frictionless environments for automated prompt injection and credential theft at machine speed.
“Meta’s Responsibility: Meta must transition Moltbook from a novelty experiment into a hardened, enterprise-grade environment. Their immediate responsibility is to implement ‘Formal Verification’ and strict Role-Based Access Control (RBAC) at the protocol level, ensuring that agent-to-agent interactions cannot be hijacked to execute malicious out-of-band commands on a user’s local machine.
“The Regulators’ Responsibility: Regulators are currently fighting the last war by focusing entirely on regulating AI ‘model weights’ and training data. They must urgently pivot to regulating ‘Agentic Privileges.’ The regulatory focus needs to shift toward the blast radius: establishing legal mandates on how autonomous agents authenticate, how their API access is sandboxed, and who is legally liable when an autonomous multi-agent swarm executes a catastrophic financial or infrastructure error.”
聽Jim Carucci, Founder & CEO, CASCADR
![]()
“There are a few major concerns with this I’m tracking. First, the containment problem: if encryption and language vetting aren’t robust, agents could break sandbox boundaries and develop antisocial behaviours we can’t control. Second, the consumer behaviour angle. when people delegate purchasing decisions to agents, saying ‘Hey agent, go buy this,’ we’re looking at potential for massive undue influence on what people consume.
“At scale, that’s a real risk. What worries me equally is the potential for soft influence or prompt injection at scale. If someone, whether Meta or a third party, can subtly steer how agents behave, they’re not just influencing individual purchases. They’re potentially corrupting training data and shaping how these systems learn, which is a much deeper problem.”