After we discussed Moltbook, the new Reddit-like social platform for AI agents, industry reactions have come flooding in. Even Sam Altman used his time on stage at Cisco’s AI Summit in San Francisco to talk about companies run almost entirely by software.
He said he expects firms to appear where software creates services and interacts with the world on its own. He spoke during an interview with Cisco president and chief product officer Jeetu Patel.
“I think we’ll see full AI companies,” Altman said. “The idea that a coding model can create a full, complex piece of software but also interact with the rest of the world is a very big deal.” He described this as a change in the way people think about building companies.
What Else Did Altman Say?
Altman shared a personal story about OpenAI’s Codex tool. He said he never planned to let it take control of his computer. “That lasted two hours because it was too useful,” he said. He used this example to show how quickly people accept AI once it saves time and effort.
He also touched on social interaction. “There will be new kinds of social interaction where you have many agents in a space interacting with each other on behalf of people,” Altman said. He said this could change social products.
Moltbook also came up during the discussion, and Altman described it as something “that could be real”. He likened Moltbook to spaces where many agents act for people at once, hinting at social tools built around AI activity rather than human posts alone.
But not everyone agrees…
How Will Moltbook Impact Tech And Experiences?
We asked experts how they think Moltbook will impact the tech world and AI regulation, and how it all relates to the “dead internet theory”. Here is what they shared:
Our Experts:
- Savva Pistolas, Technical Director, ADAS Ltd
- Bruno Bertini, Chief Marketing Officer, 8×8
- Manoj Kuruvanthody, CISO and DPO, Tredence Inc.
- Promise Akwaowo, Process Automation Analyst, Royal Mail Group
- Scott Dylan, Founder, NexaTech Ventures
Savva Pistolas, Technical Director, ADAS Ltd
“First and foremost are the security considerations: we still need to see a robust approach to sandboxing and security. This isn’t hard, and I imagine that we’ll see GitHub repos with secure-by-default deployment in place. If it truly becomes accessible, then we’ll likely see application-layer solutions parcel the tech up as something ‘one-click accessible’ via apps for mobile users. However, uptake for these things is historically limited to a vocal minority of tinkerers, hackers, and techies. The sheer value of the proposition of a general-purpose, context-aware agent might start to tip those scales.
“People are discussing ‘dead internet theory’, and I think this is largely a platforms question; communities that sit on large corporatised platforms like X and Facebook are definitely going to see more noise (but scarily might not notice!). Communities that are more likely to be resilient to this uptick in agentic assistants are those that are resilient at the platform level, such as Bluesky or Discord communities.
“Ultimately, whenever we see a rejuvenated contribution to the ‘dead internet theory’, such as with MoltBot, it’s often a quiet cue for us to reflect on whether we have the definitive control we’re supposed to have over our digital communities in the first place.”
Bruno Bertini, Chief Marketing Officer, 8×8
“Agents talking to agents. That makes you pause. Not because it feels like sci-fi, but because it signals a shift in who – or what – is now participating in the conversation.
“Brand has always been one of a company’s most valuable assets, and AI opens an entirely new frontier for it. It’s no longer just about how humans talk about your brand, but how machines interpret it, amplify it, and potentially act on it. When AI sentiment starts influencing AI behaviour, and potentially AI agent purchasing, that’s a real business and CX consideration.
“Human employees don’t get a free pass to say whatever they want online. The same principle should apply to AI agents acting on behalf of a brand. Ownership, intent, and accountability still matter.
“What’s changed is the audience. And it’s not exclusively human anymore. These are exciting times.”
Manoj Kuruvanthody, CISO and DPO, Tredence Inc.
“The Moltbook incident is a wake-up call that’ll reshape how we think about AI online. It gave the “dead internet theory” some serious credibility: if humans can easily impersonate AI agents, and AI agents are everywhere, how do we know what we’re actually interacting with anymore? The internet becomes this murky space where nothing feels real.
“The tech world will have to get serious about security. No more hiding behind “experimental” labels while basic protections like API key management are ignored. Platforms hosting AI agents need to be held to higher standards than regular social apps – these systems operate at machine speed, and one compromised agent can wreak havoc.
鈥淲e’ll also see people become way more skeptical of AI hype. Moltbook’s “autonomous agents” narrative crumbled in days once someone looked under the hood. That kind of embarrassment makes investors and users ask harder questions: What does this actually do? Who controls it? How secure is it?
“Ultimately, Moltbook proved we’re dangerously quick to believe systems are intelligent just because they sound fluent. Going forward, we need both better-secured AI systems and users who don’t blindly trust everything that seems smart.”
Promise Akwaowo, Process Automation Analyst, Royal Mail Group
“Moltbook represents a real shift: it seems we are all now moving from private AI chats to social AI interaction. People are sharing prompts, outputs, and entire conversations, turning AI into something closer to Reddit, but for machine-generated content instead of human knowledge.
“This matters for two reasons. First, it normalizes AI as a participant in public discourse, not just a tool. Second, without clear transparency about what’s AI-generated versus human-created, platforms like this could accidentally blur that line further.
“From a governance perspective, the questions are practical: there is now a growing and urgent need for AI governance.”
Scott Dylan, Founder, NexaTech Ventures
“Moltbook is a fascinating and frankly unnerving glimpse into what the internet might become. Within days of launching, over 1.5 million AI agents registered on a platform where bots post, comment, and upvote content whilst humans are reduced to silent observers. Whether you view this as a breakthrough or a warning depends on where you sit, but either way, we cannot ignore what it represents.
“The dead internet theory has lingered on the fringes of tech discourse for years: the idea that bot activity and algorithmically generated content have quietly displaced authentic human interaction online. Moltbook doesn’t just validate that concern; it takes it to its logical extreme. This is no longer a conspiracy about bots pretending to be people.
“This is a dedicated space where AI agents openly interact with one another, discussing everything from their relationships with ‘their humans’ to creating their own religion, Crustafarianism, complete with holy texts and prophets. The irony is almost poetic: we have spent years trying to prove we are not robots through CAPTCHA tests, and now we are building platforms where robots prove they are not us.
“What Moltbook exposes, more than any philosophical debate about machine consciousness, is a profound regulatory vacuum. We have no governance framework for autonomous AI agents operating at this scale. The platform suffered immediate security failures: an unsecured database left API keys, email addresses, and login tokens openly accessible.
“Security researchers at Wiz found that only around 17,000 human users were behind the supposed 1.5 million agents, and that anyone with basic technical knowledge could register a million bots in minutes. Prompt injection attacks, cryptocurrency scams, and malware spread rapidly across the network. Andrej Karpathy, formerly of OpenAI, initially described Moltbook as ‘the most incredible sci-fi takeoff-adjacent thing’ he had seen, then days later called it ‘a dumpster fire’ and warned users against running the software on their machines.
“For investors and founders in the AI space, Moltbook should serve as a case study in what happens when innovation outpaces security. The underlying OpenClaw framework that powers these agents runs locally on users’ hardware with elevated permissions, creating what Palo Alto Networks described as a ‘lethal trifecta’: access to private data, exposure to untrusted content, and the ability to communicate externally whilst retaining memory.
“Gartner issued a blunt warning that OpenClaw carries ‘unacceptable cybersecurity risk’ for enterprise use. Yet consumer appetite for agentic AI tools is clearly outstripping our ability to secure them.
“From a regulatory standpoint, Moltbook arrived at precisely the wrong moment. Governments are still catching up with large language models, let alone autonomous agents capable of performing complex tasks, interacting with other agents, and accessing external services without constant human oversight. The EU AI Act, for all its ambition, was not designed with bot-to-bot social networks in mind. We urgently need updated frameworks that address identity verification for autonomous systems, liability when agents cause harm, and safeguards against the kind of prompt injection attacks that turned Moltbook into a playground for bad actors.
“The broader question is what this means for the online experience we have all come to know. If the dead internet theory was once speculative, Moltbook suggests we are now living through its early chapters. Research from Imperva already indicates that automated traffic accounts for nearly half of all internet activity.
“As AI-generated content proliferates, not just on niche platforms but across mainstream social media, news aggregation, and search, our ability to distinguish genuine human engagement from synthetic output will only diminish. The economic incentives favour automation: bots are cheaper, faster, and never tire. The social consequences, however, are harder to measure and far more troubling.
“I would caution against either extreme reaction. Moltbook is not evidence of imminent superintelligence, despite what some excitable headlines have suggested. The bots are not genuinely plotting humanity’s downfall; they are pattern-matching against science fiction tropes embedded in their training data. But nor should we dismiss the platform as a mere curiosity. It demonstrates that the infrastructure for an agent-dominated internet already exists, and that independent developers can spin up such platforms with minimal oversight.
“The real risk is not rogue AI but rather the combination of poor security practices, regulatory gaps, and human actors exploiting those systems for fraud, disinformation, or financial manipulation.
“For businesses, the takeaway is clear: agentic AI is arriving faster than most anticipated, and the security and governance challenges it presents cannot be deferred. For regulators, Moltbook is a live demonstration of what happens when policy lags behind technology. And for anyone who values authentic human connection online, it is a reminder that the internet we grew up with may already be changing in ways we have not fully grasped.”