Authored by Charles Dennis
Peter Steinberger, the creator of OpenClaw, opened up this week about his decision to join OpenAI, a move he framed as a rebuke of European AI regulation.
In a post on X, he explained: "… people shout REGULATION and RESPONSIBILITY … have to fight with issues like investment protection laws, employee participation, and crippling labor regulations".
This exodus of a major thinker in the AI space highlights cracks in Europe's regulatory approach and signals a growing challenge for startups. The continent's culture of caution is evident not only in policy but also in public debate, where AI is often framed primarily as a risk to control rather than an opportunity to experiment.
For startups, this mindset can have real consequences. Prioritising caution over experimentation threatens their ability to innovate and scale.
In an industry defined by speed, hesitation is increasingly becoming a costly hurdle.
European Hesitancy and Public Mistrust
Europe's relationship with AI is often discussed in terms of regulation. However, the issue runs deeper, rooted in public skepticism about emerging technology.
YouGov polling from October 2025 shows that Europeans prioritise oversight. Majorities ranging from 55% to 73% believe government regulation is important even if it slows innovation, while only 11% to 20% argue that development should come first, regardless of risk.
Eurostat reported that 32.7% of EU citizens used AI in 2025, but adoption varies widely: almost 50% in Denmark and fewer than 20% in Romania. By contrast, AP reports up to 60% of US citizens using AI, illustrating greater openness to the technology.
Concerns over jobs, deepfakes, misinformation, and election interference have fostered caution around the technology, which may help explain Europe's market hesitancy towards AI.
The Impact on European Startups
Public caution affects how startups are funded, built, and scaled. A preference for regulation over rapid development can slow investment, tighten scrutiny, and reduce appetite for high-risk projects.
The funding gap partly reflects these cultural factors. Investors in a risk-focused environment may favour incremental applications over ambitious innovation. This creates real challenges for startups seeking cutting-edge funding.
Between 2018 and 2023, the European Parliament reported that more than $120 billion was invested into AI companies in the US, compared with $32.5 billion across the EU. While market size and infrastructure contribute to these figures, Europe's culture of caution also shapes investor confidence and startup ambition.
Slower public adoption limits early customer demand, making it harder to test products at scale. With only a third of EU citizens using AI (compared with higher rates in the US), European startups face a smaller home market. This reduces their advantage in global competition, further entrenching Europe's lagging position in the AI race.
Talent is another serious concern. Skilled engineers and founders often prefer ecosystems that encourage experimentation and capital availability. Risk-averse perceptions may push talent and startups to relocate to markets like the US.
Peter Steinberger鈥檚 move certainly made headlines, but he鈥檚 only the most recent example of industry talent moving to a US company.
Could Caution be a Competitive Advantage?
Europe鈥檚 cautious approach may, however, have strategic benefits. By prioritising risk mitigation, policymakers aim to shape AI responsibly, creating potential regulatory leverage.
Startups built to meet strict standards may face fewer compliance costs later. As concerns over bias, misinformation, and job displacement rise, demand grows for reliable, auditable AI. This is particularly relevant among governments, banks, and corporations.
And there's already precedent for this. Europe's data protection rules, most notably the GDPR, have shaped global corporate behaviour. If AI governance follows suit, startups that have prioritised compliance from the outset may be better prepared as other markets tighten rules.
Importantly, this advantage depends on execution.
Without sufficient capital, talent retention, and adoption, regulation alone cannot create market leadership. Caution is only an asset if paired with real support for innovation.
Europe's instinct to regulate before scaling reflects public concern about AI's risks. But in a market defined by speed, that caution can slow investment, limit growth, and push talent elsewhere.
Trust may yet become a competitive advantage. The question is whether Europe can manage AI's risks while giving startups the freedom to compete in the global market.