Meta has set aside nearly $15 billion for a 49% stake in data specialist Scale AI, The Information reported on Tuesday. The deal also brings Scale AI’s founder Alexandr Wang to Menlo Park, where he will run a new research unit.
Wang built Scale into a platform that feeds data to OpenAI and the US military.
Meta chief Mark Zuckerberg plans to seat the lab beside the company’s VR and social media teams, and people close to the deal say he views superintelligence as the next frontier.
How Might Scale AI Change Meta’s Data Power?
Scale AI built the Scale Data Engine, a tool that gathers and labels information for machine learning teams at OpenAI, Nuro and Harvard University.
Its engine can customise and test specialised agents for defence clients such as the US Army. The same tooling also labels edge-case images such as night-time traffic scenes, giving Meta richer material for video models.
Meta already trains large language models at record speed; access to Scale AI’s annotated data streams could accelerate that work even further.
Bringing Wang’s platform in-house also spares Meta from relying on outside vendors during the current rush for data pipelines.
What Is “Superintelligence”?
IBM defines superintelligence as software that can out-think any human. Its guide on the subject adds that such a system would carry “cutting-edge cognitive functions” past every human limit.
Today’s relatively narrow AI needs people to teach it each new skill, whereas a superintelligent system would teach itself across fields. IBM lists language models, multisensory input and neuromorphic chips as milestones on that path, and the Scale AI deal brings Meta closer to several of them.
Some researchers doubt it is possible; Meta counters that rising compute and data make the attempt worthwhile. The lab will start with language work, then add vision and audio.
What Are The Risks For Investors If Superintelligence Never Arrives?
I asked a few experts what they see as the risks of investing in superintelligence. Here’s what they said…
Toju Duke, Founder and CEO, Bedrock AI
“Artificial superintelligence (ASI) as a technology is still highly speculative and is yet to be proven. It’s predicted to arrive after Artificial General Intelligence (AGI), which hasn’t arrived yet; predictions for AGI’s arrival have ranged from a couple of years to decades.
“While the focus of superintelligence is the ability for an AI to exceed human cognitive abilities across all domains, it fails to address human emotional abilities such as emotional intelligence or self-awareness. As it stands, there’s still no universally agreed-upon definition of superintelligence, and despite the impressive capabilities of LLMs, they’re still far from achieving the capabilities of AGI and still struggle with reasoning, planning and true abstraction (the ability to manage complexity by reducing a problem to simpler, manageable parts).
“There are also several critical issues to consider when thinking of superintelligent systems. The emergent risks and unpredictability of current AI will prove much more difficult to address, including the risk of the systems overriding human controls, or falling into the hands of bad actors, posing a real threat to human existence and national security.
“The alignment problem, ensuring that an ASI’s goals and priorities match human values, remains unsolved, and it is still debated whether such a system would achieve a real form of superintelligence or merely higher levels of automation. There are also concerns about the computational requirements of ASI, whose true processing power might exceed current capabilities, even when combined with advanced emerging technologies such as quantum computing.
“While there is ongoing research and safety effort on ASI, such as value alignment, reward engineering, and continuous monitoring of these systems, mitigations are yet to be proven. Heavy investment in a technology that’s still highly speculative, unpredictable, and probably unachievable is not advised.”
Cahyo Subroto, Founder, MrScraper
“If superintelligence doesn’t arrive soon, or doesn’t arrive at all, I think the risk to investors isn’t just the loss of a moonshot but also the cascading effect on the entire capital stack built around that promise.
“Let me explain what I mean.
“Many startups today aren’t just betting on AI progress, they’re pricing in future breakthroughs as if they’re guaranteed. If superintelligence stalls, valuations tied to that horizon will be the first to fall. But it won’t stop there. Those teams built to chase that vision may become over-resourced and under-leveraged.
“The product roadmaps may be misaligned with what’s actually feasible. And the timelines for monetisation may be pushed back so far that early investors are forced to exit at a loss or face a liquidity drought.
“This kind of risk compounds quietly, because it’s not about a single failed product but portfolios shaped by an assumption that the technology curve will bend fast enough to justify the burn. If that curve flattens, a lot of high-conviction bets will turn into long hauls with no clear exit. And that’s where investors get stuck: not because they were wrong about the potential, but because they misjudged the pace.”
David Nicholson, Chief Technology Advisor, The Futurum Group
How reliable is investing in this tech?
“Picking a winner is a gamble; creating a basket of AI companies as an investment is a safer bet. Artificial General Intelligence, or Super Intelligence, seems to be the answer to ‘How can I make money or save money TODAY with all of these magical things that have come out this year?’ That answer? ‘Just wait for Super Intelligence!’
What risks might investors face should it not arrive soon enough?
“This is a question of timing. The narrative that is being spun is that Meta and Zuck are hunkering down in war rooms with the best human minds that unlimited money can buy. The details are exciting. Rumors of 9-figure pay packages for top engineers. People living in Zuckerberg’s homes. It is reminiscent of Elon Musk famously sleeping at the Tesla factory.
“The question is how long an investor will continue to buy into “the dream”. Tesla shareholders seem to believe in Tesla’s robotics and AI vision. They believe. Will Zuckerberg be able to sustain irrational beliefs without breakthroughs in line with the massive spending being reported? We will know within a year. I would expect a roller coaster ride.”