The Trump administration's new plan, Winning the AI Race, sets out how the US government wants to lead in AI. It lays out more than 90 federal actions, covering everything from exporting AI systems overseas and building more data centres to stripping back federal rules that get in the way of development.
One of the most talked-about ideas is the export of full-stack AI packages. These bundles include everything from chips and code to apps and standards. The White House wants 'American-made' systems to be used in other countries, especially those the US calls allies.
Inside the country, the plan supports faster approvals for building data centres and chip plants. It also calls for a bigger workforce in trades that are usually overlooked, such as electricians and heating technicians. These workers are needed to keep new facilities up and running.
Another section deals with government contracts. From now on, federal departments must buy from AI companies that keep their systems "objective" and free from what officials describe as "top-down ideological bias." The White House claims this is about protecting free speech in frontier models.
What Has The UK Decided To Do With Its Plan?
The UK's AI plan, published as the AI Opportunities Action Plan, takes a quieter route. The focus is more on research, long-term investment and public projects. It starts with computing power: the government wants to expand its national AI computing capacity twentyfold by 2030.
This includes building new supercomputers in Cambridge and Bristol. From early 2025, these will be available for researchers and small businesses. The government has also extended the life of its current top computing system at Edinburgh University until late 2026.
A project called AI Growth Zones is also on the table. These are physical areas where AI-related buildings, like data centres, can be set up more easily. The first one is planned for Culham, at the UK Atomic Energy Authority. It will begin with a 100MW data centre, which could scale up over time. The site will be run through a public-private setup.
Instead of cutting rules, the UK wants to influence how AI fits within its legal system. It plans to announce rules on copyright and safety that support both researchers and the creative sector. Energy use is also part of the conversation. A new AI Energy Council will look into how to power AI projects in cleaner ways, such as using small nuclear reactors or renewables.
How Do Experts Think The Two Countries Compare?
Arshad Khalid, Technology Advisor at No Strings Public Relations, said, “The UK and US AI plans reflect very different approaches to regulation and innovation. The UK is focusing heavily on protecting users, especially minors, by introducing strict rules like age verification for online content.
“This shows a precautionary approach, prioritising safety and ethical concerns even if it means more regulation. The US plan, especially under the Trump administration, leans towards deregulation and pushing for rapid AI development and global leadership. It focuses more on economic growth and less on stringent controls.
"Both countries share a goal of maintaining technological leadership, but the UK's method is more cautious, while the US prioritises speed and competitiveness. There's value in both approaches. The US could learn from the UK's emphasis on safeguarding users, which is essential to maintain public trust in AI. Meanwhile, the UK might consider the US focus on fostering innovation to avoid stifling development with too many restrictions. Balancing safety with growth will be key for future AI policies worldwide."
Rhys Merrett, Head of Technology at The PHA Group, said, "Both jurisdictions are actively pursuing strategies to become global hubs for AI innovation, moving beyond the exploration stage of AI application to actual implementation across the private and public sectors. President Trump's Executive Order, which revokes policies and directives that act as barriers to American AI innovation in favour of US leadership, has set the conditions to explore new AI innovations through decentralisation. This comes with added risk, namely a lack of safety and protections, and the challenge of AI innovation lacking direction as to how it should deliver outcomes.
"While nowhere near as extreme, the UK seems to have followed a somewhat similar approach, prioritising economic growth and innovation over the strict regulatory controls and ethical oversight that reflect the EU's approach. This was demonstrated when it declined to sign the Paris Summit Declaration on Inclusive and Sustainable AI. However, the UK knows it cannot compete with the US on a level footing, which puts it in an interesting position.
"In one instance, it could continue to follow the US approach, while integrating elements of the EU's strategy, which favours regulation to deliver the effective protection and roll-out of AI solutions through measured initiatives. It's important to acknowledge that as much as the US and UK are like-minded entities, they still consider each other as competitors, meaning they each need to forge different strategies."
Bill Conner, CEO, Jitterbit, said, "As we have seen with other disruptive technologies, the competitive AI arms race will soon impact the global economy while influencing technical innovation, productivity, market efficiencies and the actual GDP of countries.
"Investing in AI is critically important, but overly aggressive policy cannot compromise AI accountability, transparency and data privacy. To lead in AI, the U.S. government must lead with principles. Responsible AI governance isn't a side note; it's the foundation of lasting global influence.
"Accelerating infrastructure and easing environmental and export regulation bottlenecks may offer the U.S. government an early advantage, but long-term sustainability will depend on the measured and proactive implementation of AI accountability into critical systems at home and abroad.
"This isn't only a global AI arms race for processing power or chip dominance. It's a test of trust, transparency, and interoperability at scale, where AI, security and privacy are designed together to deliver accountability for governments, businesses and citizens. Without clear accountability frameworks, exporting AI risks creating vulnerabilities, turning a strategic asset into a liability, particularly when adversarial actors are quick to exploit weaknesses or manipulate systems to their advantage.
He continued, "The U.K. must find a way to protect fundamental rights without sidelining AI innovation. Regulation should serve as an enabler, not a constraint. The real opportunity lies in building AI accountability frameworks that promote secure, ethical data usage, without paralysing the digital business models that rely on it.
"The U.K. government's AI Opportunities Action Plan is a step toward turning AI ambition into actionable efficiencies, but trust will only come from consistent, transparent integration with AI accountability at the core. For governments, the challenge now isn't just strategic alignment, it's execution at scale, within clear guardrails."