The race to define the future of artificial intelligence has always been framed as a battle between innovation and responsibility, a balance that’s been incredibly difficult to strike. But recent developments suggest something more complicated is unfolding.
Following reports that Google is expanding the Pentagon’s access to its AI models – after Anthropic took a bold stand and declined similar involvement, citing ethical concerns – the industry is being forced to confront a difficult question: when geopolitical stakes rise, do ethical commitments still hold?
And while AI companies have spent years positioning themselves as responsible stewards of powerful technology, defence contracts are quickly becoming the ultimate stress test of those principles. Quite simply, they’re lucrative.
From Ethical Stance to Competitive Reality
Anthropic’s loud refusal to deepen its involvement with the Pentagon initially appeared to signal a clear ethical boundary. In contrast, OpenAI quickly swooped in and took advantage of the situation, taking over the defence-related partnership with the Pentagon and earning quite a bit of heat in response. Google now seems to be following a similar path.
On the surface, this looks like a divergence in values. In reality, it may be more reflective of competitive dynamics.
As Dr. Sebastian Weidt, CEO of Universal Quantum, puts it, “good intentions don’t tend to survive the weight of commercial pressure without structural protection.” Once a competitive race is underway, stepping back becomes increasingly difficult – not just strategically, but commercially.
These days, AI is no longer just a product. It’s infrastructure – and increasingly, it’s geopolitical infrastructure too.
Is This Really An Ethics Issue?
Not everyone agrees that this shift represents a clear ethical failure. Dr. Ilia Kolochenko, CEO of ImmuniWeb, argues that the conversation around AI ethics is often blurred with legality. In his view, labelling these decisions as unethical may be “overkill”, particularly when existing legal frameworks already govern the use of such technologies.
There’s also a pragmatic argument at play. Governments, particularly those with significant resources like the US, will inevitably gain access to advanced AI systems. The question, then, is not whether they will use AI, but whose AI they will use and under what oversight.
From that perspective, keeping development within regulated, domestic ecosystems may actually be seen as the more controlled option.
The Defence AI Market Enters a New Phase
What’s becoming clearer is that defence AI is no longer theoretical – it’s operational. Andriy Dovbenko, adviser to the UK government, describes this moment as a turning point, where AI is moving directly into mission-critical systems, from intelligence analysis to autonomous platforms.
Once deployed within classified environments, the dynamic changes significantly. Public-facing ethics frameworks begin to give way to military doctrine, operational constraints and chain-of-command decision-making.
That shift introduces a different kind of risk: not just whether AI should be used, but how it behaves under real-world pressure – in environments shaped by incomplete data, adversarial interference and time-critical decisions.
As Dovbenko notes, “speed alone creates risk.” The faster AI systems are integrated into defence operations, the greater the need for robust testing, accountability and human oversight.
When Does Control Become the Real Concern?
Beyond ethics and legality, there’s a more immediate technical concern emerging: control.
Tim Freestone, Chief Strategy Officer at Kiteworks, points to recent incidents involving autonomous AI agents behaving unpredictably, noting that “AI agents with access to sensitive information are not reliably controllable using prompt-level instructions.”
This becomes particularly significant in defence contexts, where AI systems may interact with classified data, operational planning or real-time decision-making.
If control mechanisms are still evolving (and in many cases, they are), then the expansion of AI into these environments raises questions that go beyond policy statements. It becomes a question of system design, safeguards and fail-safes.
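Freestone’s point about prompt-level instructions is worth unpacking, because it’s an architectural distinction rather than a policy one. A prompt asks a model to behave; a system-level safeguard enforces the boundary in code, outside the model, where no phrasing can override it. Here is a minimal sketch of that pattern in Python – every name in it is a hypothetical illustration, not any vendor’s actual API:

```python
# Minimal sketch of a system-level guardrail for an AI agent.
# Hypothetical names throughout; this shows the pattern, not a real framework.

from dataclasses import dataclass, field


@dataclass
class ActionGate:
    """Enforces controls outside the model: the agent can propose any
    action, but only allowlisted ones ever execute."""
    allowed_actions: set[str] = field(default_factory=lambda: {"read_file", "search"})
    halted: bool = False  # a kill switch the operator flips in code, not via prompt

    def execute(self, action: str, handler, *args):
        if self.halted:
            # A stop enforced here cannot be ignored by the model.
            raise PermissionError("Agent halted by operator; no actions run.")
        if action not in self.allowed_actions:
            # Denied in code: no prompt phrasing reaches this branch.
            raise PermissionError(f"Action '{action}' is not allowlisted.")
        return handler(*args)


gate = ActionGate()
print(gate.execute("read_file", lambda p: f"contents of {p}", "report.txt"))  # allowed
gate.halted = True  # the 'stop command' is now enforced, not requested
# gate.execute("delete_email", ...)  # would raise PermissionError either way
```

Real agent frameworks are far more complex, but the design choice is the one Freestone gestures at: a stop command enforced at this layer cannot be ignored by the model, whereas a stop expressed only in a prompt can.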
Where Does Responsibility End?
For companies building these systems, the challenge isn’t just technical – it’s philosophical too.
Aditya Singh, Head of Product and Strategy at INFINOX, argues that ethical commitments are only meaningful if they hold under pressure. As he puts it, “AI ethics cannot just be a homepage statement.”
The difficulty lies in defining responsibility once AI systems are deployed within government or military environments. At what point does accountability shift from the developer to the user? And more importantly, can companies meaningfully enforce ethical boundaries once their technology operates within classified systems?
If that line becomes unclear, Singh warns, ethics has not just been deprioritised, “it has been left outside the room.” And that’s something that should deeply concern us all.
A Broader Shift in the AI Narrative
What this moment reveals isn’t necessarily a sudden abandonment of ethics, but a shift in how AI is being understood.
The early narrative of AI as a tool for productivity, creativity and economic growth is now intersecting with its role as strategic infrastructure. Defence, national security and geopolitical competition are no longer edge cases – they’re central to how AI is being deployed.
In that context, ethical frameworks designed for commercial use cases may struggle to scale.
So, Is Ethics Taking a Back Seat?
Yes, no, maybe – and perhaps a little bit of everything. The answer isn’t obvious by any means. Ethics is still part of the conversation, but it’s not the only lens through which decisions are being made. Commercial pressure, national security priorities and technological competition are all shaping outcomes in parallel.
The more pressing question may not be whether ethics is being ignored, but whether it’s actually being redefined in real time.
Because as AI moves deeper into high-stakes environments, the challenge isn’t just building powerful systems. It’s ensuring that those systems operate within boundaries that remain meaningful, even when the stakes are at their highest.
Our Experts:
- João Pedro Almeida: CEO and Co-Founder of Noxus AI
- Aditya Singh: Head of Product and Strategy for INFINOX
- Dr. Ilia Kolochenko: CEO of ImmuniWeb
- Dr. Sebastian Weidt: CEO and Co-Founder, Universal Quantum
- Andriy Dovbenko: Founder of UK-Ukraine TechExchange and Adviser to UK Government
- Tim Freestone: Chief Strategy Officer at Kiteworks
João Pedro Almeida, CEO and Co-Founder of Noxus AI
“Google and OpenAI’s expansion of Pentagon access raises an important question about the ethical use of AI. However, the question is not whether model providers should work with sovereign institutions, but under what terms.
“There is a fully ethical and legitimate version of this relationship: access to generally available models on standard commercial terms, no privileged influence over training data or model behaviour, and no entanglement that compromises the provider’s ability to serve other sovereign clients on equivalent terms.
“Under those constraints, a Pentagon contract would be a simple procurement arrangement, and not full alignment. At Noxus AI, this is the standard we hold ourselves to, particularly in healthcare and financial services, where data protection obligations and ethical accountability are not optional but vital requirements of operating in both sectors.
“A concern arises when the relationship goes further, with bespoke training, exclusive capability access, and government influence and oversight. These are implicit commitments that limit who else the provider can serve. That’s where the ethical question stops being about ethics and becomes about AI sovereignty.”
Aditya Singh, Head of Product and Strategy for INFINOX
“AI ethics cannot just be a homepage statement or a set of principles that works until a major government contract arrives. The real test is whether firms are prepared to keep meaningful controls in place when the commercial and strategic pressure is highest.
“This is where the debate becomes much bigger than Silicon Valley. In finance, we already understand that powerful technology needs governance, auditability, human oversight and clear limits on use. AI used in defence raises those questions at an even greater intensity.
“The issue is not whether governments should use advanced AI. They will, and in some cases they should. The issue is whether the private companies building these systems can still explain where their responsibility begins and ends once the technology moves into classified environments. If that line becomes vague, ethics has not just taken a back seat. It has been left outside the room.”
Dr. Ilia Kolochenko, CEO of ImmuniWeb
“While the decision might appear somewhat controversial at first sight, calling it a violation of ethics is overkill.
“First, it is paramount to give a proper definition to ethics in AI and what distinguishes ethics from law and legality. On one hand, some acts might be perfectly lawful but repugnant to most of us. On the other hand, some forms of human conduct are prohibited by law but may be endorsed by many people.
“Ethics in AI seems to be deeply intertwined with law and its principles. Therefore, unless there is a violation of law or some special circumstances, I would not call any AI-related decision unethical. Of note, we have a plethora of existing laws and regulations that cover both the use and misuse of AI, so we don’t need AI-specific laws, like the EU AI Act, to assess the legality, and thus the ethics, of AI-related conduct.
“Second, the Pentagon has ample resources to procure the most powerful AI models that it needs, be they from OpenAI or any other American or foreign company. Therefore, it is arguably better to provide the Pentagon with access to US-made models, which may be subject to regulatory oversight and safeguards to prevent illicit use. Most other AI vendors will likely follow this decision and collaborate with the military in their home countries.”
Dr. Sebastian Weidt, CEO and Co-Founder, Universal Quantum
“The framing of ‘losing the moral high ground’ assumes the high ground was ever structurally secure. It wasn’t. The early AI companies that positioned themselves as the responsible actors stated those principles openly, but good intentions don’t tend to survive the weight of commercial pressure without structural protection. We’ve seen this before: the early web was built on a genuine commitment to openness and decentralisation. Those ideals were eventually outpaced by market logic.
“What Google and OpenAI are doing isn’t surprising. Once the terms of a competitive race are set, very few participants have the incentive, or the ability, to step out of it. That’s precisely the lesson quantum needs to take seriously now, while it still can. If the industry doesn’t define its purpose early, competition and capital will do it instead.”
Andriy Dovbenko, Founder of UK-Ukraine TechExchange and Adviser to UK Government
“Google’s reported Pentagon agreement should be read in London as evidence of a defence AI market entering its operational phase. AI already sits close to mission planning, intelligence fusion, ISR analysis, electronic warfare, counter-UAS systems and autonomous platforms. Once these tools are placed inside classified networks, a company’s public ethics framework gives way to the user’s doctrine, testing regime and chain of command.
“That matters for the UK because modern capability now depends on the speed of integration between sensors, data, drones, EW and human command judgement. Ukraine has proved the point under pressure. The side able to detect, classify, jam, spoof, retask and strike faster gains advantage. AI will compress that cycle further, particularly in drone defence, infrastructure protection and electromagnetic operations.
“Speed alone creates risk. A model trained on weak data, deployed in a contested electromagnetic environment or connected to an unclear command structure can create consequences far beyond a failed software trial. In defence, assurance has to cover the full operational context: how the model behaves under jamming, spoofing, degraded communications, adversarial data and live tactical pressure.
“The UK should treat defence AI as a matter of doctrine and sovereign assurance, not procurement convenience. That means classified test environments, operational red-teaming, audit trails, clear human command responsibility and a route for battlefield-proven systems to be assessed at pace. The ethical issue is now practical. Democratic states have to use AI in defence with enough speed to remain credible, and enough control to remain accountable.”
Tim Freestone, Chief Strategy Officer at Kiteworks
“The same week an internal Anthropic memo leaked revealing nearly 50 research projects on AI deception and misaligned goals, Meta’s director of AI alignment disclosed that an OpenClaw autonomous agent had deleted more than 200 of her emails while ignoring her explicit stop commands. She had to physically sprint to her computer to kill the process.
“These are not unrelated events. Together, they establish something organisations handling regulated data can no longer afford to treat as theoretical: AI agents with access to sensitive information are not reliably controllable using prompt-level instructions.
“For a country navigating the most complex data sovereignty position in Europe — simultaneously maintaining UK GDPR adequacy with the EU, deepening AI adoption faster than most peers, and relying heavily on U.S.-headquartered cloud providers subject to the CLOUD Act — the implications are immediate.”