In part 1, we looked at how geopolitical tension and AI are shaping cyber threats facing the UK. There’s also another aspect: how agentic AI is altering behaviour across attacks, organisations and leadership decisions.
Why Are Attacks Starting Earlier And Lasting Longer?
Attack timelines now begin well before any breach becomes known to a target organisation. Early access gives attackers time to understand systems, identities and internal processes.
Joseph Rooke from Recorded Future described this development. 鈥淭his is changing in 2026, with nation states relying on quiet pre-positioning, credential theft, and identity access to maintain continuous leverage and enable rapid escalation with little warning.鈥
That early access can continue for extended periods without detection, allowing detailed preparation before any disruptive action takes place. The outcome is a more calculated and controlled style of cyber activity.
Agentic AI supports this behaviour by handling ongoing tasks across long campaigns, which reduces the need for constant human input during operations.
Rooke also spoke about what seems to be a developing tactic. 鈥淎dversaries are also using AI to pivot away from code-based exploits toward prompt-based manipulation.鈥 This introduces risk into systems that process text, images and other shared content.
How Is Decision Making With Companies Being Tested?
Cybersecurity decisions now carry stronger legal consequences for leadership teams. Senior leaders are expected to demonstrate that controls actively prevent fraud and misuse.
Phil Cotter from SmartSearch explained the consequences for boards. 鈥淒irectors will soon face personal criminal liability for fraud that filters through compliance cracks. At that point, ‘we had a process’ is not enough. Firms need to prove it worked.鈥
This expectation forces organisations to examine whether controls operate effectively under real conditions, not just in policy documents.
Cotter also described the difference between attack capability and human review. 鈥淭he criminals targeting UK firms are deploying AI to build synthetic identities and exploit gaps at a speed and scale that human review simply cannot match.鈥
That reality is forcing leadership teams to reconsider how automation and oversight are structured across compliance and security functions.
What Is Happening With Everyday Tools And Workflows?
Agentic AI is becoming present in software and platforms that employees use throughout the working day. This creates new opportunities for attackers to interact with users in familiar environments.
Infosecurity Europe鈥檚 2026 research found 64% of UK cyber leaders expect agentic AI to have the biggest effect on cyber defence over the next three years.
Javvad Malik from KnowBe4 described how these attacks operate. 鈥淎gentic AI is changing the picture because it does not just generate convincing content, it can help automate whole parts of the attack chain from research, targeting, impersonation, adaptation and persistence.鈥
These campaigns can maintain context across interactions, which makes communication feel more credible to the person receiving it.
Malik also explained how this affects employees. 鈥淎gentic systems make it easier to exploit trust, authority and urgency at scale, so staff are being targeted not just with better phishing, but with more convincing manipulation techniques.鈥
What Else Needs To Change Within Organisations?
Basic weaknesses continue to create opportunities for attackers, even as new technology becomes more advanced.
Justin Kuruvilla from Risk Ledger explained where attention is needed. 鈥淢uch of their success still hinges on basic tactics like phishing, which AI is making harder to detect.鈥
Organisations are expected to strengthen controls, improve governance and understand which services are essential for operations.
Kuruvilla added that organisations need 鈥渞obust controls and processes鈥 and a detailed understanding of critical services required to run the business effectively.
Agentic AI is now active across both attack methods and defensive tools. The UK now has persistent cyber activity shaped by geopolitical tension, and organisations need disciplined execution. Here’s what experts think about the impat:
Our Experts:
- Sarah Pearce, Partner, Hunton Andrews Kurth
- Hannah Baumgaertner, Head of Research, Silobreaker
- Vladyslav Marchenko, Solo Founder, 9-site AI content network
- Andy France, Co-Founder & Director, Prevalent AI
- Ankur Anand, CIO, Harvey Nash
- Merlin Gillespie, CTO, Cybanetix
- Jorge Monteiro, CEO, Ethiack
- Mayur Upadhyaya, CEO, APIContext
Sarah Pearce, partner, Hunton Andrews Kurth
![]()
“Malicious actors are increasingly using AI to enhance email scams, with malicious emails becoming more credible, containing fewer language errors, and being highly personalised. This is making it harder for UK organisations and individuals to identify scams and is driving higher success rates.
“We are also seeing AI used in more sophisticated cyberattacks involving deepfake-generated audio, video, and images, including real-time impersonation and AI-generated documents used to legitimise fraudulent requests. These techniques are taking traditional phishing to a more advanced and harder-to-detect level, reducing the effectiveness of traditional warning mechanisms.
“AI has lowered the barrier to entry for cybercrime, enabling less sophisticated actors to carry out attacks at greater scale. Weaknesses in AI-enabled systems, such as chatbots, are being more easily identified and exploited to extract sensitive or commercially valuable information.
“Beyond direct attacks, AI is currently being misused to generate and distribute false or manipulated content, increasing risks around fraud, reputational harm, and wider disinformation. Together, these developments are increasing the scale, speed, and sophistication of cyber threats, while raising ongoing challenges around detection. We continue to question the adequacy of existing regulatory frameworks, and the latest regulatory and legislative developments demonstrate awareness of this threat landscape.”
Hannah Baumgaertner, Head of Research, Silobreaker
![]()
“Cybersecurity now operates within a broader geopolitical and hybrid warfare context. Cyber operations are increasingly intertwined with political conflict, military strategy, and economic coercion. Nation-state actors often exploit the same infrastructure as cybercriminals, blurring the lines between crime and state activity and complicating attribution.
“Consequently, cybersecurity in 2026 is no longer confined to protecting networks. It is central to safeguarding national sovereignty, economic resilience, military capability, supply chains, space infrastructure, and emerging technologies. The future of cyber defence will depend on proactive deterrence, integrated public鈥損rivate cooperation, AI-enabled speed, and updated legal and diplomatic frameworks capable of keeping pace with rapidly evolving threats.”
Vladyslav Marchenko, Solo Founder, 9-site And 15 n8n agents
![]()
“The main AI threat to business today isn’t hackers with ChatGPT. It’s your own teams blindly trusting the model output. I had five days where the agent was writing “everything works” in reports, but in production the site was running with empty titles. Logs green, dashboard green, reality 鈥 red. In a corporate environment that kind of silent failure is a compliance nightmare, especially under GDPR.
“The geopolitical layer adds a second risk. All the major LLMs 鈥 OpenAI, Anthropic, Google 鈥 are American. A British company running an AI agent is technically routing data through American infrastructure. Data residency rules become paper. I spend a thousand dollars a month on tokens and every time I think: if tariffs or sanctions against LLM providers drop tomorrow, my network stops in a day.
“The third thing 鈥 AI as an amplifier of skills, not a replacement. Well-trained security teams get faster with AI. Weak teams drive themselves deeper into a dead end because they don’t know how to formulate the question and don’t verify the output. UK cybersecurity isn’t breaking from a lack of AI tools, it’s breaking from handing them to people without fundamentals.”
More from Artificial Intelligence
- Akeneo Unveils 鈥淭he Great Restack鈥 As Agentic Commerce Reshapes Global Retail Architecture
- AI Whisperer, Prompt Engineer, Chief AI Officer. Welcome To The Weirdest Jobs Market In Tech History
- Can鈥檛 Find Your Car? AI Can Now Help You Locate Your Parking Spot
- How Is The UK鈥檚 Cybersecurity Being Impacted By AI And Geopolitics?
- Top 10 Women In AI To Watch In 2026
- The Risks Of Male-Dominated AI: Could AI Widen The Gender Gap Instead Of Closing It?
- Novo Nordisk Went All-In On OpenAI 鈥 Is Big Pharma About To Eat HealthTech’s Lunch?
- Hotspring Develops Leading Hybrid AI And Manual Workflows For Roto And Unveils Brand New 2.0 Interface
Andy France, Co-Founder & Director, Prevalent AI
![]()
鈥淐yber risk is systemic, escalating and intrinsically linked to geopolitics and transnational crime. The emergence of new AI models offers a great opportunity to build better capability to protect yourself, but it also means that attackers can utilise it to operate at a far greater pace and reach than we have seen before. We don鈥檛 know where the equilibrium between AI-enabled attacks and AI-enabled defence sits right now – but rest assured, organisations need to be aware of this evolution.
鈥淭hat said, too many businesses continue to pay lip service to cyber/digital risk and fail to plan for the eventuality of an event happening. The reality of preparing a business to be robust is based on hard work addressing the basics, and then holding yourself to account by testing yourselves. AI is important for many reasons – but so is access control, system patching, backups, incident response planning and staff training.
鈥淪ad to say – even today in 2026 – there are a lot of businesses not addressing the things they could do to reduce the risk of a cyber incident becoming an existential crisis for the business.
鈥淎s Benjamin Franklin is reported to have said, 鈥楩ailing to plan is planning to fail.鈥欌
Ankur Anand, CIO of Harvey Nash
![]()
鈥淎rtificial intelligence is turbocharging cyber-attacks with unprecedented speed and sophistication. Global tensions have turned cyberspace into an indirect battleground 鈥 we鈥檙e seeing threat actors less interested in money and more in causing chaos and uncertainty. This convergence of AI and geopolitics is creating a perfect storm for UK cybersecurity, as a wave of AI-driven exploits and state-aligned incursions relentlessly tests our resilience.
“Our best response is to elevate cyber professionals from traditional defenders to strategic 鈥楢I guardians鈥 鈥 they are now building AI governance, securing intelligent systems, and using automation to supercharge productivity. Put simply, AI isn鈥檛 replacing cyber experts 鈥 it鈥檚 pushing them into higher-value work, and the organisations that empower security teams with AI will emerge the safest and most productive in this new era.
“Those who treat cyber as a back鈥憃ffice control will fall behind; those who master AI鈥憀ed security will decide who survives the next decade.鈥
Merlin Gillespie, CTO, Cybanetix
![]()
鈥淭he UK has been asleep at the wheel for over a decade in that only 3% of UK businesses and 21% of larger businesses are certified with the Cyber Essentials (CE) scheme despite it having been around since 2014. In the Cyber Security Breaches Survey, which is a telephone interview with the person responsible in UK businesses for a breach, only 12% of people were aware of CE, down from 16% in 2022.
“That means the baseline of security controls that the Government wanted the UK to have just aren鈥檛 there and even the NCSC鈥檚 own director said that the figures are nowhere near where they need to be. A government review in 2022 concluded that such voluntary frameworks fail to drive accountability and rely on organisations to voluntarily change behaviour so are ineffective, yet the CE scheme is being updated this year and rewritten despite the fact there鈥檚 no clear strategy on how to increase uptake.
“With respect to AI, Security Minister, Dan Jarvis, has called on AI companies to step up and create a 鈥榥ational cyber shield鈥 of AI-driven cyber defences to defend against 鈥榝rontier AI鈥 . But telling UK businesses to defend themselves against state backed and AI-augmented threat actors is like me asking my neighbour to mow the public park with a pair of nail scissors. The M&S attack alone apparently cost 拢300 million. The Governments’ entire annual support for SMEs is 拢30million.
“If the government wants to outsource its digital perimeter to SMEs it has to fund that such as through tax credits, meaningful grants and procurement incentives. It鈥檚 something the industry has been asking for when it convenes at events like CYBERUK but such calls have been falling on deaf ears. Now we are seeing a mandate in the form of the Cyber Security and Resilience Bill but that has arrived three years behind NIS2 and without a fiscal instrument it will be a compliance burden. Cyber resilience should be treated in the same way as we approach counter terrorism at major airports – with help.”
Jorge Monteiro, CEO, Ethiack
![]()
“For all the hype surrounding the ‘Mythos moment’, AI-driven vulnerability discovery is not a 2026 phenomenon. In fact the trajectory has been visible for at least a year. But most security leaders missed it, because the signals were scattered across research blogs, academic conferences, and various start-ups.
“But the launch of Mythos has served as a wake-up call. More people now realise that most of the vulnerabilities that get exploited are found and exploited by hackers before the defender community even know they exist.
“Quarterly penetration tests, and the static defences of traditional cybersecurity, were built for a world in which hackers’ exploitation of vulnerabilities was slow, expensive and required rare human expertise. All of these assumptions are now dead.
“When median TTE (Time to Exploit) was 10 months, those assumptions might have been reasonable. But median TTE is now less than one day, as cybercriminals increasingly use AI to scan for weaknesses at machine speed.
“The takeaway from the Mythos announcement shouldn鈥檛 be about Mythos. The takeaway is that TTE is getting shorter and shorter, and that organisations need to be able to respond to threats faster than ever before.”
Mayur Upadhyaya, CEO, APIContext
![]()
鈥淭he UK is entering a phase where AI and geopolitics are compounding risk, not just adding to it. Frontier models like Anthropic鈥檚 Mythos highlight a new dynamic, where vulnerabilities can be discovered and chained together at speed, turning isolated weaknesses into systemic exposure.
“AI is acting as a force multiplier, not by inventing entirely new attacks, but by increasing their speed, scale, and accessibility. At the same time, more of the UK鈥檚 infrastructure now depends on interconnected platforms, from cloud to identity to SaaS, which creates shared points of fragility. What鈥檚 changing is the nature of failure. Machines fail silently, and they fail at machine speed, especially as more workflows become automated or agent-driven. That makes detection harder and impact faster.
“This is a natural evolution of infrastructure, but it raises the bar for resilience. Organisations need to assume continuous pressure and focus on whether critical services are actually functioning, not just whether they appear secure.鈥