Robots That Understand The World Are Coming – Google DeepMind’s Latest Model Is A Big Step Closer
Wed, 15 Apr 2026 12:42:39 +0000

Watch enough robot demos and you start to notice what they all have in common: nothing goes wrong. Put the same system in the real world – a box in the wrong place, a light that changed, a gauge it had never seen before – and the gap between demo and deployment becomes very clear, very fast.

That chasm between demo and deployment has been the defining limitation of robotics for years. Google DeepMind’s release of Gemini Robotics-ER 1.6 on 13 April 2026 is a serious attempt to close it.

The model is described as a major improvement over its predecessor in spatial and physical reasoning, the cognitive layer that lets a robot understand where things are in three-dimensional space, how they relate to each other and what is likely to happen if it interacts with them. It was developed in collaboration with Boston Dynamics and is available via Google AI Studio, meaning startups building in the physical AI space can access it through an API without needing to train a model of comparable scale from scratch.

This is a research and infrastructure development, not a consumer product launch. Understanding what it actually does is more important than the headline.

 

What Gemini Robotics-ER 1.6 Actually Does

 

The model acts as a high-level reasoning layer for robots, sitting above the lower-level systems that handle physical movement. Rather than directly controlling a robot arm, it processes visual input from cameras, applies spatial reasoning and produces instructions that lower-level systems execute. Think of it less as the robot’s muscles and more as its ability to understand what it sees and decide what to do next.
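That division of labour is easier to see in code. The sketch below is a toy illustration of a layered control loop under those assumptions; the class names, method names and step format are invented for illustration and do not reflect the actual Gemini Robotics-ER or Google AI Studio API.

```python
# Hypothetical sketch of a "reasoning layer above motion control" architecture.
# All names here are illustrative stand-ins, not a real robotics API.

class ReasoningLayer:
    """Stand-in for a vision-language model such as Gemini Robotics-ER 1.6.
    It reasons over camera input and a goal; it never drives motors."""
    def plan(self, camera_frames: list[str], goal: str) -> list[str]:
        # A real model would ground these steps in the observed scene.
        return [f"locate object for goal: {goal}", "grasp object", "place object"]

class MotionController:
    """Lower-level system that turns one high-level step into movement."""
    def execute(self, step: str) -> bool:
        print(f"executing: {step}")
        return True  # a real controller reports success/failure from sensors

def control_loop(reasoner, controller, frames: list[str], goal: str) -> int:
    """Run the reasoning layer once, then hand each step to the controller.
    Returns the number of steps completed."""
    steps = reasoner.plan(frames, goal)
    completed = 0
    for step in steps:
        if not controller.execute(step):
            break  # a real system would re-observe and re-plan here
        completed += 1
    return completed

done = control_loop(ReasoningLayer(), MotionController(),
                    ["front_cam.jpg", "wrist_cam.jpg"], "move cup onto tray")
print(done)  # 3
```

The point of the structure is the boundary: perception and planning sit in one component that can be swapped for a stronger model, while execution stays with hardware-specific controllers.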

The specific improvements in version 1.6 paint a clearer picture than most model release notes. The model shows significant gains in precise pointing, which means identifying the exact spatial relationships between objects, such as which items would fit inside a container or which can be safely moved given weight or liquid constraints. It improves on counting occluded objects, which is the ability to reason about items that are partially hidden. It handles multi-view reasoning better, synthesising input from multiple cameras to build a more accurate picture of a dynamic scene.

The standout new addition is instrument reading: the model can now interpret analogue gauges, sight glasses and similar industrial instruments by combining zooming, pointing, code execution and general world knowledge. This capability was developed specifically with Boston Dynamics for facility inspection use cases. It represents a concrete step toward robots that can operate usefully in real industrial and physical environments without requiring every instrument to be retrofitted with a digital interface.

Safety reasoning has also been improved – the model outperforms its predecessor on adversarial safety benchmarks, including hazard identification from injury reports, by six to ten percentage points compared to Gemini 3.0 Flash. For robots operating in environments where humans are present, those numbers mean something in practice.

 

Benchmarks Are One Thing – Here’s The Bigger Picture

 

The robotics industry has a long history of impressive benchmark results that do not translate into useful products. What gives Gemini Robotics-ER 1.6 its significance is not just the performance improvements but what those improvements represent architecturally.

Current robotics systems are mostly brittle. They work well in highly structured environments where the variables are predictable and controlled – warehouses with standardised shelving, manufacturing lines where the same component appears in the same position every cycle. The moment conditions deviate, performance degrades rapidly. The path to truly useful general-purpose robots runs through what researchers call embodied reasoning, the ability to build and update a causal model of the physical world in real time.

Gemini Robotics-ER 1.6 moves in that direction. Better spatial reasoning, multi-view synthesis and the ability to read instruments it has never encountered before all contribute to a system that is less dependent on having seen an exact scenario during training. That is not the same as general-purpose autonomy, but it represents progress toward the kind of AI-driven automation that can operate in environments which have not been purpose-built for robots.

 

Good News, Bad News, Or Both? Depends On Your Startup

 

For startups building in the physical AI and robotics space, a Google DeepMind model at this level creates a familiar dynamic: a powerful foundation model becomes available as infrastructure, raising the floor for what is possible while simultaneously raising the bar for what a specialist startup needs to offer.

The API access via Google AI Studio opens a door that was previously shut for most teams. Startups working on vision-language-action models for dexterous manipulation, workflow planning or facility inspection can now build on top of a reasoning layer they could not have trained themselves. That accelerates development timelines and reduces the compute investment required to reach a viable product. In categories like warehouse automation, logistics and industrial inspection, this could substantially compress the time between prototype and deployment.

The competitive angle is whether access to the same foundation model levels the playing field or concentrates advantage with the best-resourced companies. BCG has noted that as VLA models become the programming paradigm for robots, the competitive advantage shifts toward those who can accumulate specialised domain data and hardware integration expertise, rather than those who can train the largest models.

A startup with proprietary data from a specific industrial vertical, or deep integration with a particular hardware platform, cannot simply be replicated by a better foundation model.

Google DeepMind is, in effect, building the infrastructure layer of physical AI. For the startups operating on top of that infrastructure, the question is the same one facing SaaS founders watching a platform expand into their category: is the foundation model a threat to your product, or the enabler of a better one?

 

Let’s Not Get Too Carried Away

 

Gemini Robotics-ER 1.6 is a big step forward, but it does not resolve the fundamental challenge of deploying robots in unstructured real-world environments.

Benchmark performance and reliable autonomous operation in the real world remain two very different things, and the distance between them in a factory, hospital or home is still considerable. Boston Dynamics has been building and deploying physical robots for decades and the limitations are well understood: hardware reliability, edge cases that training data failed to cover, the cost of failure in safety-critical contexts.

What the model does represent is a credible signal about the trajectory. The reasoning capabilities that make robots useful in the real world are improving at a pace that was not expected even two or three years ago.

For founders building at the intersection of AI and physical hardware, that signal is the relevant data point, not any single model release.

Why Are 41% Of Tech Workers Constantly Facing Monthly Burnout?
Wed, 15 Apr 2026 08:05:40 +0000

Manchester Tech Week has teamed up with Manchester Mind to look at mental health in tech, and honestly, the numbers explain why this conversation keeps coming back. Research from Bupa found that 41% of tech professionals experience burnout at least once a month. This is no longer the occasional bad week; it has become a pattern that keeps returning.

The research also found that 70% of tech leaders have dealt with mental health issues in the past year. That makes it a problem shared at every level, from junior staff to senior management. It also affects how teams work, especially when deadlines build and workloads grow.

Companies are losing time too: the data shows an average of 44 working days per employee are lost each year to burnout and people working while unwell. Poor mental health now costs UK companies £51 billion each year. Deloitte reports that workplace mental health support returns £4.70 for every £1 spent through better productivity and fewer absences.

Charlotte Ulett from Manchester Mind connects these findings to real experiences. She said, “We know that pressure, stress and imposter syndrome can lead to employee burnout and mental health struggles. That’s why it’s so important for us to work with MTW and its core events to support businesses to create healthier workplaces and build mental health resilience in their workforce. We are looking forward to speaking with the tech community to find out how we can better support improved mental health in the tech sector”.

 

What Is It About Tech Work That Wears People Down?

 

The “Behind the Screens” campaign shares lived experiences from people working in tech, and those voices describe how the job can feel. John Wallworth, Information Security Officer at System C Healthcare, said, “In an industry that never sleeps, remember that your capacity to help others is entirely dependent on how well you’ve sustained yourself.”

That sense of constant work does not end when the day finishes. Benjamin Tucker, Cyber Security Operations Lead at Forge Holiday Group, said, “Passion for your work is a strength, but taking it home every night turns a strength into a strain.”

 

 

Senior roles bring added responsibility. Stuart Peet, Associate Director of Customer Service at BOC LTD, said, “Stepping into senior roles taught me that it’s okay to feel overwhelmed; what matters is giving yourself permission to breathe.” That reflects how expectations can build over time and affect how people cope.

There is also a close connection between personal wellbeing and job performance. Jackson Dyson, Head of Data Platform at the Information Commissioner’s Office, said, “Looking after yourself first is the most effective way to look after your job.”

 

How Are People Coping When It Hits?

 

The campaign also shares how workers deal with difficult days when burnout starts to build. Rebecca Fox, Founder and CTO at Relentica, said, “Some days knock you down. Then I think about the people I love – and that thought lifts me up again.” It is a reminder that support often comes from outside work.

Manchester Mind is bringing more practical solutions into Manchester Tech Week 2026, which runs from 27 April to 1 May during Stress Awareness Month. Sessions will look at ways to handle workload and stress, using tools such as mindfulness and meditation to help people feel more in control at work and at home.

Gloria Sandrucci, Event Director at Manchester Tech Week, addressed the culture around this. She said, “In the tech industry, we talk a lot about innovation and performance, but not always about the pressures that sit behind them. This is something I feel strongly about, and this partnership is about creating space for more open conversations, supporting our community, and making sure wellbeing is part of how we define success in tech.”

Taken together, the research and the lived experiences explain why burnout keeps returning for many people working in tech.

UK Regulators Are Warning Banks About Claude Mythos Security Risks – Here Is What Fintech Startups Need To Know
Tue, 14 Apr 2026 14:02:56 +0000

The AI model designed to make financial infrastructure safer has ended up making regulators nervous.

According to the Financial Times, UK authorities are preparing to warn major banks, insurers and stock exchanges about cybersecurity risks linked to Anthropic’s Claude Mythos Preview model, with a formal briefing expected within the next two weeks through the Cross Market Operational Resilience Group. The Bank of England, the FCA, HM Treasury and the NCSC are all involved.

The model at the centre of this is the same one Anthropic deployed for Project Glasswing, its AI cybersecurity initiative that gave selective access to partners including Apple, Amazon, Microsoft, CrowdStrike and Google to find and fix vulnerabilities in their own systems. The idea was straightforward: use a powerful AI model to hunt for security flaws faster than human researchers can. The problem is that the same capability can just as easily be turned against the systems it was built to protect.

That dual-use reality is why UK regulators are moving quickly with a pre-emptive warning.

 

What Claude Mythos Preview Actually Does

 

Claude Mythos Preview is described as a frontier model with the ability to autonomously scan codebases for software vulnerabilities, including flaws that have gone undetected for years.

Anthropic’s own system card confirms this capability, and early results from Project Glasswing have been significant: the model has reportedly surfaced a 27-year-old flaw in OpenBSD, one of the most security-focused operating systems in widespread use.

That’s exactly what a security team wants to see, and exactly why regulators are paying attention. A model that can find a 27-year-old vulnerability in a hardened system, faster and more thoroughly than any human researcher could, isn’t a tool you want in the wrong hands. UK financial infrastructure runs on legacy code that has accumulated decades of technical debt. If Claude Mythos Preview can map those systems the way it mapped OpenBSD, the exposure is significant.

For context, the CMORG meeting isn’t being called because Anthropic has done anything wrong. Project Glasswing is explicitly a defensive initiative, but regulators are now grappling with a question the tech industry often avoids: what happens when the capability you built for defence becomes the template for offence?

Why UK Financial Regulators Are Moving Now

 

The timing is intentional – regulators want financial institutions to act before attackers can exploit the same AI-driven vulnerability discovery capability.

The concern is that once a capability like this exists and is known to work, the techniques propagate – state actors, criminal groups and opportunistic attackers all pay attention to what frontier AI models can do.

The CMORG briefing is expected to push banks and fintechs toward a new operational standard: treat powerful AI security tools not just as technical upgrades, but as high-risk components of operational resilience frameworks that require explicit governance, access controls and coordination with national cybersecurity authorities.

This exists alongside a pattern of UK regulatory activity that has been building for months. From the ICO’s guidance on agentic AI to the FCA’s increasing scrutiny of AI in financial services, UK regulators have been moving steadily toward a framework where powerful AI tools in regulated environments carry explicit compliance obligations.

Claude Mythos accelerates that timeline.

 

The Double-Edged Reality Of Powerful AI Security Tools

 

The Claude Mythos episode exposes a structural tension the AI industry has been slow to confront directly. The same capability that lets a defender find and patch a vulnerability faster than ever also lowers the barrier for an attacker to do the same thing.

Cybersecurity has always had this problem – penetration testing tools, exploit frameworks and vulnerability scanners have always cut both ways. What’s new with models like Claude Mythos Preview is the scale, the speed and the autonomy. A human penetration tester can scan one system at a time. An AI model can scan thousands simultaneously, without fatigue, without missing patterns a human might overlook.

The UK’s financial sector is a particularly attractive target precisely because of its interconnection. A vulnerability in one institution’s legacy infrastructure can have cascading effects across clearing systems, payment rails and settlement networks. The regulators enforcing these standards understand this better than most, which is why the CMORG briefing is happening at this level of seniority.

 

What This Means For Fintech Startups Right Now

 

For early-stage fintechs and any business running AI in regulated environments, the Mythos warning merits attention.

Regulators are moving toward expecting explicit governance around how AI models are used to scan or modify production code, tight access controls for high-risk AI tools connected to core systems, and third-party model risk assessments that account for autonomous vulnerability discovery, not just benchmark performance.

That last point needs unpacking – most current AI risk assessments for financial services focus on bias, explainability and data protection. The Mythos warning introduces a new category: what can this model do to the systems it’s connected to, and what could it do if misconfigured or accessed by the wrong party? For startups building on AI infrastructure in regulated sectors, that question now needs an answer before deployment, not after.
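To make that concrete, the sketch below shows one minimal form such a governance gate could take: a pre-deployment check that refuses an AI tool access to an environment unless it has a risk assessment on file and stays within an approved autonomy level. The policy fields, tool names and values here are hypothetical illustrations, not drawn from any CMORG or FCA guidance.

```python
# Hypothetical governance gate for AI tools in a regulated environment.
# Policy fields and tool metadata are illustrative assumptions.
RISK_POLICY = {
    "requires_risk_assessment": True,
    "allowed_environments": {"staging"},   # production access off by default
    "max_autonomy": "suggest",             # "suggest" < "patch" < "deploy"
}

AUTONOMY_LEVELS = ["suggest", "patch", "deploy"]

def may_deploy(tool: dict, environment: str) -> tuple[bool, str]:
    """Return (approved, reason) for connecting an AI tool to an environment."""
    if RISK_POLICY["requires_risk_assessment"] and not tool.get("risk_assessed"):
        return False, "no third-party model risk assessment on file"
    if environment not in RISK_POLICY["allowed_environments"]:
        return False, f"environment '{environment}' not approved for AI tooling"
    if AUTONOMY_LEVELS.index(tool["autonomy"]) > \
            AUTONOMY_LEVELS.index(RISK_POLICY["max_autonomy"]):
        return False, f"autonomy level '{tool['autonomy']}' exceeds policy"
    return True, "approved"

scanner = {"name": "vuln-scanner", "risk_assessed": True, "autonomy": "patch"}
ok, reason = may_deploy(scanner, "production")
print(ok, reason)  # blocked: production is not in the approved set
```

The design choice is deliberate: every check defaults to denial, so a misconfigured or unregistered tool fails closed rather than open — the property the regulators' framing of "high-risk components" implies.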

The real message is that the AI security arms race has arrived in UK financial services. The institutions that wait for regulators to force their hand will be behind. The ones that build governance frameworks now, before regulators require it, will be in a significantly stronger position when the formal rules land.

Canva, Adobe And Figma All Want To Own Your Creative Workflow – Where Does That Leave The Startups Already Building In This Space?
Mon, 13 Apr 2026 09:13:55 +0000

There’s a word for what’s happening in creative software right now, and it’s not “competition” – it’s consolidation.

In October 2025, Canva launched its Creative Operating System, the biggest evolution of its platform to date, positioning it as an all-in-one layer across design, video, email, forms and collaboration. Alongside this, it made the Affinity suite permanently free for all users, turning professional design tools into a standard part of its freemium stack. Adobe, meanwhile, embedded Photoshop, Adobe Express and Acrobat directly inside ChatGPT, turning its specialist tools into an always-available plug-in layer inside the most widely used AI interface in the world. Figma went public at over $19 billion and immediately signalled that investors expect it to grow well beyond its original screen-design core.

Each of these moves, on its own, would be significant. Together, they describe something more structural: three of the most powerful platforms in creative technology all racing to become the place where every piece of creative work is made, managed and published. Brand assets, video, marketing campaigns, product design, content production – all of it, inside their platform.

For the startups building in this space, the picture isn’t pretty. Illustration tools, motion design platforms, brand asset libraries and content production workspaces are no longer standalone categories. They are features on another company’s pricing page, bundled at low or zero marginal cost to capture the mid-tier creative market that all three platforms are fighting over.

 

Three Platforms, Three Very Different Bets

 

Canva didn’t just update its product. It tried to replace the entire category.

The platform launched a proprietary AI design model trained on its own assets, generating editable multi-layer outputs rather than flat images, and integrated AI assistants across its workspaces. Making the Affinity suite permanently free is a direct challenge to Adobe’s subscription pricing and turns professional-grade layout and photo editing tools into a default part of Canva’s free tier. There is no question about the target: the educators, SMBs and marketers who might otherwise pay for multiple specialist tools or Adobe’s Creative Cloud.

Adobe’s response is strategically different. Rather than trying to out-Canva Canva on accessibility, Adobe is becoming infrastructure. Embedding Photoshop and Acrobat inside ChatGPT means Adobe’s tools are present wherever users already are, reducing the friction that historically pushed casual users toward simpler alternatives. For startups whose value proposition is in quick social media editing or simple document workflows, this is a direct threat because Adobe has become harder to avoid.

Figma’s IPO reinforced its positioning as the central platform for collaborative product design, but the investor expectation is growth beyond that core. Figma has expanded via plugins, integrations and embedded capabilities that let teams manage design systems, hand off specs to developers and generate basic marketing assets, all within the same environment. That expansion puts pressure on every best-of-breed tool in illustration, motion and brand asset management that isn’t already plugged into Figma’s platform.

So, Your Entire Category Is Now A Feature

 

The dynamic unfolding in creative software has a name in venture capital: category collapse.

It happens when a platform large enough to bundle a ‘good enough’ version of a specialist capability does so, not because it’s better than the standalone product, but because the cost of switching away from the platform is higher than the quality gap. It’s a pattern familiar across the software industry, and creative tech isn’t exempt from it.

The harsh reality for founders in this space is that the platform bundling their category probably isn’t trying to kill their product – it’s trying to retain its own users. When Canva adds basic motion templates, it doesn’t need to match a dedicated motion design tool. It just needs to be good enough that a mid-tier creative team doesn’t open a second app. That standard is lower than most specialist founders assume, and it’s dropping as AI reduces the cost of building ‘good enough’ features.

The founders most at risk are those whose entire value proposition sits in a single feature a larger platform can replicate. The ones best positioned are those who have built around a specific workflow, a specific user type, or a level of depth that a general-purpose tool structurally can’t match, because matching it would compromise the simplicity that makes the platform work for everyone else.

 

The Playbook For Surviving Platform Consolidation

 

The responses that are working fall into three categories.

The first is going deeper: animation-centric tools that model physics, brand management platforms with governance controls sophisticated enough for enterprise compliance, and illustration environments built around specific professional workflows. These are hard for general platforms to replicate because doing so properly would make the platform harder to use for the majority of users who don’t need that depth.

The second is becoming part of the platform rather than competing with it: building as a Figma plugin, a Canva integration or an Adobe extension repositions the specialist tool from standalone competitor to pro-tier add-on. The revenue model changes, but so does the distribution problem. Instead of fighting for attention against a platform with hundreds of millions of users, the specialist tool rides on top of that distribution and captures the subset who need more.

The third is targeting the verticals that the big platforms don’t serve well: education, local government communications, specialist creative industries, regulated sectors where brand compliance requirements are too specific for a general-purpose tool. These verticals often have smaller total addressable markets, but they also have lower platform competition and higher switching costs once a product is embedded in a team’s workflow.

 

Deep Enough To Survive, Or Just Waiting To Be Acquired?

 

The truth is: it depends on the niche.

The creative tools market has a long history of specialist products surviving platform consolidation by going further than the platform will follow. Professional illustration tools have coexisted alongside Adobe for decades, not because Adobe couldn’t build a competitor, but because the depth of their feature set serves a workflow Adobe wasn’t optimised for.

What changes now is the speed at which AI lowers the cost of ‘good enough’. When building a serviceable version of a specialist feature required months of engineering work, the platform calculus was different. A capable AI-assisted implementation can now be shipped in weeks. That compresses the window specialist startups have to establish depth before a good-enough alternative lands inside the platform they’re competing with.

The founders who’ll come out of this well are those who understand that depth alone isn’t enough. The advantage is in the workflow, the community, the integrations, the data: the things that don’t transfer when a user switches platforms. A deeply capable tool someone can pick up and put down is still a product. A tool that becomes embedded in how a team works is a business. Right now, that’s the only kind of creative tech startup worth building.

These Tech Jobs Are Paying The Most In 2026
Thu, 09 Apr 2026 12:00:58 +0000

Global pay data for tech work puts a spotlight on exactly where skills earn the highest rewards. Data from Hays’ Tech Talent Explorer compares average salaries in 34 countries, covering both permanent jobs and contractor rates. A small group of countries lead the rankings, helped by high hiring needs and a limited supply of specialist talent.

The report explains that pay levels come from supply and demand. AI is changing how tasks are completed, but it does not take away the need for skilled workers. It handles routine work and leaves people to the important duties that need human oversight.

This creates competition between countries trying to attract top talent. Higher salaries and high contract rates are used to bring in experienced professionals for both long term roles and short term projects.
 

Top Countries For Permanent Tech Salaries (Hays Tech Talent Explorer):

 

  1. United States
  2. Switzerland
  3. Denmark
  4. United Arab Emirates
  5. Saudi Arabia

For contractor day rates, the ranking changes:

  1. Switzerland
  2. Denmark
  3. Australia
  4. Germany
  5. Japan

These rankings come from Hays’ analysis of mean average salaries across all assessed roles. Countries like Switzerland and Denmark appear in both lists, pointing to high pay for both employees and contractors.

The US is at the top for permanent roles, largely thanks to its massive tech sector. Germany and Japan, on the other hand, rank higher for contract work, where companies pay more for short-term expertise.
 

How Does The UK Compare Here?

 
The UK ranks 15th out of 34 countries for average tech salaries and 16th for contractor day rates, according to Hays. This places it in the upper half of the global table.

The report says that AI does not seem to be cutting wages. Pay levels come from supply and demand as well as how important or essential certain jobs are. Areas such as cloud computing typically have high salaries.

Software roles are more exposed to AI tools, and even then the effect is limited. Tasks change, but jobs do not disappear. Workers are still needed to guide systems, check outputs and manage complex work.

David Curtis, STEM Senior Managing Director at Hays UK and Ireland, said, “The findings of our report clearly show that AI isn’t replacing human talent but amplifying it. As automation accelerates routine tasks, the roles that thrive are those grounded in judgement, coordination and strategic oversight.”
 

 
He added, “For professionals, this creates significant opportunities to build future ready careers by developing adaptable, strategic and tech augmented skill sets.”

In the UK, certain roles earn more than others. Solutions Architects earn an average of £84,249, Security Engineers earn £75,702 and DevOps Engineers earn £67,532. Contractor rates are also high, with Java Developers earning £695 per day, Cloud Engineers £684 and Security Engineers £659.
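As a rough illustration of how those contractor day rates compare with the permanent figures, the snippet below annualises a day rate under an assumed number of billable days. The rates are the Hays UK figures quoted above; the 220-day assumption is illustrative and not part of the Hays data.

```python
# Day rates below are the Hays UK figures quoted in the article;
# the billable-days figure is an illustrative assumption.
BILLABLE_DAYS = 220  # assumed days billed per year, before gaps between contracts

uk_day_rates = {
    "Java Developer": 695,
    "Cloud Engineer": 684,
    "Security Engineer": 659,
}

def annualised(day_rate: int, days: int = BILLABLE_DAYS) -> int:
    """Gross annual equivalent of a contract day rate. Ignores tax, pension,
    holiday and sick pay, which permanent salaries effectively include."""
    return day_rate * days

for role, rate in uk_day_rates.items():
    print(f"{role}: £{annualised(rate):,} gross per year")

# e.g. a Security Engineer at £659/day grosses £144,980 over 220 billed days,
# against the £75,702 permanent average - before the costs contracting carries.
```

The comparison is deliberately gross-to-gross: the headline gap narrows considerably once contractor overheads and the gaps between contracts are priced in.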

Lower paid roles such as Project Manager at £41,736 and Data Analyst at £44,415 tend to have larger talent pools and more standard entry points.
 

Which Tech Jobs Are Paying The Most In 2026?

 
Data from ZTM, based on job listings from Indeed, LinkedIn and Web3 Career, ranks the highest paying tech jobs in 2026. The list places AI related jobs at the top, followed by data and infrastructure positions.

Before looking at the ranking, one pattern stands out: jobs related to AI, data and cloud systems sit near the top, reflecting the level of skill needed and the limited supply of experienced workers.

There is also a big pay gap within the list: the highest paid role earns more than $80,000 above the lowest, which confirms that specialisation typically improves salaries.

The ZTM data broadly mirrors the Hays numbers: jobs that need more technical expertise tend to earn higher pay in many markets.
 

Highest Paying Tech Jobs In 2026 (ZTM Data):

 
AI and Machine Learning Engineer: $195,425
Data Engineer: $178,769
Blockchain Developer: $167,893
AI Developer: $155,257
Cloud Engineer: $154,788
Software Engineer: $143,556
DevOps Engineer: $141,226
Cybersecurity Professional: $126,653
Full Stack Developer: $115,887
Mobile Developer: $109,976
Product Designer (UX/UI): $109,533
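The pay-gap claim made earlier can be checked directly against this list. A quick sketch in Python, with the figures copied from the ZTM table above (purely illustrative):

```python
# Salaries as listed in the ZTM ranking above (USD, annual averages).
salaries = {
    "AI and Machine Learning Engineer": 195_425,
    "Data Engineer": 178_769,
    "Blockchain Developer": 167_893,
    "AI Developer": 155_257,
    "Cloud Engineer": 154_788,
    "Software Engineer": 143_556,
    "DevOps Engineer": 141_226,
    "Cybersecurity Professional": 126_653,
    "Full Stack Developer": 115_887,
    "Mobile Developer": 109_976,
    "Product Designer (UX/UI)": 109_533,
}

# Gap between the top and bottom of the list.
gap = max(salaries.values()) - min(salaries.values())
print(f"Pay gap between highest and lowest: ${gap:,}")  # → $85,892, over the $80,000 mentioned
```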

AI and machine learning engineers earn the highest salaries. This has to do with the need for systems that can process data and automate advanced tasks.

Data engineers come next, since they build and manage the systems behind those tools. Blockchain developers also rank high, pointing to continued use of decentralised tech.

Jobs such as mobile developers and product designers earn less than the top positions, though they still come in above $100,000 on average.

The money in tech is clearly still good, and even though, as mentioned, AI has changed how tasks are completed, skilled professionals still control how the work is done.

The post These Tech Jobs Are Paying The Most In 2026 appeared first on 91̽.

]]>
Apple Has Delayed Its Foldable iPhone Again, Is The Tech Giant Losing Its Edge In Hardware Innovation? /tech/apple-has-delayed-its-foldable-iphone-again-is-the-tech-giant-losing-its-edge-in-hardware-innovation/ Thu, 09 Apr 2026 09:30:44 +0000 /?p=148906 Apple’s foldable iPhone has a problem, and it’s not the kind you fix by ordering more components. Reports this week,...

The post Apple Has Delayed Its Foldable iPhone Again, Is The Tech Giant Losing Its Edge In Hardware Innovation? appeared first on 91̽.

]]>
Apple’s foldable iPhone has a problem, and it’s not the kind you fix by ordering more components.

Reports this week, citing Nikkei Asia and supply chain sources, confirm that the company’s first foldable device is encountering more engineering and production issues than expected, with first shipments potentially pushed from late 2026 into early 2027. Apple shares fell between four and five per cent on the news, making the company one of the Dow’s worst performers that day.

These aren’t component shortage issues; they’re described as design and engineering snags, and that detail is important. It means the delay isn’t just about supply chain timing but about the product itself not being ready in the way Apple needs it to be. The company is reportedly also grappling with tight memory chip supply across its high-end lineup, which adds pressure to get the foldable right rather than ship it fast.

On the surface, this looks like a straightforward setback. But there’s a more interesting question underneath it: is Apple being disciplined, or is it hesitating? And in a market where Samsung, Huawei and a wave of Chinese manufacturers have been shipping foldable phones for years, does the distinction still matter?

 

Apple’s ‘Late But Dominant’ Playbook Has Limits

 

Apple has built its entire product history on entering markets after everyone else and then reshaping them. It wasn’t first to market with tablets, smartwatches or wireless earbuds, and yet it won all three categories decisively. The obvious read on a delay like this is that Apple is simply doing what Apple does: taking the time to get it right rather than rushing a product that would embarrass the brand.

The foldable market is different in a way that makes that playbook harder to execute. Samsung, Huawei and several Chinese manufacturers have been iterating on foldable hardware for years. The category has had time to mature, and it still hasn’t broken through to mass adoption. Durability concerns, price points well above $1,500 and inconsistent app optimisation across the folded and unfolded states have all kept foldables as a premium niche rather than a mainstream category.

That context makes Apple’s delay harder to read. The market it’s entering late is one that nobody has fully cracked yet, including the manufacturers with years of head start. The delay doesn’t necessarily mean Apple is behind – it might mean the underlying hardware and manufacturing technology simply isn’t ready for the kind of product Apple needs to ship.

 

 

Why The Delay Says Something About The Whole Category

 

The specific nature of the issues tells you something. Hinge mechanisms, display durability and manufacturing yield for foldable screens are legitimate engineering problems that haven’t been solved to consumer-grade standards even after years of commercial shipping.

The fact that Apple, with its engineering resources and supplier relationships, is still working through these issues is a signal that they’re structural challenges in the underlying technology, not just Apple-specific execution problems.

Analysts suggest that even if Apple manages a late 2026 launch, limited availability and pricing likely well above $2,000 will keep the device aspirational rather than mainstream. That creates a specific challenge: Apple’s most powerful market effects tend to happen when it ships something at sufficient volume to shift developer behaviour, accessory markets and consumer expectations simultaneously. A low-volume, ultra-premium foldable doesn’t do that.

The share price reaction, a four to five per cent drop despite strong underlying iPhone revenue, suggests investors are reading this the same way. They’re not worried about one quarter’s results. They’re pricing in the risk that a delayed foldable cedes ground in the premium segment to rivals at exactly the moment when the category might finally start gaining traction.

 

What App Developers And Startups Should Do While They Wait

 

For app developers and startups building for mobile, Apple’s delay is effectively a reprieve.

It buys more time to think seriously about what foldable-class screens actually change about user behaviour, rather than scrambling to adapt UIs to a rushed Apple-specific launch window.

The better question for founders isn’t ‘when will the foldable iPhone arrive?’ It’s ‘what problems does a larger, foldable screen actually solve for my users?’ The form factor creates clear opportunities in productivity, multitasking and content consumption, but the startups most likely to win in this space will be the ones that solve concrete user problems rather than the ones that simply adapt existing mobile apps to a bigger canvas.

Apple’s caution is also a useful signal for hardware startups and founders betting on the next form factor. If the world’s best-resourced consumer hardware company is still wrestling with the engineering fundamentals, the market may not be as close to inflection as the hype suggests.

The true edge in the next form factor cycle will likely come from solving user problems that current hardware can’t address, not from racing to be first on a platform that hasn’t fully arrived yet.

Apple will ship a foldable iPhone eventually. Whether it arrives in time to define the category or simply join it remains a question only the timeline can answer.

]]>
Before You Cut The Cord, What Are VoIP’s Limitations? /tech/before-you-cut-the-cord-what-are-voips-limitations/ Thu, 09 Apr 2026 08:17:27 +0000 /?p=148888 Lower bills, more flexibility, a host of features – VoIP has a lot going for it. But it’s not without...

The post Before You Cut The Cord, What Are VoIP’s Limitations? appeared first on 91̽.

]]>
Lower bills, more flexibility, a host of features – VoIP has a lot going for it. But it’s not without its quirks, of course. VoIP – or Voice over Internet Protocol – has completely changed the way we think about phone calls. Put simply, VoIP enables you to make calls over the Internet instead of using traditional copper wiring.

It eliminates a lot of the costs involved with these traditional phone systems, including hardware and maintenance. It’s also much cheaper to make international calls – a big plus if you work remotely with global teams.

Businesses are saving a huge amount of money, remote teams are staying connected from wherever they are and features that once cost a fortune are now bundled in for free. It would be a no-brainer to make the switch, right?

VoIP is excellent in the right set up, that much is certain. But it does come with limitations worth knowing about before you toss your old phone system in the trash.

 

 

Your Calls Are Only As Good As Your Internet

 

VoIP needs a stable Internet connection to work, so if you don’t have one, you won’t be getting the full benefit out of your VoIP system. With traditional phone lines, call quality has almost nothing to do with what else is happening on your network. With VoIP, it’s an entirely different story: every call competes for bandwidth with everything else you do online.

Whether your connection is slow, congested or unstable, you will feel it on every call: choppy audio, robotic-sounding voices and those maddening lag pauses where you and the other person accidentally talk over each other.

It’s frustrating at the best of times, especially if you are on a client call and trying to maintain some standard of professionalism.

 

Bandwidth Isn’t The Only Thing That Matters

 

It’s also a misconception that having fast Internet means that your VoIP system will work perfectly. Speed definitely helps, but the two main culprits that usually cause a problem are latency and jitter.

Latency is the delay in data travelling back and forth while jitter is the inconsistency in those delays that you would typically hear on the call.

Even if you have a blazing-fast connection, your call quality can still be terrible if your network is managed incorrectly. This is why most VoIP providers will recommend that you set up Quality of Service (QoS) on your router. Essentially, this tells your network to prioritise voice traffic over everything else to keep your calls smooth.
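The difference between latency and jitter is easier to see with numbers. Here’s a minimal sketch using hypothetical ping round-trip times; it’s an illustration of the two measures, not a real network monitoring tool:

```python
# Hypothetical round-trip times from five pings, in milliseconds.
rtts_ms = [42.0, 45.0, 41.0, 60.0, 43.0]

# Latency: the average delay itself.
latency = sum(rtts_ms) / len(rtts_ms)

# Jitter: how much consecutive delays vary (mean absolute difference
# between successive round trips) - the inconsistency you actually
# hear as choppiness on a call.
diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"average latency: {latency:.1f} ms, jitter: {jitter:.2f} ms")
```

As a rough rule of thumb often cited by VoIP providers, jitter much above 30 ms is where calls start to sound noticeably bad, even when the average latency looks fine.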

 

 

Power Cuts Don’t Care About Your Meeting Schedule

 

This is something that most people don’t think about until it’s too late. VoIP phones need both electricity and the Internet to work whereas your copper-wire landline drew power directly from the phone line itself. Even if the lights went out, it would still work. VoIP doesn’t have that luxury.

If the power goes out, your VoIP system goes dark with it. For most households, it’s usually a minor inconvenience because you could use your phone’s hotspot to connect to the Internet. For businesses, it’s a much bigger problem, especially if you are based in an area prone to outages.

There are backup power solutions like UPS (Uninterruptible Power Supply) units, but they do add cost to your overall communication setup.

 

Emergency Calls Can Become Complicated

 

From a safety standpoint, this is one of those limitations that genuinely matter. When you dial an emergency number, traditional landlines automatically share your location with emergency services. VoIP systems can struggle with this.

Because VoIP is Internet-based, you can make calls from anywhere in the world with the same number. It’s wonderful for flexibility but it creates a headache for those emergency services trying to figure out where you are.

To get around this, check what your specific VoIP provider offers for emergency location services, and make sure that your team knows about the limitation, especially if they travel for work often.

 

More Entry Points Means More Security Risk

 

Just like any other Internet-based app or system, VoIP is exposed to cybersecurity risks. Call interception can compromise sensitive conversations, while toll fraud and phishing attacks can run up costs or knock your phone system offline entirely.

Of course, none of this means that your system is inherently unsafe; it just means that security needs to be a priority. This is where encryption, strong passwords and a reputable provider all go a long way.

With that being said, it does add another layer of responsibility which was non-existent with traditional phone lines.

 

Should You Still Switch To VoIP?

 

Almost certainly, yes. Especially if you do have a reliable Internet connection and a provider who knows what they’re doing. The cost savings, flexibility and features offered by VoIP are hard to argue with and the limitations are very manageable once you become aware of them.

The key thing is to go in with eyes open. VoIP is not a plug-and-play replacement for your old phone system, but it rewards a little preparation: your network setup, backup plans, emergency call arrangements and security. Once you’ve covered that groundwork, VoIP is a fantastic tool to use.

As is the case with most technology, it’s not about it being perfect – it’s about it working for your situation and you knowing what you’re signing up for. For most people and businesses, it’s a worthwhile move.

]]>
What Is Space Mining? /tech/what-is-space-mining/ Wed, 08 Apr 2026 09:31:04 +0000 /?p=148839 If you’re anything like me, the term “space mining” probably makes you think of a blue-collar occupation straight out of...

The post What Is Space Mining? appeared first on 91̽.

]]>
If you’re anything like me, the term “space mining” probably makes you think of a blue-collar occupation straight out of a sci-fi movie. We’re talking aliens and astronauts wearing hard hats, pickaxes in hand, hammering away at extra-terrestrial matter in zero gravity.

And, believe it or not, you actually wouldn’t be far from the truth. Space mining is basically what it sounds like: digging for treasure but not on Earth. We’re talking asteroids, the Moon and maybe even Mars one day.

Of course, the loot isn’t just plain old gold and platinum (although those actually are in space, but that’s a story for another day) – it’s also water, rare metals and other resources that could help humans live and work in space (according to NASA).

Instead of launching everything from Earth at jaw-dropping costs, the idea is that spacecraft, mostly robots at first, would hop over to these space rocks, pick out the good stuff and either use it in orbit or ship it straight back home. Think of it like turning the solar system into a giant warehouse for humanity’s next big adventure.

 

It’s More Than Just a Sci-Fi Dream

 

Sure, asteroid gold rushes sound cool, but the real magic of space mining isn’t hauling treasure back to Earth like Long John Silver of “Treasure Island”. Rather, it’s building a space economy that can run itself.

Water can be turned into rocket fuel, metals can be used to build spacecraft and ice can support astronauts on long missions. The less we have to lug from Earth, the easier (and cheaper) deep-space exploration becomes. And if we can cut costs, we can do more, explore more and learn more.

In short, space mining is about making space sustainable – in theory.

 

Why Are We Talking About Space Mining Now?

 

For a long time, space mining was mostly nerdy white papers and sci-fi novels – much like watches that could make phone calls and self-driving cars were once a figment of our overactive and overzealous imaginations.

But suddenly, it’s a whole lot more real.

 

Startups Are Jumping In With Both Feet

 

Space isn’t just NASA’s playground anymore. Private companies are building probes, mapping asteroids and even figuring out how to fuel satellites in orbit. It’s basically the startup version of “let’s iterate in space”.

 

Rockets Are Getting Cheap(er)

 

Let’s not kid ourselves: rockets aren’t cheap and they never will be. But they certainly are cheaper than they used to be, and that’s a significant development.

Reusable rockets, tiny satellites, better engines – getting into orbit no longer costs a small fortune. According to NASA, that makes asteroid scouting, lunar prospecting and in-orbit refuelling more plausible than ever.

 

The Space Economy Is Booming

 

From satellites to lunar bases, demand for space-based resources is growing fast. Fuel depots, construction materials and life-support resources all need to exist somewhere other than Earth.

And space mining is one way to make that happen.

 

Startups Could Have a Slice of the Pie

 

Space mining isn’t just about rockets and robots – it’s where AI, robotics, materials science and venture capital collide. Companies could specialise in asteroid prospecting, orbital refuelling, lunar construction or even in-space manufacturing. Imagine an entire Silicon Valley… but in orbit. That’s quite something, isn’t it?

There’s also a strategic side to consider. Rare metals and water in space could change supply chains, reduce reliance on Earth and even shift geopolitical power.

Space mining may be in its early stages, but the excitement is real. If it works, it could make deep-space missions cheaper, help humans live beyond Earth, and create entirely new markets.

For startups and tech innovators, it’s a playground of infinite possibilities – quite literally. Mining asteroids might sound like science fiction, but the groundwork is already being laid for a future where humanity isn’t just visiting space. It’s thriving there.

]]>
AI Is Finally Solving The Online Returns Crisis, And It’s Worth Billions To Whoever Gets There First /tech/ai-is-finally-solving-the-online-returns-crisis-and-it-is-worth-billions-to-whoever-gets-there-first/ Wed, 08 Apr 2026 09:30:11 +0000 /?p=148816 Fashion e-commerce has a dirty secret: somewhere between a quarter and half of everything sold online comes straight back. Across...

The post AI Is Finally Solving The Online Returns Crisis, And It’s Worth Billions To Whoever Gets There First appeared first on 91̽.

]]>
Fashion e-commerce has a dirty secret: somewhere between a quarter and half of everything sold online comes straight back.

Across Europe, around 26% of all online apparel orders are sent back (shoes are even worse at 27%), and in markets like Germany and Switzerland the figures hit 44% and 45% respectively. Each returned package costs a retailer between €20 and €45 in transport, handling and restocking once all the costs are added up.

McKinsey estimates that up to 30% of fashion items bought online in Europe are returned, most of them because the shopper bought multiple sizes and kept only the best fit. The industry has tried charging for returns, limiting refund windows and improving size guides. None of it has moved the needle meaningfully.

A new wave of AI startups thinks it knows why: every previous fix tried to discourage returns rather than prevent them. If about 70% of fashion returns are driven by sizing mismatches, according to data cited by sizing AI companies, then the real solution is giving shoppers enough confidence before they buy that they don’t need to hedge with multiple orders. That’s the problem virtual try-on technology is now well equipped to address, and the commercial prize for getting it right is enormous.

The technology has been around in various forms for a while, but what’s changed is the quality. Earlier iterations were novelty features that showed a flat image of a garment overlaid on a photo. What’s emerging now incorporates fabric physics, body movement modelling and personalised size prediction, all tools that meaningfully close the gap between seeing something on a screen and knowing how it’ll actually fit.

 

Why This Isn’t The Same Try-On Tool You Ignored Five Years Ago

 

The virtual try-on category has split into two distinct approaches, and both are attracting serious investment.

The first is AI-powered sizing: tools that build a personal fit profile based on body measurements, past purchases or photographs and use that profile to predict which size will fit best for a specific garment from a specific brand. French startup Fringle takes this approach, matching a user’s body measurements with garment dimensions directly and claiming to cut returns for brands including Maje. Zalando, the German fashion giant, has rolled out a similar tool where customers take two photos in form-fitting clothes to generate a size profile for future purchases.

The second approach is visual: virtual fitting rooms that let shoppers see how a garment, pair of shoes or accessory looks on their own body or a realistic avatar. Platforms including Virton, WANNA and iAugment are positioning themselves as plug-in solutions for existing e-commerce sites across the UK and Europe. The more sophisticated versions now model how fabric drapes and moves, addressing one of the main complaints about earlier try-on tools, which showed how something looked but not how it behaved.

Early pilots and early-adopter case studies suggest meaningful impact. AI-driven sizing and virtual try-on have cut fashion-related return rates by around 30 to 40% in some implementations, according to retail technology analysts. Retailers also report higher conversion rates and larger basket sizes when try-on tools are available, meaning the technology can function as both a cost reduction and a revenue driver.

For markets with already high baseline return rates, the margin improvement potential here is substantial.
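To put rough numbers on that margin improvement, here’s a back-of-envelope sketch using the figures quoted above. The order volume and the midpoints chosen are illustrative assumptions, not data from any retailer:

```python
# Illustrative returns economics for a hypothetical fashion retailer,
# using the figures quoted in the article.
orders = 100_000            # assumed annual online apparel orders (made up)
return_rate = 0.26          # ~26% of EU online apparel orders are sent back
cost_per_return = 30.0      # within the EUR 20-45 per-package cost range
reduction = 0.35            # midpoint of the 30-40% cut seen in some pilots

baseline_cost = orders * return_rate * cost_per_return
savings = baseline_cost * reduction
print(f"baseline returns cost: EUR {baseline_cost:,.0f}")
print(f"potential saving from AI sizing: EUR {savings:,.0f}")
```

Even at this modest scale, a mid-single-digit-million-euro revenue business would be looking at hundreds of thousands of euros a year in avoidable handling costs, which is why the figure reshapes the P&L rather than just trimming it.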

 

The Free Returns Culture That’s Now A Trap

 

The UK sits in an interesting position: British shoppers are accustomed to generous return policies, with most major UK and European retailers offering free returns within 30 to 90 days.

That’s been a competitive necessity rather than a strategic choice, and it’s created a customer expectation that’s now very difficult to walk back. Charging for returns risks customer backlash, and tightening refund windows risks losing customers to competitors who haven’t. The only exit from that trap is removing the reason most customers return in the first place.

A 2026 survey of European retailers found that 36% see keeping pace with AI as a major challenge, with legacy system integration and skills gaps cited as the main obstacles. That hesitation is understandable but increasingly expensive. The retailers who move first on virtual try-on and AI sizing will build proprietary fit data on their customers that becomes harder to replicate over time. The ones that wait will face the same return rates they’ve always had, with the added pressure of competing against brands whose unit economics have become structurally better.

For e-commerce startups building in fashion and apparel, what that means in practice is more direct. A startup that embeds strong virtual try-on or AI sizing from the start builds a structural advantage in returns costs that a late adopter will struggle to close. It’s also a meaningful differentiator in fundraising conversations, since investors who understand retail unit economics know that a 30% reduction in return rates changes the entire P&L shape of a fashion e-commerce business.

 

The Race Nobody Has Won Yet

 

The commercial prize here is real, and it’s still wide open.

No single company has established itself as the dominant platform for virtual try-on or AI sizing at European scale. Zalando’s internal tool gives it an advantage within its own platform, but the wider market, covering hundreds of independent fashion brands, mid-size retailers and e-commerce platforms that don’t have the resources to build their own solution, is still there for the taking.

The startups most likely to win are the ones that solve the integration problem: making it easy for any retailer to add AI-powered fit tools without a complex implementation, and building enough proprietary data over time to make their size predictions meaningfully more accurate than anything a new entrant could replicate quickly. That’s a data flywheel problem as much as a technology problem, and the window for establishing that flywheel is narrowing.

Online retail returns are a multibillion-euro drag on an industry that can’t afford to keep absorbing it. The tools now exist to address the root cause directly, and what the market is waiting for is the company that makes them easy enough, accurate enough and affordable enough to become standard.

Whoever gets there first won’t just be solving a logistics problem – they’ll own a category.

]]>
What Happens When A Data Centre Reaches End Of Life? /tech/what-happens-data-centre-reaches-end-life/ Wed, 08 Apr 2026 09:05:09 +0000 /?p=149051 Data centres have a lifespan. The servers inside them have an even shorter one. As cloud adoption accelerates and hardware...

The post What Happens When A Data Centre Reaches End Of Life? appeared first on 91̽.

]]>
Data centres have a lifespan. The servers inside them have an even shorter one. As cloud adoption accelerates and hardware generations compress, the UK is facing a growing question that rarely makes the headlines: what happens to all this infrastructure when it is switched off?

The answer, for too many organisations, is not much. Equipment sits in powered-down racks for months or years. Eventually it gets written off, palletised, and sent to a recycler whose methods may or may not meet the security and environmental standards the situation demands.

For an industry built on precision, the end-of-life process is often remarkably haphazard.

 

The Scale Of Data Centre Decommissioning In The UK

 

The UK is Europe’s largest data centre market, with over 450 facilities and growing. Hyperscalers are building new capacity at pace, but older facilities are simultaneously reaching obsolescence. Corporate data centres, the in-house server rooms and colocation footprints that still underpin much of UK business IT, are being consolidated, migrated to cloud, or shut down entirely.

Each decommissioning event generates a significant volume of hardware: servers, storage arrays, networking switches, UPS systems, cabling, and cooling infrastructure. A mid-sized project can involve hundreds of servers and thousands of individual drives, each one containing data that must be accounted for.

 

The Data Security Challenge

 

The primary risk in any data centre decommission is data. Every server, every storage array, every SAN: these are not just hardware assets but repositories of business-critical and often personal data. The obligation under GDPR does not end when the power cable is pulled.

Professional data erasure at data centre scale requires a systematic approach. Each drive must be individually identified, logged, wiped to NIST 800-88 standards or physically destroyed, and issued with a certificate of destruction that ties back to a specific asset and serial number. For organisations in regulated sectors such as finance, healthcare and defence, the audit trail is not optional. It is the difference between compliance and a reportable breach.

The complexity increases with hybrid environments. A single data centre may contain drives encrypted with different key management systems, drives with firmware-level issues that prevent software wiping, and legacy hardware running operating systems that modern erasure tools do not support. A credible decommissioning partner needs to handle all of these scenarios, not just the straightforward ones.

 

The Circular Economy Opportunity

 

Data centre hardware retains significant value at end of life. Enterprise servers that cost tens of thousands of pounds new can still command meaningful prices on the secondary market. RAM, processors, GPUs, and NVMe drives are all individually saleable components. Even chassis and power supplies have recycling value.

The organisations that treat decommissioning as a disposal problem rather than a recovery opportunity are leaving substantial sums on the table. A well-managed decommissioning process recovers value from every viable component, offsets the cost of the project itself, and in many cases generates a net positive return.

This is not theoretical. The global market for refurbished IT hardware is projected to exceed 200 billion dollars by 2030, driven by sustainability mandates, budget constraints, and the simple reality that enterprise-grade equipment often has useful life well beyond its first deployment.

What A Proper Decommissioning Process Looks Like

 

A structured data centre decommission follows a clear sequence. It begins with a full asset audit, documenting every device, its serial number, its location, and its data classification. This audit forms the basis for the chain of custody that will follow.

Hardware is then removed in a controlled sequence, typically starting with drives and data-bearing components. These are processed on-site or transported under secure chain of custody to a certified processing facility. Data destruction is completed and verified before any device enters the remarketing or recycling stream.

Non-data-bearing infrastructure, such as racks, cabling, cooling units and UPS batteries, is catalogued and either resold, recycled or disposed of through appropriate waste streams. The entire process is documented in a decommissioning report that provides the audit trail regulators and insurers expect.
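The chain of custody described above is, at heart, a record per data-bearing asset that ties a serial number to a verified disposition. A minimal sketch of what such a record might look like follows; the field names, serial numbers and certificate IDs are hypothetical, not any vendor’s actual schema:

```python
# Illustrative per-drive audit record for a decommissioning chain of custody.
# All identifiers below are invented for the example.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DriveRecord:
    serial: str                       # drive serial number
    asset_tag: str                    # location/asset reference, e.g. rack and slot
    method: str                       # e.g. "NIST 800-88 purge" or "physical destruction"
    verified: bool = False
    certificate_id: Optional[str] = None
    logged_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def issue_certificate(record: DriveRecord, cert_id: str) -> DriveRecord:
    """A drive only leaves the decommissioning stream once destruction is verified."""
    record.verified = True
    record.certificate_id = cert_id
    return record

drive = DriveRecord(serial="ZA123456", asset_tag="DC1-R04-S17", method="NIST 800-88 purge")
issue_certificate(drive, cert_id="CERT-0001")
print(drive.serial, drive.certificate_id, drive.verified)
```

The point of structuring it this way is that no record can enter the remarketing or recycling stream while `verified` is false, which is exactly the guarantee auditors look for.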

 

The Regulatory Direction

 

UK regulations around e-waste and data security are tightening, not loosening. The WEEE Regulations already place obligations on businesses to ensure electronic waste is processed through authorised facilities. GDPR enforcement continues to sharpen, with the ICO increasingly scrutinising end-of-life data handling as part of broader compliance investigations.

For data centre operators and the businesses they serve, the direction of travel is clear. Decommissioning is becoming a compliance event, not a facilities management task. The organisations that invest in doing it properly, with certified partners, documented processes and full chain of custody, are the ones that will avoid the regulatory and reputational risks that come with getting it wrong.

]]>