According to Hiscox, 53% of applicants who looked for work during the past year let an AI builder draft at least part of their CV. The tools fire out tailored phrases, arrange keywords and sort layout in minutes. That speed tempts candidates in every sector, from catering to coding, to swap the blank page for a prompt.
Automation keeps working after the document is saved. Hiscox records that 45% called on software during online tasks or assessments, and 29% leaned on it in remote interviews. The insurer adds that 59% view this help as fair play, although 41% think it puts rivals at a disadvantage and 42% feel it misleads hiring teams. The numbers show a workplace where digital know-how sits side by side with ethics anxiety.
Recruiters notice that CVs created with the help of chatbots share a specific tone and suspiciously flawless grammar. Employers can start to treat plain wording as a warning sign, pushing applicants toward AI-assisted prose regardless of personal preference. In turn, job hunters feel forced to learn prompt craft even when they would rather write alone, creating a cycle that pulls more people into the machine-written style.
Does AI Create Exaggerations On CVs?
The report notes that 38% confess to lying on their CV. Among that group, 53% inflate work history, 41% list hobbies they do not practise, 33% overstate skill levels and 14% even invent referees. The ease of cutting and pasting text blurs the line between polish and fiction.
Chatbots make that easier by filling gaps with confident language. Hiscox says 37% would leave an exaggeration generated by software untouched. A polished paragraph may well make it past screening, yet have nothing to stand on once the candidate must prove the claim in person. Recruiters speak of applicants who ace written rounds, then stumble when asked for real-life examples.
This bluff damages trust at the gateway to employment. A CV that reads like poetry can hide flaws and gaps, turning early selection into guesswork. The longer such fabrications travel through the hiring process, the higher the cost in wasted time and lost staff morale.
To illustrate, Hiscox launched an advert called “The Perfect Candidate”. The fictional CV shines at first glance, then tiny print reveals sham degrees and inflated metrics written with a bot. The punchline lands when the viewer realises every detail came from an AI generator, showing exactly how sleek language can disguise fraud.
What Rules Has The UK Government Set For AI And CVs?
A government guide on generative AI tells applicants to tap tools as helpers, not ghost-writers. It welcomes chatbots for career ideas, company research and grammar checks, but warns against pasting entire statements straight into an application.
The guide draws a line at exams and interviews. Using AI during live or pre-recorded assessments breaks the code because answers must show each person’s own knowledge. Preparing with practice questions is fine, but feeding real test items into a bot is not.
Experts advise candidates never to paste non-public company data into a public model, since that text could be stored or shared elsewhere. Authenticity and confidentiality go together in the handbook.
“AI can help many candidates put their best foot forward. Using tools to sharpen language, tailor experience and improve presentation helps candidates level up their application, but it needs to be used carefully and in the right parts of the process,” says Pete Treloar, Chief Underwriting Officer at Hiscox UK.
Treloar added, “While it’s easy to understand why candidates use AI to enhance their chances of success, when it’s not used well it can hinder an application.
“Large language models, for example, can produce content that appears generic and impersonal, and responses that don’t truly reflect a candidate’s skills, experience and suitability for a role.
“For recruiters and hiring managers, who find themselves unable to accurately judge a candidate’s ability, it’s particularly problematic.
“If they’re passing on incorrect information or poor recommendations to their clients, not only does it damage relationships, but it can lead to more serious claims, and that’s why insurance is so important.”