When AI became mainstream, many companies hoped (and were promised) that it would ease staff workloads. The small, tedious day-to-day tasks, like drafting documents, summarising information, even debugging code, were among the ways AI promised to help. It was an appealing promise, for sure.
Harvard researchers decided to test that promise in practice. In an eight-month study at a US technology company with about 200 employees, they tracked how generative AI changed day-to-day work. The team observed staff in person two days a week, monitored internal communication channels and carried out more than 40 in-depth interviews across engineering, product, design, research and operations.
The company did not force anyone to use AI, although it paid for enterprise subscriptions to commercial tools. Even so, workers began to change how they worked. According to the researchers, AI tools did not, in fact, reduce workloads. If anything, they intensified them. Employees worked at a faster pace, took on a wider range of tasks and extended work into more hours of the day. Many did this without being asked.
Workers said AI made "doing more" feel possible and accessible. Many described "just trying things" with the tools. Over time, those small experiments added up. Employees absorbed work that might previously have justified extra hiring or outside support.
How Did AI Intensify Day To Day Work?
Harvard identified three main patterns. The first was task expansion. Because AI can fill gaps in knowledge, employees stepped into responsibilities that once belonged to colleagues. Product managers and designers began writing code. Researchers took on engineering tasks. People attempted work they might previously have deferred.
This expansion created extra demands elsewhere. Engineers spent more time reviewing and correcting AI-assisted work. Oversight did not happen only in formal code reviews. It also appeared in Slack threads and quick desk-side consultations. That guidance added to engineers' workloads.
The second pattern involved unclear boundaries between work and non-work. AI made it easier to begin tasks. Workers prompted tools during lunch, in meetings or while waiting for files to load. Many sent a "quick last prompt" before leaving their desk. Each action felt minor, but together they reduced natural breaks in the day and increased continuous engagement with work.
The third pattern was heavier multitasking. Employees managed multiple active threads at once, writing code while AI generated alternatives or running agents in the background. This created frequent attention switching and a growing list of open tasks. Staff felt productive, but also under greater pressure.
As one engineer told the researchers, "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more."
What Is The AI Verification Tax?
Concerns about rising workload are echoed in the corporate world. Greg Hanson, Group Vice President and Head of EMEA North at Informatica from Salesforce, described what he calls the AI Verification Tax.
He said: "AI intensifies workloads when companies fall foul of the AI Verification Tax. If AI can't be trusted to work unsupervised, the productivity promise collapses and instead adds time to the task. This leaves employees spending more time checking and correcting AI outputs than they would if they were doing the task themselves.
"This verification burden is compounded by a skills gap. 75% of data leaders tell us their workforce lacks data literacy, and 74% say more AI literacy training is needed to use AI responsibly. But it isn't inevitable. Where data is well governed and employees have the skills to challenge AI outputs, verification drops, decisions scale more safely, and productivity gains become real rather than theoretical."
Harvard's findings support that concern. Early productivity gains can hide growing cognitive strain as employees juggle more AI-enabled workflows. Over time, this can impair judgement, increase errors and make it harder to step away from work.
How Should Organisations Respond?
The researchers propose building what they call an "AI practice". This means setting norms around how AI is used, when work should pause and how far job scope should stretch.
They recommend intentional pauses before major decisions, sequencing work to avoid constant interruption and protecting time for human connection. Short check-ins and structured dialogue can counter the isolating effect of fast, AI-mediated work.