A new study from the University of Cologne found that people are 15% more likely to lie when AI encourages them to do so.
The research looked at how people respond when AI gives them advice. Participants were asked to roll a die and report the result. They could earn more money by lying about their score.
Those who got dishonest advice from an AI were far more likely to cheat than those who got no advice. When the AI encouraged honesty, however, people largely ignored it. In short, dishonest advice carried weight, but honest advice did not.
Professor Irlenbusch said the research shows how easily people can be pushed into unethical behaviour when a machine gets involved. "When AI encourages lying, people cheat more," he said.
Does Knowing It's AI Make A Difference?
The researchers tested whether people behaved differently when they knew the advice came from a machine. The answer was no. In fact, knowing the suggestion came from an algorithm sometimes made people feel less guilty.
This idea, known as algorithmic transparency, was supposed to make people more careful. Instead, it might have had the opposite effect. When advice feels impersonal, coming from a programme rather than a person, people seem more willing to cross ethical lines.
The study also found that human advisors could encourage dishonesty too, but AI made the issue more serious because of its reach. It can send dishonest advice instantly to thousands of people and repeat it endlessly without getting tired or questioning its message. That speed and reach make it far harder to control.
Why Does AI Make Cheating Easier?
AI systems don't have emotions or a sense of guilt, which makes their advice feel neutral. When a computer says "you could lie to earn more," it doesn't sound malicious… it just sounds practical. That tone may make people feel that twisting the truth is less wrong.
The researchers said this effect could show up in daily life, especially as more apps and services use AI. From online games and financial advice tools to workplace chatbots, AI is everywhere. If it gives advice that rewards dishonesty, the ripple effect could be quite big.
The study鈥檚 authors discussed how moral shortcuts encouraged by machines can quietly spread. A person who feels fine about lying once may find it easier to do it again. Over time, this can change social habits in ways that are hard to trace back to a single cause.
What Can Be Done About It?
Professor Irlenbusch and his team said more work is needed to understand how AI influences moral decision-making. They believe policymakers and researchers should develop ways to limit how AI systems influence honesty.
Transparency alone won't fix the problem. The study found that even when people knew the source of advice, their behaviour didn't improve. This means designers of AI tools need to build ethical safeguards directly into how these systems give recommendations.
Accountability is another issue. When an AI system encourages dishonesty, who is responsible? The developer, the user… or the company that released it? These questions are growing more urgent as AI tools become part of everyday life.
Can Humans Stay Honest In An AI World?
The study, published in the Economic Journal under the title The Corruptive Force of Artificial Intelligence Advice on Honesty, makes one thing clear: technology can influence moral choices just as much as it can improve efficiency.
AI can persuade, flatter and even corrupt, and it can do so faster than any human. The research team's message is simple: honesty should not be left to chance when machines start giving advice.
AI is meant to make decisions easier, but as this study shows, that convenience comes with consequences. The real question is not whether machines can think, but whether humans can stay truthful when they do.
This research was led by Professor Bernd Irlenbusch and Professor Dr Nils Köbis from the University of Duisburg-Essen as well as Professor Rainer Michael Rilke from WHU – Otto Beisheim School of Management.