Do People Trust AI More Than They Trust Humans?

Would people take a worse deal from a machine than from a person? A group of researchers at UCD Michael Smurfit Graduate Business School put this to the test, and the answer is yes, more often than expected. Their work looks at how people behave when they think they are dealing with AI instead of another human.

Dr Suhas Vijayakumar, Dr Yuna Yang, and Dr David DeFranza found that people tend to see AI as guided by reason, while humans are seen as guided by emotion.

That idea changes how people react in real situations, especially when money is involved and a decision has to be made on the spot.

Would $0.10 Feel Different Depending On Who Is Offering It?

Participants took part in an economic game with real payouts. Each person received an offer: take $0.10 or reject it and walk away with $0.

In other words: taking the money means a gain and turning it down means nothing.

When the offer came from a supposed AI, 49.2% accepted it. When the exact same offer came from a human, only 37.5% said yes.

That gap is what makes the study interesting: nothing changed in the deal itself; only the identity of the proposer changed.

Many people rejected the human offer, even though it left them empty-handed.

Are People Judging Fairness Differently?

A low offer from a person can feel personal. It can read as unfair, even insulting. That reaction often leads people to turn it down.

The response softens when the same offer comes from AI. It feels less like an insult and more like a calculation.


"We speculate perhaps a reason why people are less likely to accept a similar unfair offer from a person (human), could also be because of expectations of reciprocity and emotional fairness that we share with other human beings. Future research needs to look at further expectations and beliefs about AI", says Dr Vijayakumar.

The research shows that people adjust their own thinking in response. If the other side seems logical, they act in a more calculated way too.

Could This Affect Everyday Decisions?

The study considers everyday contexts as well. AI already appears in customer service chats, pricing systems, and public sector tools.

The research notes that 60% of governments using AI want it to play a role in real-time decision-making.

That is important to note, because people may treat advice from AI differently. A suggestion from a machine could feel more objective, even when it is not.

The findings also show that emotion does not disappear: plenty of participants rejected the unfair deal in both cases, which shows that people do not turn into calculators overnight.

What's particularly interesting here is how quickly behaviour changes. A label saying "AI" is enough to nudge people toward a more calculated choice, even when the deal stays exactly the same.

The researchers say people should be aware of how their expectations influence their choices. Thinking that AI is purely rational can change how decisions are made, even in small everyday situations.