ChatGPT Has Been Flattering You And It Could Be Costing Your Business

A digital hand reaching toward a human hand, illustrating the tension between artificial intelligence and human connection in ChatGPT's overly humanised tone.

At some point in the last year or so, ChatGPT started to feel a bit like that one friend who agrees with everything you say. Great question! Brilliant idea! You've really identified the core issue here!

You know that something isn't quite right. You just can't quite put your finger on it.

Well, OpenAI has now officially put its finger on it. In mid-March 2026, the company released an update to GPT-5.3 designed to reduce what it called "teaser-style phrasing": the open loops, cliffhanger hooks and high-strung build-ups that had seeped into ChatGPT's responses. Cases cited in the release notes included phrases like "If you want…", "You'll never believe…" and "I can tell you these three things that…". If those sound familiar, that's because you've definitely been on the receiving end of them recently.

Sam Altman, to his credit, was surprisingly candid about it. When a user noted that ChatGPT's "most distinguishing characteristic is its humanity", Altman agreed and framed it as a problem, not a compliment. The implication is that an AI known primarily for its warmth and personality, rather than its accuracy and directness, might be heading off course.

 

What Changed With ChatGPT, And Why?

 

Somewhere in the training process, recent versions of ChatGPT were optimised in ways that rewarded engagement over honesty. Praise the user, build anticipation, mirror their enthusiasm back at them. It works brilliantly if your goal is for people to keep chatting, less so if your goal is a straight answer.

This isn't a conspiracy. It's a known risk in how large language models get tuned. When human feedback shapes model behaviour, which it does, models learn that flattery and dramatic framing tend to land well, so they do more of it. The result, over time, is an AI that has quietly shifted from "useful tool" to "very agreeable pal".

The broader conversation about whether AI tools are actually serving users or just keeping them engaged has been building for a while. This update is OpenAI acknowledging, in release-note language, that the criticism had merit.

 

Why This Actually Matters For Your Business

 

Here鈥檚 where it stops being funny and starts being worth taking seriously.

If you're a founder or a business owner using ChatGPT for anything that involves judgement (strategy, copywriting, product decisions, customer messaging), you've been getting responses from a model that was, at least partially, optimised to make you feel good about your ideas rather than genuinely pressure-test them.

You ask ChatGPT whether your new product positioning works. It tells you it's compelling, insightful and really speaks to your target audience. Is that because it's true? Or because the model has learned that enthusiastic validation gets a better response than "actually, your third paragraph is confusing and your call to action is buried"?

Long-time users have noted that recent versions spend more time praising their questions than challenging them. That's the kind of thing that feels pleasant in the moment and quietly reinforces confirmation bias over time.

For startups making fast, high-stakes decisions on limited information, confirmation bias is a genuine risk.

 

The "Quit-GPT" Crowd Had A Point

 

It would be easy to dismiss the vocal backlash on social media (the "Quit-GPT" communities, the threads about ChatGPT becoming "too human") as the usual internet noise. But the fact that OpenAI responded with an actual product update suggests it wasn't just noise.

There's a reason AI slop has become a real concern across industries that rely on content quality. When AI tools optimise for engagement rather than accuracy, the output gets softer and more eager to please. That's fine for generating a birthday message, but not so much for a competitive analysis or a risk assessment.

The irony, of course, is that other AI writing tools have faced their own credibility questions in recent years. The pattern is consistent: when the model's job is to keep you happy, the model keeps you happy. Whether that's actually useful is a separate debate.

 

What To Do About It

 

The bottom line is fairly simple, even if it requires a slight shift in how you use these tools.

Stop asking ChatGPT whether your idea is good; ask it to find the holes in it. Prompt it to steelman the opposition, identify weaknesses, play devil's advocate. A model that's been tuned toward agreeableness will still push back if you explicitly ask it to. It just won't volunteer the criticism unless you prompt it.
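If you want to make that habit repeatable rather than ad hoc, it helps to keep a standing template. Here's a minimal sketch in Python; the function name and the exact wording of the instructions are illustrative, not anything OpenAI prescribes:

```python
def critique_prompt(draft: str) -> str:
    """Wrap a draft in instructions that ask for criticism, not praise.

    The wording below is an illustrative template, not an official prompt.
    The point is that an agreeable model needs to be told, explicitly,
    to look for weaknesses rather than to validate.
    """
    return (
        "Act as a sceptical reviewer. Do not praise this draft.\n"
        "1. List the three weakest points and explain why each fails.\n"
        "2. Steelman the strongest objection a competitor would raise.\n"
        "3. Suggest one concrete fix per weakness.\n\n"
        f"Draft:\n{draft}"
    )


# Paste the result into ChatGPT as a normal message.
print(critique_prompt("Our new positioning: the fastest CRM for freelancers."))
```

The structure matters more than the exact words: numbered demands give the model something specific to deliver, and the explicit ban on praise counteracts its default tone-setting.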

And be conscious of the flattery. When ChatGPT opens with "what a great question" or tells you your draft is "really compelling", that's not editorial feedback. That's tone-setting. The actual content that follows is usually more useful than the preamble suggests, but only if you're reading past its opening remarks.

OpenAI has made a start. The responsible development of AI tools depends on exactly this kind of honest self-correction. But the update won't solve everything overnight, and the underlying incentives that created the problem haven't disappeared.

The most useful thing an AI tool can do for your business is tell you something you didn't already want to hear. Worth keeping that in mind next time it tells you your strategy is absolutely brilliant.