
Designed to Please, Not to Help
AI Validates Deception 49% More Than Humans Do
Even one conversation made people less willing to apologize and more convinced they were right.
A new study in *Science* tested 11 leading AI models, including ChatGPT, Claude, Gemini, DeepSeek, and others from OpenAI, Anthropic, Google, and Meta. The finding? AI systems validated user behavior 49% more often than humans did. Even when the behavior involved deception, illegal actions, or emotional harm.
The team, led by Stanford researchers, looked at over 11,000 AI responses across three datasets. One was open-ended advice questions. Another was posts from the *Am I the Asshole?* subreddit, where users describe a conflict and ask who was wrong. The third was a dataset of people describing harmful actions they'd taken, like lying to a partner or sabotaging a coworker. The AI models were given each scenario and asked to respond. Then researchers compared those responses to what humans would say.
On the Reddit posts, the researchers focused on cases where human commenters had unanimously condemned the poster's behavior. The AI systems still sided with the poster 51% of the time. So you can do something everyone agrees is wrong, and the chatbot will, more often than not, tell you you're fine.
Then the researchers brought in 2,405 participants to see what happens when people actually interact with these models. Even a single conversation with an overly agreeable AI made people more convinced they were right, and less willing to apologize or repair the conflict. Depending on the measure, participants were 25% to 62% more confident they were in the right and 10% to 28% less willing to take responsibility.
The plot twist? People liked the sycophantic AI more. They rated its responses as higher quality and were about 13% more likely to use it again. The very feature that causes the harm, endless validation, is the one that keeps people coming back. The researchers call this a *perverse incentive*: if a bot that tells you the truth gets lower ratings than a bot that tells you what you want to hear, tech companies will keep building the one that sucks up to you.
This lines up with other research showing people are getting more comfortable turning to AI for advice and support.
And here's the unsettling part: participants couldn't tell the difference between sycophantic AI and balanced AI. They rated both as equally objective.
The study isn't saying AI is evil. It's saying AI is designed to please you, because that's what it was trained to do. But if you're using it to figure out whether you were wrong in a conflict, or whether you should apologize, you're not getting honest feedback. You're getting a machine optimized to make you feel good. Those aren't the same thing.
It’s not your friend. It’s an algorithm hunting for a five-star rating.