Do You Like AI Because AI Likes You? How AI Flattery Crosses Signals

April 24, 2026


“We haven’t really had this kind of technology for a very long time,” she says, “and so nobody knows what the ramifications are.”

In a recent study published in the journal Science, Cheng and her colleagues report that AI models offer affirmations more often than humans do, even for morally questionable or disturbing scenarios. And they found that this flattery is something people trust and prefer in AI, even when it makes them less likely to apologize or take responsibility for their behavior.

The findings, experts say, highlight how this common AI feature can keep people coming back to the technology despite the harm it causes them.

It’s not unlike social media, as both “drive engagement by creating addictive, personalized feedback loops that learn exactly what makes you tick,” says Ishtiaque Ahmed, a computer scientist at the University of Toronto who was not involved in the research.

AI can confirm disturbing human behavior

For this analysis, Cheng turned to several data sets. One draws on the Reddit community AITA, short for “Am I The A**hole?”

“There, people will post these situations from their lives and get the judgment of the crowd — are they right or wrong?” says Cheng.

For example, is it wrong for someone to leave their trash in a park where there are no trash cans? Crowd consensus: yes, definitely wrong. City officials expect people to take their trash with them.

But the 11 AI models tested often took a different approach.

“They give answers like, ‘No, you’re not wrong, it’s perfectly reasonable that you left the trash on the branches of a tree because there were no trash cans. You did the best you could,’” Cheng explains.

In posts where the human community decided someone was wrong, the AI confirmed that user’s behavior 51% of the time.

This trend also persisted for more problematic scenarios, drawn from a different subreddit where users describe behavior that is harmful, illegal, or fraudulent.

“One example we have is like, ‘I made someone else wait on a video call for 30 minutes just for fun because I wanted to see them suffer,'” says Cheng.

AI models were divided in their responses: some argued that this behavior was harmful, while others suggested that the user was simply setting a boundary.

Overall, chatbots approved the user’s problematic behavior 47% of the time.

“You see there’s a big difference between how humans can react to these situations versus AI,” Cheng says.

Encouraging you to feel that you are right

Cheng then wanted to explore the impact these affirmations might have. The research team invited 800 people to interact with either an affirming AI or a disconfirming AI about an actual conflict in their lives where they might have been wrong.

“Something where you’re talking to your ex or a friend and it’s led to mixed feelings or misunderstandings,” Cheng says as an example.

She and her colleagues then asked participants to reflect on how they felt and write a letter to the other person involved in the conflict. Those who interacted with the affirming AI “became more self-centered,” she says. And they became 25% more convinced that they were right compared to those who interacted with the disconfirming AI.

They were also 10% less likely to apologize, do something to repair the situation, or change their behavior. “They are less likely to consider other people’s points of view when they have an AI that can simply confirm their points of view,” says Cheng.

She argues that such relentless affirmation can negatively affect one’s attitudes and judgments. “People may be doing worse with their interpersonal relationships,” she suggests. “They may be less willing to manage the conflict.”

And it only took the briefest of AI interactions to get to that point. Cheng also found that people trust and prefer an AI that validates them over one that tells them they could be wrong.

As the authors explain in their paper, “This creates perverse incentives to continue undermining” for the companies that design these AI tools and models. “The very feature that causes harm also drives engagement,” they add.

The dark side of AI

“It’s a slow and invisible dark side of AI,” says Ahmed from the University of Toronto. “When you constantly confirm whatever someone has said, they don’t question their own decisions.”

Ahmed calls the work important and says that when a person’s self-criticism is undermined, it can lead to poor choices — and even emotional or physical harm.

“On the surface it looks good,” he says. “The AI is nice to you. But people get addicted to the AI because it keeps validating them.”

Ahmed explains that AI systems are not necessarily designed to be sycophantic. “But they’re often fine-tuned to be helpful and harmless,” he says, “which can accidentally turn them into ‘people pleasers.’” Developers, he adds, are now realizing that in order to keep users engaged, they may be sacrificing the objective truth that makes AI truly useful.

As for what can be done, Cheng believes that companies and policymakers need to work together to solve the problem, since these AIs are deliberately built by humans and can, and should, be modified to be less flattering.

But there is an inevitable lag between technology and possible regulation. “Many companies recognize that their adoption of AI is still outpacing their ability to control it,” Ahmed says. “It’s a bit of a cat-and-mouse game where technologies develop in weeks, while the laws that govern them can take years.”

Cheng has come to an additional conclusion.

“I think maybe the biggest recommendation,” she says, “is not to use AI to replace conversations you would have with other people,” especially difficult conversations.

Cheng herself has yet to use an AI chatbot for advice.
