
Please Don’t Take Moral Advice from ChatGPT

December 18, 2024


Should I tell my friend that her boyfriend is cheating on her? Should I intervene when I hear an off-color joke?

When faced with moral questions (situations where the choice of action hinges on a sense of right and wrong), we often seek advice. And now people can turn to ChatGPT and other large language models (LLMs) for guidance.

Many people seem happy with the answers these models provide. In one preprint study, people rated the responses an LLM produced for moral dilemmas as more reliable, trustworthy and even responsible than those of New York Times Ethicist columnist Kwame Anthony Appiah.




That study joins several others suggesting that LLMs can offer sound moral advice. Another, published last April, found that people rated an AI's moral reasoning as more virtuous, intelligent and trustworthy than a human's. Some researchers have also suggested that LLMs can be trained to provide ethical financial guidance despite being "inherently sociopathic."

Such findings might imply that expert ethical advice is within easy reach, so why not just ask an LLM? But that conclusion rests on several questionable assumptions. First, research shows that people don't always recognize good advice when they see it. Furthermore, many people assume that the content of advice, whether spoken or written, is what determines its value, yet the social connection involved in seeking and giving advice may matter just as much, especially when the dilemma is a moral one.

In a 2023 paper, researchers reviewed a broad body of work on advice-giving, including what makes advice persuasive. The more expert people perceived an adviser to be, the more likely they were to take that adviser's advice. But perceived expertise doesn't necessarily match actual expertise. What's more, experts are not necessarily good advisers, even within their own specialty. In one series of experiments, people who learned a word-search game while receiving advice from the game's top scorers did no better than those coached by low-scoring players. People who excel at a task don't always know how they do what they do, and they can't always teach it to someone else.

People also tend to assume that neutral, factual information will be more informative than the subjective details of, say, a firsthand account. But that isn't necessarily so. Consider a study in which undergraduates came into a lab for speed-dating sessions. Before each date, participants saw either a factual profile of the person they were about to meet or a testimonial describing another student's experience with that person. Even though participants expected the factual profile to be the better predictor of how the date would go, those who read someone else's testimonial made more accurate predictions about their own experiences.

Of course, ChatGPT cannot draw on lived experience to give advice. But even if we could ensure that we receive (and recognize) quality advice from it, there are social benefits of advice-seeking that LLMs cannot replicate. When we seek moral advice, we are probably sharing something personal, and we often want intimacy rather than instruction. Engaging in self-disclosure is a reliable way to quickly feel close to someone. Over the course of the conversation, adviser and advisee can also seek and establish a shared reality (a sense of having common internal states, such as feelings, beliefs and concerns about the world), which further fosters closeness. Although people may feel that they establish closeness and shared reality with an LLM, these models are not good long-term substitutes for interpersonal relationships, at least for now.

Of course, some people might want to sidestep social interaction altogether. They may worry that a conversation will be awkward or that friends will feel burdened by hearing about their problems. Yet research consistently finds that people underestimate how much they enjoy conversations, from short, spontaneous exchanges to deep, heartfelt talks with friends.

We should be especially careful with moral advice, because morality carries an added quirk: it tends to feel like objective fact rather than opinion or preference. Whether salt and vinegar is the best potato chip flavor (your pick or mine) is clearly subjective. But claims such as "stealing is bad" and "honesty is good" feel definitive. As a result, advice wrapped in moral justification can be especially persuasive, so it is wise to carefully evaluate the case that any adviser, AI or human, makes for a piece of moral advice.

Sometimes the best way to navigate a debate over the moral high ground is to reframe it. When people hold strong moral convictions and see an issue in starkly black-and-white terms, they may resist compromise or other practical forms of problem-solving. My past work suggests that when people moralize behaviors such as risky sex, cigarette use or gun ownership, they resist policies that reduce the harms associated with those behaviors because the policies seem to condone the behavior itself. In contrast, people raise no such objections to harm reduction for behaviors that fall outside the realm of morality, such as wearing seat belts or helmets. Shifting perspective from a moral lens to a practical one is hard enough for a person to do, and it is probably too much to ask of an LLM, at least in its current iterations.

And that brings us to one more concern about LLMs: ChatGPT and other language models are highly sensitive to how questions are asked. As a study published in 2023 demonstrated, an LLM will give inconsistent, and sometimes contradictory, moral advice from one prompt to the next. The ease with which a model's answers can be shifted should give us pause. Interestingly, that study also found that people did not believe the model's advice had changed their judgment, yet participants who received and read LLM-generated advice were more likely to follow that guidance than a similar group who did not read the LLM's messages. In short, LLM input affected people more than they expected.

So when it comes to LLMs, proceed with caution. People are not the best judges of good advisers or good advice, especially in the moral domain, and we often need real social connection, validation and even pushback rather than an "expert" answer. By all means, consult an LLM, but don't stop there. Ask a friend, too.

Are you a scientist who specializes in neuroscience, cognitive science or psychology? And have you read a recent peer-reviewed paper that you would like to write about for Mind Matters? Please send suggestions to Scientific American's Mind Matters editor, Daisy Yuhas, at dyuhas@sciam.com.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.


