Should I tell my friend that her boyfriend is cheating on her? Should I intervene when I hear an off-color joke?
When we face moral questions, situations in which the right course of action hinges on our sense of right and wrong, we often seek advice. And now people can turn to ChatGPT and other large language models (LLMs) for guidance.
Many people seem happy with the answers these models provide. In one preprint study, people rated the responses LLMs produced when presented with moral dilemmas as more reliable, trustworthy and even responsible than those of New York Times ethics columnist Kwame Anthony Appiah.
That study joins several others suggesting that LLMs can offer strong moral advice. Another, published last April, found that people rated an AI’s reasoning as better than a human’s, judging it more virtuous, intelligent and reliable. Some researchers have even suggested that LLMs can be trained to provide ethical financial guidance despite being “inherently sociopathic.”
These findings might suggest that virtuous ethical advice is just a prompt away, so why not ask an LLM? But that conclusion rests on several questionable assumptions. First, research shows that people don’t always recognize good advice when they see it. Furthermore, many people believe that the value of advice lies mainly in its content, whether spoken or written. But social connection may be especially important when we grapple with dilemmas, particularly moral ones.
In a 2023 paper, researchers reviewed a large body of work on what makes advice persuasive. The more expert people perceived an adviser to be, the more likely they were to take that person’s advice. But perceived expertise doesn’t necessarily match actual expertise. Nor are experts necessarily good advisers, even in their own area of expertise. In a series of experiments, people learning to play a word search game who received advice from the game’s top scorers did no better than those coached by low-scoring players. People who are good at a task don’t always know how they do what they do, and they can’t always teach someone else to do it.
People also tend to assume that neutral, factual information will be more useful than the subjective details of, say, a firsthand account. But that isn’t necessarily the case. Consider a study in which undergraduates came to the lab for speed-dating sessions. Before each date, they saw either a profile of the person they were about to meet or a testimonial describing another student’s experience of a date with that person. Even though participants expected the factual information about their date to be the better predictor of how the session would go, those who read someone else’s testimonial made more accurate predictions about their own experiences.
Of course, ChatGPT cannot draw on lived experience to provide advice. But even if we could ensure that we receive (and recognize) quality advice, there are other social benefits that LLMs cannot replicate. When we seek moral advice, we’re probably sharing something personal, and we often want intimacy rather than instruction. Engaging in self-disclosure is a reliable way to quickly feel close to someone. Over the course of a conversation, adviser and advisee can also seek out and establish a shared reality, that is, a sense that they hold internal states such as feelings, beliefs and concerns about the world in common, which further fosters closeness. Although people can feel some closeness and shared reality with an LLM, models are not good long-term substitutes for interpersonal relationships, at least for now.
Granted, some people might want to sidestep social interaction. They may worry that the conversation will be awkward or that friends will feel burdened by hearing their problems. But research consistently finds that people underestimate how much they enjoy conversations with friends, whether short and spontaneous or deep and heartfelt.
With moral advice, we should be especially careful, because moral claims have an added quirk: they tend to feel like objective facts rather than opinions or preferences. Whether salt and vinegar is the best potato chip flavor is a matter of your (or my) taste. But ideas such as “stealing is bad” and “honesty is good” feel definitive. As a result, advice that comes wrapped in moral justification can be particularly persuasive. It is therefore wise to carefully weigh the case that any adviser, AI or human, makes for its moral advice.
Sometimes the best way to navigate debates that involve claims to the moral high ground is to reframe them. When people hold strong moral beliefs and see an issue in black-and-white terms, they may resist compromise and other practical forms of problem-solving. My past work suggests that when people moralize a behavior, they oppose policies that would reduce the harms associated with risky sexual behavior, cigarette use or gun ownership, because those policies still permit the behavior in question. In contrast, people do not object to harm-reduction measures, such as wearing seat belts or helmets, for behaviors that seem to fall outside the realm of morality. Shifting perspective from a moral lens to a practical one is hard enough for a person to do, and it is probably too much to ask of an LLM, at least in its current iterations.
And that brings us to one more concern about LLMs: ChatGPT and other language models are highly sensitive to how questions are phrased. As a study published in 2023 demonstrated, an LLM can give inconsistent and sometimes contradictory moral advice from one prompt to the next. The ease with which a model’s answers can shift should give us pause. Tellingly, that study also found that people did not believe the model’s advice had changed their judgment, yet participants who received and read LLM-generated advice were more likely to act in line with that guidance than a similar group who did not read the LLM’s messages. In short, the LLM’s input influenced people more than they expected.
So when it comes to LLMs, proceed with caution. People are not the best judges of good advisers and good advice, especially in the moral domain, and we often need real social connection, validation and even pushback more than an “expert” answer. By all means, ask an LLM, but don’t stop there. Ask a friend, too.
Are you a scientist who specializes in neuroscience, cognitive science or psychology? And have you read a recent peer-reviewed paper that you would like to write about for Mind Matters? Please send suggestions to Scientific American’s Mind Matters editor Daisy Yuhas at dyuhas@sciam.com.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.