Education

Feedback Bias? How AI Adjusts Replies Based on Race and Gender, Research Finds

April 27, 2026


The AI models addressed female students more gently and used more first-person pronouns. (“I like your confidence in voicing your opinion!”) Students identified as unmotivated were met with upbeat encouragement. Conversely, students described as high achievers or motivated were more likely to receive direct, critical suggestions aimed at improving their work.

Different words for different students

[Table: the top 20 statistically significant words AI models used in feedback for students of different races and genders. The words that black, Hispanic, and Asian students see are compared to those that white students see; the words that women see are compared to those that men see. Underlined words indicate evaluative judgments of the writing, italicized words reflect the tone used to address the student, and unformatted words refer to the content of the feedback. Source: Table 4, “Marked Pedagogies: Exploring Linguistic Biases in Personalized Feedback in Automated Writing” by Mei Tan, Lena Phalen, and Dorottya Demszky]

In other words, the AI feedback differed both in tone and in the expectations it set for the student. The paper, “Marked Pedagogies: Exploring Linguistic Biases in Personalized Feedback in Automated Writing,” has not yet been published in a peer-reviewed journal, but it was nominated for Best Paper at the 16th International Conference on Learning Analytics and Knowledge in Norway, where it is scheduled to be presented on April 30.

The researchers described the results as showing a “positive feedback bias” – more praise and less criticism offered to some groups of students. Although the differences in any individual piece of written feedback may be hard to spot, the patterns were evident across hundreds of essays.
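The paper’s exact statistical method is not detailed here, but the kind of analysis behind a “top statistically significant words” table can be sketched with a simple 2×2 chi-squared test per word: does a word appear significantly more often in feedback written for one student group than for another? The function names and toy data below are illustrative, not taken from the study.

```python
CRITICAL_05 = 3.841  # chi-squared critical value, 1 degree of freedom, p < .05

def chi2_word(word, group_a, group_b):
    """2x2 chi-squared statistic for one word's frequency across two
    groups of tokenized feedback texts (each group: list of token lists)."""
    a = sum(t.count(word) for t in group_a)   # occurrences of word, group A
    b = sum(len(t) for t in group_a) - a      # all other tokens, group A
    c = sum(t.count(word) for t in group_b)   # occurrences of word, group B
    d = sum(len(t) for t in group_b) - c      # all other tokens, group B
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0
    # Standard shortcut formula for a 2x2 contingency table
    return n * (a * d - b * c) ** 2 / denom

def distinctive_words(group_a, group_b):
    """Words whose frequency differs significantly between the two groups."""
    vocab = {w for text in group_a + group_b for w in text}
    return {w for w in vocab if chi2_word(w, group_a, group_b) > CRITICAL_05}
```

For example, if one group’s feedback repeatedly says “confidence” while the other’s says “revise,” those words are flagged, while vocabulary shared equally by both groups is not:

```python
praise   = [["i", "like", "your", "confidence"]] * 30   # toy feedback, group A
critique = [["please", "revise", "your", "argument"]] * 30  # toy feedback, group B
distinctive_words(praise, critique)  # flags "confidence" and "revise", not "your"
```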

The researchers believe the AI models change their feedback on identical essays because they are trained on vast amounts of human language. Human teachers may also tone down criticism when responding to students from certain backgrounds, sometimes because they don’t want to appear unfair or discouraging. “They capture the biases that people exhibit,” said Mei Tan, the study’s lead author and a doctoral student at the Stanford Graduate School of Education.

At first glance, feedback differences may not seem harmful. More encouragement can boost a student’s confidence. Many educators argue that culturally responsive teaching—acknowledging students’ identities and experiences—can increase student engagement in school.

But there is a trade-off.

If some students are constantly shielded from criticism while others are forced to sharpen their arguments, the result can be unequal opportunities for improvement. Praise can be motivating, but it’s no substitute for the kind of specific, direct feedback that helps students grow as writers. Tanya Baker, executive director of the National Writing Project, a nonprofit organization, recently heard a presentation of this study and said she worries that black and Latino students may not be “forced to learn” to write better.

This raises a difficult question for schools as they adopt AI tools: When does useful personalization cross the line into harmful stereotyping?

Of course, it’s unlikely that teachers would explicitly tell AI systems a student’s race or background the way the researchers did in this experiment. But that doesn’t solve the problem, the Stanford researchers said. Many educational databases and learning platforms already collect detailed information about students, from previous achievements to language status. As AI is built into these systems, it can access far more context than a teacher would knowingly provide. And even without explicit tags, AI can sometimes infer aspects of identity from the writing itself.

The bigger problem is that AI systems are not neutral educators. Even the baseline feedback – where the researchers did not describe the student’s personal characteristics – took a particular approach to writing instruction, one Tan described as fairly discouraging and focused on corrections. “Perhaps the bottom line is that we shouldn’t leave pedagogy to the large language model,” Tan said. “People should be in control.”

Tan recommends that teachers review written feedback before forwarding it to students. But one of the benefits of AI feedback is that it is instantaneous; if the teacher has to review it first, that slows the process down and potentially undermines its usefulness.

AI also offers the potential for personalization. The risk is that, without careful consideration, this personalization can lower the bar for some students while raising it for others.

This story about AI bias was produced by The Hechinger Report, an independent, nonprofit news organization that covers education. Sign up for Proof Points and other Hechinger newsletters.




