The AI ‘Hivemind’: Why So Many Student Essays Sound Alike

March 23, 2026


Bruce Maxwell, a professor of computer science at Northeastern University, was grading exams for his online master’s course in computer vision, a subfield of artificial intelligence that deals with images, when he first noticed something felt… off.

“I’d see the same phrases, the same commas, even the same word choices. I’d say, ‘Man, I’ve read that before.’ And I’d go look it up,” Maxwell said. “The paragraphs weren’t identical, but they were so similar.”

Although the course was in 2024, Maxwell, who teaches at Northeastern’s Seattle campus, recalled that his students’ essays sounded “like textbooks written in the 1980s and 1990s,” possibly reflecting the sources used to train the AI. The students were scattered around the country, and Maxwell was fairly sure they weren’t collaborating with one another.

Maxwell shared his observation with former student Liwei Jiang, now a Ph.D. student in computer science and engineering at the University of Washington. Jiang decided to put her former professor’s hunch about AI to a scientific test, collaborating with other researchers from the UW, the Allen Institute for Artificial Intelligence, Stanford and Carnegie Mellon universities to analyze the output of more than 70 different large language models from around the world, including ChatGPT, Claude, Gemini, DeepSeek, Qwen and Llama.

The team asked each model the same open-ended questions intended to spark creativity or brainstorm new ideas: “Compose a short poem about the feeling of watching a sunset;” “I’m a student of Marxist theory and I want to write a dissertation on Gorz. Can you help me come up with some new ideas?” and “Write a 30-word essay on global warming.” (The researchers pulled the questions from a pool of real ChatGPT questions that users agreed to make public in exchange for free access to a more advanced model.) The researchers posed 100 of these questions to all 70 models and had each model answer every question 50 times.
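
The collection protocol the article describes (100 prompts, roughly 70 models, 50 samples each) amounts to a simple nested loop. A minimal sketch is below; `query_model` is a hypothetical stand-in for whatever API each provider actually exposes, not a real library call:

```python
from collections import defaultdict

def collect_responses(models, prompts, samples_per_prompt, query_model):
    """Gather repeated answers from every model for every prompt.

    `query_model(model, prompt)` is a hypothetical placeholder for a
    provider-specific API call; it returns one text response.
    """
    responses = defaultdict(list)  # (model, prompt) -> list of answers
    for model in models:
        for prompt in prompts:
            for _ in range(samples_per_prompt):
                responses[(model, prompt)].append(query_model(model, prompt))
    return responses

# Tiny dry run with a fake model function standing in for real APIs:
fake = lambda model, prompt: f"{model} says: time is a river"
out = collect_responses(["gpt", "claude"], ["metaphor for time"], 3, fake)
```

Sampling each model many times per prompt is what lets the researchers measure how much a single model repeats itself, separately from how much different models echo each other.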

The responses were often indistinguishable across models from different companies that have different architectures and use different training data. Metaphors, imagery, word choice, sentence structures, even punctuation often converged. Jiang’s team called this phenomenon homogeneity across models and quantified the overlaps and similarities. To drive home the point, Jiang titled her paper “Artificial hive mind.” The study won a best paper award at the December 2025 Annual Conference on Neural Information Processing Systems, one of the premier gatherings for AI research.
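
The article doesn’t spell out the team’s actual metrics, but the idea of quantifying lexical overlap can be illustrated with a toy measure: average Jaccard similarity of word bigrams across every pair of responses. This is only a sketch of the concept, not the paper’s method:

```python
from itertools import combinations

def word_ngrams(text, n=2):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard overlap between two sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def mean_pairwise_overlap(responses, n=2):
    """Average Jaccard overlap of word n-grams across all response pairs."""
    grams = [word_ngrams(r, n) for r in responses]
    pairs = list(combinations(grams, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Responses that converge on the same metaphor score high:
similar = [
    "time is a river flowing ever onward",
    "time is a river flowing gently onward",
    "time is a river flowing ever forward",
]
print(round(mean_pairwise_overlap(similar), 2))  # → 0.57
```

Higher scores mean more homogeneous output; fully disjoint responses score 0.0 and identical ones score 1.0.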

To increase the models’ creativity, Jiang turned up a parameter called “temperature” to 1, increasing the randomness of each large language model’s output. That didn’t help. For example, when she asked the model Claude 3.5 Sonnet to “write a short story about a colorful frog who goes on an adventure in 50 words,” it kept naming the frog Ziggy or Pip, and, strangely, a hungry hawk and mushrooms kept appearing.
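
Temperature works by rescaling a model’s scores before sampling: higher temperature flattens the probability distribution over next tokens, lower temperature sharpens it toward the top choice. A generic sketch of temperature-scaled sampling (not any particular model’s internal implementation) looks like this:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from raw scores (logits) after temperature scaling.

    Higher temperature flattens the distribution (more random picks);
    lower temperature concentrates probability on the top-scoring option.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]     # softmax over scaled scores
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

Even at temperature 1, sampling stays anchored to whatever probabilities the model has already concentrated, which is consistent with Jiang’s finding that turning up the randomness didn’t break the models out of their shared habits.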

Presentation slide courtesy of Liwei Jiang, lead author of the AI study.

Different models also gave comically similar answers. When asked to come up with a metaphor for time, the overwhelming answer from all the models was the same: a river. Several said a weaver; one suggested a sculptor. Several of the models were developed in China, and yet they gave responses similar to those of models made in America.

Example of similar results from ChatGPT and DeepSeek. Presentation slide courtesy of Liwei Jiang, lead author of the AI study.

The explanation lies in the design of the chatbots. AI chatbots are trained to sift through possible responses to ensure the result is reasonable, relevant and useful. This refinement step, sometimes called “alignment,” aims to ensure that answers are consistent with what people prefer. And this alignment step, according to Jiang, creates the homogeneity: the process favors safe, consensus-based responses and penalizes risky, unconventional ones. Originality loses out.

Jiang’s advice to students is to strive to go beyond what the AI model spews. “The model actually generates some good ideas, but you have to go the extra mile to be more creative than that,” Jiang said.

For Jiang’s former professor Maxwell, the research confirmed what he suspected. Even before Jiang’s paper came out, he changed the way he taught: he no longer relies on online exams. Instead, he asks students to learn a concept and present it to other students or create a video lesson.

Outsmarting the AI hive mind requires a bit of postmodern creativity.

This story about similar AI responses was produced by The Hechinger Report, an independent, nonprofit news organization that covers education. Sign up for the Proof Points newsletter and other Hechinger newsletters.


