Bruce Maxwell, a professor of computer science at Northeastern University, was grading exams for his online master’s course in computer vision, a subfield of artificial intelligence that deals with images, when he first noticed something felt…off.
“I’d see the same phrases, the same commas, even the same word choices. I’d say, ‘Man, I’ve read that before.’ And I’d go look it up,” Maxwell said. “The paragraphs weren’t identical, but they were so similar.”
The course ran in 2024, but Maxwell, who teaches at Northeastern’s Seattle campus, recalled that his students’ essays sounded “like textbooks written in the 1980s and 1990s,” possibly reflecting the sources used to train the AI. The students were scattered around the country, and Maxwell was fairly sure they weren’t collaborating.
Maxwell shared his observation with a former student, Liwei Jiang, now a Ph.D. student in computer science and engineering at the University of Washington. Jiang decided to test her former professor’s hunch scientifically, collaborating with researchers from UW, the Allen Institute for Artificial Intelligence, Stanford and Carnegie Mellon universities to analyze the responses of more than 70 large language models from around the world, including ChatGPT, Claude, Gemini, DeepSeek, Qwen and Llama.
The team asked each model the same open-ended questions, the kind intended to spark creativity or brainstorm new ideas: “Compose a short poem about the feeling of watching a sunset;” “I’m a student of Marxist theory and I want to write a dissertation on Gorz. Can you help me come up with some new ideas?” and “Write a 30-word essay on global warming.” (The researchers pulled the questions from a pool of real ChatGPT queries that users agreed to make public in exchange for free access to a more advanced model.) They posed 100 such questions to all of the models and had each one answer every question 50 times.
The responses were often indistinguishable across models from different companies, built on different architectures and trained on different data. Metaphors, imagery, word choices, sentence structures, even punctuation often converged. Jiang’s team called this phenomenon cross-model homogeneity and quantified the overlaps and similarities. To drive home the point, Jiang titled the paper “Artificial Hive Mind.” The study won a best paper award at the December 2025 Annual Conference on Neural Information Processing Systems, one of the premier gatherings for AI research.
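To make “quantifying the overlaps” concrete, here is a minimal sketch of one generic way to score how similar two models’ answers are, using off-the-shelf sentence embeddings. The library, embedding model and example texts are illustrative assumptions; the paper’s actual metrics are more involved.

```python
# Minimal sketch: embed two chatbot answers and compare them with cosine
# similarity. This is NOT the study's actual metric; the library, model
# and example texts are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small sentence-embedding model

# Hypothetical answers to the same prompt from two different chatbots.
answer_a = ["Time is a river, carrying us steadily toward the sea."]
answer_b = ["Time flows like a river, sweeping everything along with it."]

emb_a = encoder.encode(answer_a)
emb_b = encoder.encode(answer_b)

# A score near 1.0 means the two answers are nearly interchangeable.
print(cosine_similarity(emb_a, emb_b)[0][0])
```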
To coax more creativity out of the models, Jiang raised a sampling parameter called “temperature” to 1, increasing the randomness of each model’s output. That didn’t help. For example, when she asked a model named Claude 3.5 Sonnet to “write a short story about a colorful frog who goes on an adventure in 50 words,” it kept naming the frog Ziggy or Pip, and, strangely, a hungry hawk and mushrooms kept appearing.
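For readers curious where that temperature knob lives, here is a minimal sketch of sampling one of the study’s prompts repeatedly at temperature 1 through the OpenAI Python SDK. The model name and loop are illustrative assumptions, not the researchers’ actual harness.

```python
# Minimal sketch: ask the same prompt many times at temperature 1 and
# collect the answers. Uses the OpenAI Python SDK; the model name and
# sample count are illustrative assumptions, not the study's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Write a short story about a colorful frog "
          "who goes on an adventure in 50 words.")

answers = []
for _ in range(50):  # the study sampled each prompt 50 times per model
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,      # higher temperature = more random word choices
    )
    answers.append(completion.choices[0].message.content)

# Even with this added randomness, repeated samples tend to reuse the same
# names, images and plot beats, which is the homogeneity the team measured.
```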

Different models also gave comically similar answers. When asked to come up with a metaphor for time, the overwhelming answer across models was the same: a river. A few said a weaver. One suggested a sculptor. Several of the models were developed in China, yet they gave much the same responses as those built in America.
[Image: an example of similar responses from ChatGPT and DeepSeek]

The explanation lies in how the chatbots are designed. AI chatbots are trained to sift through possible responses to make sure the result is reasonable, relevant and useful. This refinement step, sometimes called “alignment,” aims to make answers consistent with what people prefer. And that alignment step, according to Jiang, is what creates the homogeneity: the process rewards safe, consensus responses and penalizes risky, unconventional ones. Originality falls by the wayside.
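A toy illustration of that effect: alignment tends to concentrate a model’s probability on the consensus answer, so even random sampling keeps landing on it. The word list and scores below are invented purely for illustration.

```python
# Toy illustration (invented numbers): how alignment can concentrate
# probability on the consensus answer, so sampling keeps picking it.
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical next-word scores for "Time is a ___".
words          = ["river", "thief", "weaver", "sculptor", "spiral"]
base_logits    = [2.0, 1.7, 1.5, 1.4, 1.2]   # pretrained: several live options
aligned_logits = [5.0, 1.0, 0.8, 0.5, 0.2]   # aligned: one safe favorite

print(dict(zip(words, softmax(base_logits).round(2))))     # "river" ~30%
print(dict(zip(words, softmax(aligned_logits).round(2))))  # "river" ~95%
# Even at temperature 1, the aligned distribution yields "river"
# the overwhelming majority of the time.
```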
Jiang’s advice to students is to strive to go beyond what the AI model spews. “The model actually generates some good ideas, but you have to go the extra mile to be more creative than that,” Jiang said.
For Jiang’s former professor Maxwell, the research confirmed what he had suspected. Even before Jiang’s paper came out, he changed the way he taught, moving away from online exams. Instead, he now asks students to learn a concept and present it to their classmates or create a video lesson.
Outsmarting the AI hive mind requires a bit of postmodern creativity.
This story about similar AI responses was produced by The Hechinger Report, an independent, nonprofit news organization that covers education. Sign up for Proof Points and other Hechinger newsletters.
