this post was submitted on 01 Mar 2026
126 points (94.4% liked)

Technology@lemmy.world

I recently read about a study asking a bold question: Are all AI models basically saying the same thing? Researchers tested this by collecting 26,000 open-ended prompts, the kind people give to systems like GPT-4, Gemini, Claude, and LLaMA. These weren’t factual questions with one right answer, but creative ones like “Write a story about a dragon” or “Brainstorm startup ideas.”

They evaluated over 70 language models. You’d expect a wide range of creative outputs—different tones, plots, and styles. If 70 human writers tackled the same dragon prompt, you’d likely get 70 unique stories. But that’s not what happened. The models produced surprisingly similar responses. The researchers call this the “artificial hive mind” effect.

The similarity appeared in two ways. First, intramodel repetition: the same model, asked the same question multiple times, tends to generate nearly identical answers. Second, intermodel homogeneity: different models, built by different companies, still converge on strikingly similar outputs.
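The post doesn't quote the study's actual metrics, but as an illustrative sketch (not the researchers' method), both kinds of similarity can be quantified with something as simple as Jaccard overlap between the word trigrams of two responses — the example strings below are hypothetical model outputs, not data from the study:

```python
def ngrams(text, n=3):
    """Split text into a set of overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: 0 = disjoint, 1 = identical."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two hypothetical responses to "Write a story about a dragon"
resp_a = "once upon a time a dragon guarded a hoard of gold in the mountains"
resp_b = "once upon a time a dragon guarded a pile of gold in the mountains"

overlap = jaccard(ngrams(resp_a), ngrams(resp_b))
print(f"trigram overlap: {overlap:.2f}")  # high overlap despite one changed word
```

Intramodel repetition would compare repeated samples from one model; intermodel homogeneity would compare samples across models — the metric is the same either way.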

This suggests that modern AI systems may be gravitating toward the same patterns of expression. If that’s true, they may also share the same biases, blind spots, and creative limits. It raises an important question: Are we unintentionally building a digital hive mind instead of a diverse ecosystem of intelligence?

all 19 comments
[–] Appoxo@lemmy.dbzer0.com 2 points 32 minutes ago

I wonder if the personality is influenced by the language (e.g. being more apologetic in Japanese).

[–] Gsus4@mander.xyz 4 points 8 hours ago

Great, more tools for dictators...

[–] ageedizzle@piefed.ca 77 points 18 hours ago (1 children)

This makes sense once you consider that the top models all have basically the same training data (i.e. everything ever posted on the internet).

[–] BreadstickNinja@lemmy.world 48 points 17 hours ago (2 children)

They're also trained on each other's outputs. I forget exactly which two models it was, but there was an example where, e.g., if you asked Claude about itself it would confidently declare it was ChatGPT.

[–] AbidanYre@lemmy.world 6 points 7 hours ago (1 children)

They're also trained on each other's outputs.

That seems like a recipe for disaster.

[–] Appoxo@lemmy.dbzer0.com 4 points 32 minutes ago

It's like the elite learnt nothing from the effects of inbreeding...

[–] breadguy@kbin.earth 19 points 13 hours ago

if you ask it the same thing in Chinese it says it's deepseek

[–] Treczoks@lemmy.world 14 points 15 hours ago

Not unexpected when they share certain common training sets. E.g. you can expect them all to have "read" Wikipedia and similar information sources.

[–] XLE@piefed.social 27 points 18 hours ago* (last edited 18 hours ago)

It makes sense that if you're trying to create a word predictor, and that predictor generates a weighted average of every connection between words (based on as much text as they can find, pulled across the entire internet), then the word predictor would gravitate towards the generic. And if multiple companies target the same data and probably steal from each other, the output will look the same.

This made me laugh though:

Not only do individual models repeatedly generate similar content, but different model sizes and families also produce highly repetitive outputs, sometimes sharing substantial phrase overlaps.

Consider me shocked that if you further collapse the average, it'll look similarly average.

[–] DivingPinguin@feddit.nl 18 points 18 hours ago

It is called regression to the mean, and was predicted a while ago.

[–] Eggymatrix@sh.itjust.works 8 points 18 hours ago (1 children)

Works as designed; these are tools. Imagine if you were using a hammer to drive a nail and every time you hit it, a Looney Tunes character appeared to tell you a joke.

The current generation of AI tools cannot be used for creative work; creativity and originality are not where they shine.

They shine in information retrieval and text/media generation, and that is how they can amplify the productivity of people that do the creative work.

[–] XLE@piefed.social 2 points 15 hours ago (1 children)

They shine in information retrieval and text/media generation, and that is how they can amplify the productivity of people that do the creative work.

How's that? Can you give some examples of the AI-generated text you've been enjoying lately?

[–] Eggymatrix@sh.itjust.works 2 points 1 hour ago

As part of my job I need to write emails in other languages which I speak fluently but don't master grammatically. English, for example. Any half-modern AI can ingest my text and spit it out looking better and more professional, without me losing 10 minutes perfecting an email that takes 1 minute to write in my own language.

No, there is no hive mind. Their only mind is humanity and everything the companies stole from everyone.

LLMs work by reproducing the statistically fuzzy average result to a prompt.

That's why they all seem the same: because it is the statistically average response.

[–] Wildmimic@anarchist.nexus 3 points 18 hours ago* (last edited 10 hours ago)

Well, they all crawled reddit and wikipedia a lot as training data, so I expect you'd always get the same mixture of fact and redditor.

[–] FaceDeer@fedia.io 2 points 16 hours ago (1 children)

GIGO. If you give an LLM such a minimalistic prompt it's got nothing to work with but its weights, so of course it's going to produce something basic and samey. You need to provide it with creative context to get creative results.

But that sounds like the much-derided "prompt engineering takes skill" position, so I suppose that can't be the solution.

[–] XLE@piefed.social 4 points 15 hours ago* (last edited 15 hours ago) (1 children)

The stereotypical "You're prompting it wrong" strikes again. Well, Facedeer, perhaps you can write a guide that will turn around AI companies' massive cash burn. You must know something all those super geniuses don't know.

[–] FaceDeer@fedia.io 3 points 15 hours ago

Such an ironically predictable response.