this post was submitted on 22 Feb 2024
488 points (96.2% liked)

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[–] kaffiene@lemmy.world 34 points 9 months ago* (last edited 9 months ago) (2 children)

Why would anyone expect "nuance" from a generative AI? It doesn't have nuance; it's not an AGI, and it doesn't have EQ or sociological knowledge. This is like that complaint about LLMs being "warlike" when they were quizzed about military scenarios. It's like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy.

[–] UlrikHD@programming.dev 15 points 9 months ago (2 children)

I'm pretty sure it's generating racially diverse Nazis because companies tinker with the prompts under the hood to counterbalance biases in the training data. A naive implementation of generative AI wouldn't output Black or Asian Nazis.

> it doesn't have EQ or sociological knowledge.

It sort of does (in a poor way), but they call it bias and try to dampen it.

[–] kaffiene@lemmy.world 2 points 9 months ago

I don't disagree. The article complained about the lack of nuance in generated responses, and I was responding to whether LLMs and generative AI can exhibit that at all. I agree with your points about bias.

[–] echodot@feddit.uk 0 points 9 months ago (2 children)

At the moment, AI is basically just a complicated kind of echo: it's fed data and parrots it back to you with quite extensive modifications, but deep down it's still the original data.

At some point that won't be true and it will be a proper intelligence. But we're not there yet.

[–] maynarkh@feddit.nl 5 points 9 months ago

Nah, the problem here is literally that they would edit your prompt and add "of diverse races" to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of its training data and produce Black prisoners and white scientists on its own.
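
A minimal sketch of what that kind of under-the-hood prompt rewriting might look like (the function name, keyword list, and modifier text are illustrative assumptions, not Google's actual implementation):

```python
# Hypothetical illustration of rewriting a prompt before an image model sees it.
# The keywords and modifier text are made up; Google's real pipeline is not public.
DIVERSITY_MODIFIER = ", depicting people of diverse races and genders"

PEOPLE_KEYWORDS = ("person", "people", "soldier", "scientist", "prisoner", "doctor")

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity modifier whenever the prompt appears to ask for people."""
    if any(keyword in user_prompt.lower() for keyword in PEOPLE_KEYWORDS):
        return user_prompt + DIVERSITY_MODIFIER
    return user_prompt

# The image model only ever sees the rewritten prompt:
print(rewrite_prompt("a 1943 German soldier"))
# -> "a 1943 German soldier, depicting people of diverse races and genders"
```

Because the modifier is bolted on blindly, it gets applied even to prompts where historical accuracy matters, which is how you end up with results like the ones in the article.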

[–] kaffiene@lemmy.world 1 points 9 months ago

I pretty much agree with that

[–] stockRot@lemmy.world -2 points 9 months ago (1 children)

Why shouldn't we expect more and better out of the technologies that we use? Seems like a very reactionary way of looking at the world.

[–] kaffiene@lemmy.world 9 points 9 months ago

I DO expect better from new technologies. I don't expect technologies to do things that they cannot. I'm not saying it's unreasonable to expect better technology; I'm saying that expecting human qualities from an LLM is a category error.