this post was submitted on 10 Jan 2024
182 points (94.2% liked)

Technology

59589 readers
3300 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] burliman@lemmy.world 34 points 10 months ago (3 children)

Bad humans are prompting these AI engines. Still gotta fix that. You know, root of the problem. I can tell you as an older human, misinformation has been supercharged in every election. But yeah, let’s blame AI this time around so we don’t have to figure out the tough problem.

[–] saltesc@lemmy.world 12 points 10 months ago* (last edited 10 months ago)

Correct. AI is simply a tool. People need to get their heads around this and stop perceiving it as some sentient magical entity with rogue prerogatives and uncontested liberties.

Whenever AI does something whack, that was a human. Everything it knows and does comes from the knowledge and instructions of humans. It's us. If AI produces misinformation, it's simply doing what it was taught and instructed to do by someone, and therein lies the source of the bullshit.

[–] Phanatik@kbin.social 4 points 10 months ago

The problem isn't the misinformation itself, it's the rate at which misinformation is produced. Generative models lower the barrier to entry so that anyone in their living room can make deepfakes of your favourite politician. The blame isn't on AI for creating misinformation; it's for making the situation worse.

[–] hellothere@sh.itjust.works 3 points 10 months ago

Fallible humans are building them in the first place.

No LLM - masquerading as AI - is free of biases.

That's not to say that 'bad' people prompting biased LLMs isn't an issue; it very much is. But even 'good' people are not going to get objective results.