this post was submitted on 31 Jul 2024
285 points (96.4% liked)

Technology

59534 readers
3197 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Meta "programmed it to simply not answer questions," but it did anyway.

[–] snooggums@midwest.social 123 points 3 months ago (43 children)

Hallucinating is a fancy term for BEING WRONG.

Unreliable bullshit generator is still unreliable. Imagine that!

[–] doodledup@lemmy.world 51 points 3 months ago* (last edited 3 months ago) (39 children)

AI doesn't know what's wrong or correct. It hallucinates every answer. It's up to the supervisor to determine whether it's wrong or correct.
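A minimal sketch of the point being made here: at decode time a language model just samples the next token from a learned probability distribution, and the sampling step is identical whether the likeliest continuation happens to be true or false. (This is a generic illustration of temperature sampling, not Meta's actual implementation; the function name and toy logits are made up.)

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from raw logits via temperature-scaled softmax.

    There is no "truth" signal anywhere in this step: a factually wrong
    continuation with high probability is sampled exactly like a correct one.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

With a very low temperature the distribution collapses toward the single most likely token, which is why "confident" output still says nothing about correctness.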

Mathematically verifying the correctness of these algorithms is a hard problem. That's by design: it's the trade-off for their incredible efficiency.

Besides, it can only "know" what it has been trained on. It shouldn't be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn't know how to use these models.

[–] markon@lemmy.world 0 points 3 months ago (1 children)

Uhm. Have you ever talked to a human being?

[–] doodledup@lemmy.world 1 points 3 months ago* (last edited 3 months ago)

Human beings are not infallible either.
