this post was submitted on 21 Feb 2024
289 points (95.0% liked)

Technology
ChatGPT has meltdown and starts sending alarming messages to users: AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

[–] snooggums@midwest.social 74 points 9 months ago (25 children)

To be honest this is the kind of outcome I expected.

Garbage in, garbage out. Making the system more complex doesn't solve that problem.

[–] thehatfox@lemmy.world 49 points 9 months ago (14 children)

The development of LLMs is possibly becoming self-defeating, because the training data is being filled not just with human garbage, but also with AI garbage from previous, cruder LLMs.

We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.
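The degradation being described here is often called "model collapse," and a toy version of it is easy to simulate. The sketch below (my own illustration, not something from the thread) fits each "generation" to samples drawn from the previous generation's output instead of from the real distribution, and tracks how the estimated spread drifts:

```python
import random
import statistics

random.seed(42)

def simulate_collapse(generations=50, sample_size=20):
    """Each generation is fitted to the previous generation's output,
    never to the real data. Returns the estimated spread per generation."""
    mu, sigma = 0.0, 1.0  # the "real" data distribution: N(0, 1)
    history = []
    for _ in range(generations):
        # Train on the previous model's output, not on real data.
        samples = [random.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

history = simulate_collapse()
print(f"sigma after 1 generation:   {history[0]:.3f}")
print(f"sigma after 50 generations: {history[-1]:.3f}")
```

On a typical run the estimated spread random-walks away from the true value of 1.0 and, over enough generations, tends toward zero: the information lost at each resampling step never comes back, which is the Kessler-syndrome flavour of the problem.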

[–] CarbonIceDragon@pawb.social 19 points 9 months ago (7 children)

I mean, surely the solution to that would be to use curated/vetted training data? Or at the very least, data from before LLMs became commonplace?

[–] KevonLooney@lemm.ee 19 points 9 months ago (1 children)

The funny thing is, children are similar: they just learn whatever you put in front of them. That's why we have whole systems for educating children over decades of their lives.

With AI we literally just plopped them in front of the Internet, with no guidelines on what to learn. AI researchers say "it's a black box! We don't know why it's doing this!" You fed it everything you could and gave it few rules on what to do. You are the reason why it's nuts.

Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. "Learn everything" isn't working.

[–] thehatfox@lemmy.world 8 points 9 months ago (1 children)

> Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. "Learn everything" isn't working.

That's a good point. For real brains, size and intelligence are not linked. An elephant brain has three times as many neurons as a human brain, but a human brain is more intelligent. There is more to intelligence than the sheer number of neurons, real or virtual, so making larger and larger AI models may not be the right direction.

[–] KevonLooney@lemm.ee 5 points 9 months ago (1 children)

True. Maybe they just need more error correction. Like spend more energy questioning whether what you say is true. Right now LLMs seem to just vomit out whatever they thought up, with no consideration of whether it makes sense.

They're like an annoying friend who just can't shut up.
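One cheap form of the error correction suggested above is to sample several answers and keep the one the model repeats most often, on the theory that nonsense is less repeatable than a correct answer. A minimal sketch, where the `generate()` function is a hypothetical stand-in for a real model call:

```python
import random
from collections import Counter

def generate(prompt, rng):
    """Hypothetical stand-in for an LLM call: returns the right answer
    most of the time, and random junk otherwise."""
    if rng.random() < 0.6:
        return "4"                       # the "correct" answer
    return rng.choice(["5", "3", "22"])  # occasional nonsense

def self_consistent_answer(prompt, n_samples=15, seed=0):
    """Sample the model several times and keep the majority answer,
    along with the fraction of samples that agreed with it."""
    rng = random.Random(seed)
    answers = [generate(prompt, rng) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples

answer, agreement = self_consistent_answer("What is 2 + 2?")
print(answer, agreement)
```

This only filters out inconsistency, not confidently repeated mistakes, so it is a partial fix at best, but it is closer to "questioning yourself" than emitting the first sample.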

[–] nilloc@discuss.tchncs.de 2 points 9 months ago

They aren’t thinking though. They’re making connections within the training data they’ve processed.

This is really clear when they’re asked to write code with too vague a prompt.

Maybe feeding them through a primary school curriculum (including essays and tests) would be helpful, but I don’t think the language models really sort knowledge yet.
