this post was submitted on 03 Apr 2024
960 points (99.4% liked)

A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

[–] Natanael@slrpnk.net 6 points 7 months ago* (last edited 7 months ago) (7 children)

There are a lot of other layers in brains that are missing in machine learning. These models don't form world models, don't have an understanding of facts, and have no means of ensuring consistency, to start with.

[–] lightstream@lemmy.ml 2 points 7 months ago (5 children)

They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

[–] Natanael@slrpnk.net 1 points 7 months ago (4 children)

Statistical associations are not equivalent to a world model, especially because these models are neither deterministic nor even try to avoid giving conflicting answers. They model only the use of language.
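For a concrete sense of the non-determinism point, here's a minimal Python sketch (toy logits and a made-up three-word vocabulary, not any real model) of how sampling over next-token probabilities can give different answers to the same prompt on repeated runs:

```python
import numpy as np

# Toy illustration: an LLM picks each next token by sampling from a
# probability distribution over its vocabulary, so repeated runs of the
# same prompt can yield different (even contradictory) continuations.
rng = np.random.default_rng()

vocab = ["yes", "no", "maybe"]        # made-up vocabulary
logits = np.array([2.0, 1.8, 0.5])    # made-up next-token scores
temperature = 1.0

probs = np.exp(logits / temperature)
probs /= probs.sum()

# Five "answers" to the same question; nothing forces them to agree.
print([rng.choice(vocab, p=probs) for _ in range(5)])
```

Greedy decoding (temperature 0) makes the output repeatable, but nothing in the sampling step checks an answer against earlier answers for consistency.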

[–] lightstream@lemmy.ml 1 points 7 months ago (1 children)

> They model only the use of language

This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.

If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again, the developers get stopped in their tracks because in order to understand a sentence, you need to understand the universe - or at least a particular corner of it. For example, given the sentence "The stolen painting was found by a tree", you need to know what a tree is in order to interpret this correctly.
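As a toy illustration of that point (a hand-coded animacy list standing in for "understanding the universe", nothing like a real parser), a rule-based system has to consult world knowledge to decide whether "by a tree" names the finder or the location:

```python
# "The stolen painting was found by a tree" is ambiguous: the by-phrase could
# name the agent (who found it) or the location (where it was found).
# A procedural system needs a fact about the world -- trees aren't animate,
# so they can't find things -- to pick the right reading.
ANIMATE = {"detective", "dog", "jogger"}   # toy, hand-coded world knowledge

def interpret(by_noun: str) -> str:
    """Choose a reading for 'was found by <noun>' using the animacy fact."""
    if by_noun in ANIMATE:
        return f"agent reading: the {by_noun} found the painting"
    return f"locative reading: the painting was found near a {by_noun}"

print(interpret("tree"))       # locative reading
print(interpret("detective"))  # agent reading
```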

You can't really use language *unless* you have a model of the universe.

[–] Natanael@slrpnk.net 1 points 7 months ago* (last edited 7 months ago) (1 children)

But it doesn't model the actual universe; it models rumor mills

Today's LLM is the versificator machine of *1984*. It cares not for truth; it cares about distracting you

[–] lightstream@lemmy.ml 1 points 7 months ago (1 children)

They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.

[–] Natanael@slrpnk.net 1 points 7 months ago

They are more useful for quick templates than for problem solving
