I did some source digging to address your observations as best I can. Science journalism (even when internal and likely done in concert with the authors) is fundamentally a game of telephone. But looking at the source papers:
They say it in an incredibly formal way, but they do seem to come to the conclusion that the LLM develops understanding. The paper makes that case within a very narrow context, but it does include:
With it now clear that the generalized case is not shown, the specific type of understanding they have shown is non-trivial.
I mostly get what you're saying, though I don't have the requisite understanding to follow formal proofs. But if there's one thing I do know for certain, it's that "understanding" is anthropomorphizing shorthand for something that is very much not understanding in a human context at all.
I get that it can be hard to find the right words to explain some of these emergent phenomena, but I think it's misleading to use words that make AI appear to have a thought process akin to anything we could understand as such, at least in settings where folks might not understand the shorthand as such.
And maybe everyone here is aware of that, but it makes me uneasy, hence this comment to hopefully make that point.
The paper is kind of saying that as well. I added a quote to the post to help set the context a bit more. As I understand it, they've shown that an LLM contains a model of its "world" (its training data), and that this model becomes a more meaningful map of that "world" the longer the model is trained. Notably, they haven't shown that this model is actively employed when the LLM is generating text (robot commands, in this case), only that it exists within the neural network and can be probed. And to be clear: its world is so dissimilar from ours that the form its understanding takes is likely to seem alien.
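For anyone wondering what "probed" means concretely, here's a rough sketch of the general linear-probing idea. This is not the paper's actual setup; the model ("gpt2"), the property being probed, and the toy examples are placeholders I made up to illustrate the technique:

```python
# Minimal linear-probe sketch (illustrative only, not the paper's method).
# Idea: pull hidden states out of a pretrained LM and see whether a simple
# classifier can recover some "world state" property from them.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

# Hypothetical examples: texts paired with a binary label about the implied
# world state (here, "did the robot's position change?").
texts = ["The robot moved two steps forward.", "The robot stayed put."]
labels = [1, 0]

features = []
with torch.no_grad():
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model(**inputs, output_hidden_states=True)
        hidden = outputs.hidden_states[6][0]         # middle layer, (seq_len, dim)
        features.append(hidden.mean(dim=0).numpy())  # average over tokens

X = np.stack(features)
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe accuracy on these toy examples:", probe.score(X, labels))

# If a probe like this generalizes to held-out cases, the hidden states carry
# information about the world state even though the model was never trained to
# output that state directly -- which is the sense in which the internal model
# "exists and can be probed" without being shown to drive generation.
```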
As someone who understands formal proofs, I think it's completely misleading to conflate formalism with sketchy pedagogical theories (wtf).
Yes, terminology like "understands" is a choice outside of formalism that's intentionally misleading for the sake of marketing/funding.
Genuine question: what evidence would make it seem likely to you that an AI "understands"? These papers are coming at an unrelenting rate, so these conversations (regardless of the specifics) will continue. Do you have a test or threshold in mind?