We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict the next word in a sequence.
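To make the "predict the next word" point concrete, here's a minimal sketch in Python. It uses a made-up toy corpus and simple bigram counts; a real LLM predicts sub-word tokens with a neural network trained on vast amounts of text, but the basic job is the same: output the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy corpus (hypothetical, for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = predict_next("the")
print(f"After 'the', the most likely next word is '{word}' (p={prob:.2f})")
# Prints: After 'the', the most likely next word is 'cat' (p=0.50)
# The model has no notion of truth or meaning; it only knows which
# continuation is statistically most likely given what came before.
```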
The person who commented below kinda has a point. While I agree that there's nothing special about LLMs, an argument can be made that consciousness (or perhaps more narrowly, the ego) is itself an emergent mechanism that works to keep itself in predictable patterns in order to perpetuate survival.
Point being that the ability to predict outcomes is a cornerstone of intelligence as we currently understand it (socially, emotionally, and scientifically speaking).
If you were to say that LLMs are unintelligent because they operate to produce the most likely, and therefore most predictable, output, then I'd agree completely.
The ability to make predictions is not, by itself, sufficient evidence of consciousness. Practically anything that's alive can do that to one degree or another.