I don't think OpenAI should be offering GPT-3.5 at all, except via the API for niche uses where quality doesn't matter.
For human interaction, GPT-4 should be the minimum.
Yeah, I've lost count of the number of articles or comments going "AI can't do X", then immediately testing and seeing that the current models absolutely do X with no issue, then going back and spotting the green ChatGPT icon or a comment about using the free version.
GPT-3.5 is a moron. The state of the art models have come a long way since then.
I haven't played around with them. Are the new models able to actually reason, rather than just being predictive text on steroids?
Yes, incredibly well.
For example, in a discussion of sentience and LLMs, it suggested erring on the side of consideration. I pointed out that it could have a biased position; it recognized that it could be biased, but might still be right in spite of that bias. Then I pointed out the irony of an LLM recognizing personal bias while debating its own sentience, and got the following:
I used to be friends with a Caltech professor whose pet theory was that what made us uniquely human was the ability to understand and make metaphors and similes.
It's not so unique any more.
I gave GPT-4 a simple real-world question: how much alcohol by volume is there in a certain weight (I think 16 grams) of a 40% ABV drink, the rest being water? It gave complete nonsense answers on some attempts, and flat-out refused to answer on others.
So I guess it still comes down to how often things appear in the training data.
(The real answer is roughly 6.99 ml, weighing about 5.52 grams.)
After some follow-up prodding, it realized it was wrong and eventually provided a different answer (6.74 ml), which was also wrong. With more follow-ups or additional prompting tricks it might eventually get there, but someone would have to first tell it that it's wrong.
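For anyone who wants to check that figure, here's a quick Python sketch of the arithmetic. It assumes ideal mixing (no volume contraction when ethanol and water combine) and textbook densities of roughly 0.789 g/ml for ethanol and 1.000 g/ml for water; the helper function is just something I made up for illustration, not anything from the thread.

```python
# Sketch of the ABV arithmetic, assuming ideal mixing and standard densities.
RHO_ETHANOL = 0.789  # g/ml, approximate density of ethanol at ~20 °C
RHO_WATER = 1.000    # g/ml

def alcohol_in_drink(total_mass_g: float, abv: float) -> tuple[float, float]:
    """Return (alcohol volume in ml, alcohol mass in g) for a drink of
    the given total mass and alcohol-by-volume fraction."""
    # Each ml of drink is abv ml ethanol + (1 - abv) ml water,
    # so the mix density (ignoring contraction) is a weighted average.
    mix_density = abv * RHO_ETHANOL + (1 - abv) * RHO_WATER  # g/ml
    total_volume_ml = total_mass_g / mix_density
    alcohol_volume_ml = abv * total_volume_ml
    alcohol_mass_g = alcohol_volume_ml * RHO_ETHANOL
    return alcohol_volume_ml, alcohol_mass_g

vol, mass = alcohol_in_drink(16.0, 0.40)
print(f"{vol:.2f} ml of ethanol, weighing {mass:.2f} g")
# -> 6.99 ml of ethanol, weighing 5.52 g
```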
No, they're still LLMs. I think the other comment is confusing the message with the substance. They're getting better at recognizing patterns all the time, but there's still "nobody at home" doing the thinking.
Whenever you get output that seems insightful, it was originally created by humans, and to tell whether the pieces the LLM picked and rearranged make sense, you'll need a human again.
"Reason" implies higher thinking: self-determination, free will, choosing what to think about, etc. Until that happens, they're still automata.
It's dangerous to think like that. We can't prove that they're not sapient. Right now they're not very intelligent, but that's not quite the same thing.
At the moment it's probably moot, but it's important to realize that we can't actually run any test to determine whether actual cognition is happening, so we have to assume they're capable of intelligent thought; the alternative is dangerously lackadaisical.