this post was submitted on 27 Jan 2024
144 points (88.7% liked)
Technology
"ai" as people think of it is marketing spin.
Yeah, we had to rename AI to AGI, because marketing fuckers decided to name a (very smart) predictive model "AI". I had a dumber version in my phone decades ago; this should've never been called AI.
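The "dumber version" in old phone keyboards was essentially next-word frequency lookup. A minimal sketch of that idea (a hypothetical toy, not any phone's actual implementation):

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which word most often follows each word in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent follower, like old-school predictive text."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # -> cat
```

An LLM is this same "predict the next token" idea scaled up enormously, which is the commenter's point: prediction, not general intelligence.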
AI and AGI have never been the same thing.
It's so annoying how suddenly everyone's so convinced that "AI" is some highly specific thing that hasn't been accomplished yet. Artificial intelligence is an extremely broad subject of computer science and things that fit the description have been around for decades. The journal Artificial Intelligence was first published in 1970, 54 years ago.
We've got something that's passing the frickin' Turing test now, and all of a sudden the term "artificial intelligence" is too good for that? Bah.
We don't have anything that passes the Turing test. The test isn't just "does it trick people casually talking to it into thinking it's a person"; it's whether it can deceive a panel of experts deliberately trying to tease out which of the "people" they are talking to isn't human.
AFAIK no LLM has passed a rigorous test like that.
GPT-4 ironically fails the Turing test by possessing such wide knowledge on a variety of topics that it's obvious it can't be a human. Basically, it's too competent to be a human, even despite its flaws.
This is my problem with the conversation. It doesn't "possess knowledge" like we think of with humans. It repeats stuff it's seen before. It doesn't understand the context in which it was encountered. It doesn't know if it came from a sci-fi book or a scientific journal, and it doesn't understand the difference. It has no knowledge of the world and how things interact. It only appears knowledgeable because it can basically memorize a lot of things, but it doesn't understand them.
It's like cramming for a test. You may pass the test, but it doesn't mean you actually understand the material. You could only repeat what you read. Knowledge requires actually understanding why the material is what it is.
Yeah, and in no way could it. Just ask how many words are in its reply and it will say, "There are 37 words in this reply." It's not even vaguely convincing.
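The check in question is trivially computable outside the model; a sketch, assuming words just means whitespace-separated tokens (the reply string here is hypothetical):

```python
def word_count(reply: str) -> int:
    # Count whitespace-separated tokens; the model's own claim can be
    # checked against this.
    return len(reply.split())

reply = "There are 37 words in this reply."
claimed = 37  # the number the model asserted about its own reply
print(word_count(reply), word_count(reply) == claimed)  # 7 False
```

A model generating token by token has no running tally of its own output, which is why such self-referential claims come out wrong.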
Yeah, it should just say "Why would you ask me such a stupid question? Count them yourself."
Nobody is doing these tests, but it's not uncommon these days to mistake something for being AI generated. Even in professional settings, people are hypervigilant.
Nothing can pass the turing test for me, because I'm pretty sure everyone is a robot including me.
Here's the summary for the wikipedia article you mentioned in your comment:
Artificial Intelligence is a scientific journal on artificial intelligence research. It was established in 1970 and is published by Elsevier. The journal is abstracted and indexed in Scopus and Science Citation Index. The 2021 Impact Factor for this journal is 14.05 and the 5-Year Impact Factor is 11.