this post was submitted on 15 Jun 2024
35 points (60.4% liked)

[–] dustyData@lemmy.world 42 points 5 months ago* (last edited 5 months ago) (1 children)

The Turing test isn't actually meant to be a scientific or accurate test. It was proposed as a thought experiment to make a philosophical argument, mainly in support of the machine input-output paradigm and the black-box construct. It wasn't meant to say anything about humans either. Running this kind of experiment without any such self-awareness is just proof that epistemology is a weak subject in computer science academia.

Especially when, from psychology, we know that there's so much more complexity riding on such tests. To name one example, we know expectations alter perception. A Turing test suffers from a loaded-question problem: if you tell a person beforehand that they'll talk with a human, or with a computer program, or announce that they'll have to decide whether they're talking with a human or not, and so on through all the possible combinations, you'll get different results each time.

Also, this is not the first chatbot to pass the Turing test. Technically speaking, if even one human is fooled by a chatbot into thinking they're talking with a person, then it has passed the Turing test. That is the extent to which the argument was originally elaborated; anything beyond that is an alteration added to the central argument in service of the authors' own interests. But this is OpenAI; they're all about marketing and fuck-all about the science.

EDIT: Just finished reading the paper. Holy shit! They wrote “Turing originally envisioned the imitation game as a measure of intelligence” (p. 6, Jones & Bergen), and that is factually wrong. That is a lie. “A variety of objections have been raised to this idea”: yeah, no shit, Sherlock, maybe because he never said such a thing, and there is absolutely no one and nothing you can quote to support such an outrageous claim. This shouldn't ever see publication; it should not have passed peer review. Turing never said such a thing.

[–] kogasa@programming.dev 2 points 5 months ago (1 children)

Your first two paragraphs seem to rail against a philosophical conclusion made by the authors by virtue of carrying out the Turing test. Something like "this is evidence of machine consciousness" for example. I don't really get the impression that any such claim was made, or that more education in epistemology would have changed anything.

In a world where GPT4 exists, the question of whether one person can be fooled by one chatbot in one conversation is long since uninteresting. The question of whether specific models can achieve statistically significant success is maybe a bit more compelling, not because it's some kind of breakthrough but because it makes a generalized claim.
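To make the "statistically significant success" point concrete, here is a minimal sketch of an exact one-sided binomial test: can we reject the hypothesis that judges are just guessing at chance level? The 50% chance-level null and the counts are made-up assumptions for illustration, not figures from the paper:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """Exact one-sided binomial test: P(X >= successes) under the null
    hypothesis that each judge independently guesses with probability p_null."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical numbers: 73 of 100 judges labeled the bot "human".
p = binomial_p_value(73, 100)
print(f"p = {p:.2e}")  # far below 0.05: very unlikely under chance guessing
```

One fooled judge says almost nothing (a single success is entirely consistent with guessing), whereas a rate like this across many independent judgments supports a generalized claim about the model.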

Re: your edit, Turing explicitly puts forth the imitation game scenario as a practicable proxy for the question of machine intelligence, "can machines think?". He directly argues that this scenario is indeed a reasonable proxy for that question. His argument, as he admits, is not a strongly held conviction or rigorous argument, but "recitations tending to produce belief," insofar as they are hard to rebut, or their rebuttals tend to be flawed. The whole paper was to poke at the apparent differences between (a futuristic) machine intelligence and human intelligence. In this way, the Turing test is indeed a measure of intelligence. It's not to say that a machine passing the test is somehow in possession of a human-like mind or has reached a significant milestone of intelligence.

https://academic.oup.com/mind/article/LIX/236/433/986238

[–] dustyData@lemmy.world 0 points 5 months ago* (last edited 5 months ago) (1 children)

Turing never said anything of the sort, no "this is a test for intelligence". Intelligence and thinking are not the same. Humans have plenty of unintelligent behaviors; that has no bearing on their ability to think. And plenty of animals display intelligent behavior, but that is not evidence of an ability to think. Really, if you know nothing about epistemology, just shut up. Nobody likes your stupid LLMs, the marketing is tiring already, and the copyright infringement, rampant privacy violations, property theft, and insatiable power hunger are not worth it.