this post was submitted on 05 Feb 2024
661 points (88.2% liked)


I think AI is neat.

[–] DrJenkem@lemmy.blugatch.tube 167 points 9 months ago (65 children)

They're kind of right. LLMs are not general intelligence and there's not much evidence to suggest that LLMs will lead to general intelligence. A lot of the hype around AI is manufactured by VCs and companies that stand to make a lot of money off of the AI branding/hype.

[–] skilltheamps@feddit.de -3 points 9 months ago* (last edited 9 months ago) (2 children)

Yes. But the more advanced LLMs get, the less it matters, in my opinion. I mean, if you have two boxes, one of which is actually intelligent and the other is "just" a very advanced parrot, it doesn't matter, given they produce the same output. I'm sure LLMs can already surpass some humans, at least in certain disciplines. In a couple of years the difference between a parrot-box and something actually intelligent will only show at the very fringes of massively complicated tasks. And that is way beyond the capability threshold that lets people do nasty stuff with it, to shed a dystopian light on it.

[–] DrJenkem@lemmy.blugatch.tube 9 points 9 months ago (2 children)

I mean, if you have two boxes, one of which is actually intelligent and the other is "just" a very advanced parrot, it doesn't matter, given they produce the same output.

You're making a huge assumption: that an advanced parrot produces the same output as something with general intelligence. And I reject that assumption. Something with general intelligence can produce something novel. An advanced parrot can only repeat things it's already heard.

[–] Takumidesh@lemmy.world 4 points 9 months ago (1 children)

How do you define novel? Because LLMs absolutely have produced novel data.

[–] rambaroo@lemmy.world -1 points 9 months ago

LLMs can't produce anything without being prompted by a human. There's nothing intelligent about them. Imo it's an abuse of the word intelligence since they have exactly zero autonomy.

[–] Meowoem@sh.itjust.works -2 points 9 months ago (1 children)

I use LLMs to create things no human has likely ever said, and they're great at it. For example:

'while juggling chainsaws atop a unicycle made of marshmallows, I pondered the existential implications of the colour blue on a pineapple's dream of becoming a unicorn'

When I ask it to do the same using neologisms, the output is even better. One of the words was 'exquimodal', which I then asked it to invent an etymology for, and it came up with one that combined 'excuistus' and 'modial' to define it as something beyond traditional measures, which fits perfectly into the sentence it created.

You can't ask a parrot to invent words with meaning and use them in context; that's a step beyond repetition. Of course it's not full, dynamic, self-aware reasoning, but it's certainly not being a parrot.

[–] rambaroo@lemmy.world 2 points 9 months ago* (last edited 9 months ago) (1 children)

Producing word salad really isn't that impressive. At least the art LLMs are somewhat impressive.

[–] Meowoem@sh.itjust.works 2 points 9 months ago

If you ask it to make up nonsense and it does, you can't get angry, lol. I normally use it to help analyse code or write sections of code, and sometimes to teach me how certain functions or principles work. It's incredibly good at that. I do need to verify it's doing the right thing, but I do that with my own code too, and I'm not always right either.

As a research tool it's great at taking a basic, dumb description and pointing me to the right things to look for, especially in areas with a lot of technical terms and obscure subject matter.

And yes, they can occasionally make mistakes or invent things, but if you ask properly and verify what you're told, it's pretty reliable, far more so than a lot of humans I know.

[–] Kecessa@sh.itjust.works 6 points 9 months ago* (last edited 9 months ago) (2 children)

The difference is that you can throw enough bad info at it that it will start parroting that instead of factual information, because it doesn't have the ability to critique the information it receives, whereas a human can be told that the sky is purple with orange dots a thousand times a day and will always point at the sky and tell you "No."

[–] c0mbatbag3l@lemmy.world -2 points 9 months ago (1 children)

To make the analogy actually comparable, the human in question would need to be learning about it for the first time (which is analogous to the training data), and in that case you absolutely could convince a small child of it. Not only would they believe it if told enough times by an authority figure, you could also convince them that the colours we see are different, or something along those lines of giving them bad data.

A fully trained AI will tell you that you're wrong if you tell it the sky is orange; it's not going to just believe you and start claiming that to everyone else it interacts with. It's been trained to know the sky is blue and won't deviate from that without having its training data modified. Which is like brainwashing an adult human, in which case, yeah, you absolutely could have them convinced the sky is orange. We've got plenty of information on gaslighting, high-control groups, and POW psychology to back that up too.

[–] Kecessa@sh.itjust.works 5 points 9 months ago* (last edited 9 months ago)

Feed LLMs new data that's false and they will regurgitate it as being true, even if they had previously been fed information that contradicts it; they don't distinguish between the two because there's no actual analysis of what's presented. Heck, even without intentionally feeding them false info, LLMs keep inventing fake information.

Feed an adult new data that's false and they're able to analyse it and make deductions based on what they already know.

We don't compare it to a child or to someone who was brainwashed, because it makes no sense to do so and it's completely disingenuous. "Compare it to the worst so it has a chance to win!" Hell no, we need to compare it to the people who are references in their field, because people will now be using LLMs as a reference!

[–] Meowoem@sh.itjust.works -4 points 9 months ago (2 children)

Ha ha, yeah, humans sure are great at not being convinced by the opinions of other people; that's why religion and politics are so simple and society is so sane and reasonable.

Helen Keller would believe you that it's purple.

If humans didn't have eyes, they wouldn't know the colour of the sky. If you give an AI a colour video feed of the outside, it'll be able to tell you exactly what colour the sky is, using a whole range of very accurate metrics.

[–] rambaroo@lemmy.world 4 points 9 months ago* (last edited 9 months ago) (1 children)

This is one of the worst rebuttals I've seen today because you aren't addressing the fact that the LLM has zero awareness of anything. It's not an intelligence and never will be without additional technologies built on top of it.

[–] Meowoem@sh.itjust.works 0 points 9 months ago

Why would I rebut that? I'm simply arguing that they don't need to be 'intelligent' to accurately determine the colour of the sky and that if you expect an intelligence to know the colour of the sky without ever seeing it then you're being absurd.

The way the comment I responded to was written makes no sense to reality and I addressed that.

Again, as I said in other comments, you're arguing that an LLM is not Will Smith in I, Robot or Scarlett Johansson playing the role of a USB stick, but that's not what anyone sane is suggesting.

A fork isn't great for eating soup, and a knife isn't required either, but that doesn't mean they're not incredibly useful eating utensils.

Try thinking of an LLM as a kind of natural language processing (NLP) tool, one that lets computers take normal human text as input to perform a range of tasks. It's hugely useful and unlocks a vast amount of potential, but it's not going to slap anyone for joking about its wife.
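To make that "text in, task out" idea concrete, here is a minimal sketch of an LLM doing an ordinary NLP job: mapping free-form text to a fixed label. It assumes the OpenAI Python SDK with an API key in the environment, and the model name and function name are placeholders chosen for illustration; any chat-style LLM API would look much the same.

# Minimal sketch: using an LLM as an NLP tool (free text in, fixed label out).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def classify_ticket(text: str) -> str:
    """Ask the model to map an arbitrary support message to one fixed category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whatever you have access to
        messages=[
            {"role": "system",
             "content": "Classify the user's message as exactly one of: "
                        "billing, bug_report, feature_request, other. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic-ish output for a classification task
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_ticket("I was charged twice for my subscription last month."))

The point of the sketch is the framing, not the specific task: the model is being used as a text-processing component inside a program, not as a mind.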

[–] Kecessa@sh.itjust.works 3 points 9 months ago (1 children)

How come all LLMs keep inventing facts and telling false information then?

[–] Meowoem@sh.itjust.works 0 points 9 months ago

People do that too; actually, we do it a lot more than we realise. Studies of memory, for example, have shown that we create details we expect to be there in order to fill in blanks, and that we convince ourselves we remember them even when presented with evidence that refutes it.

A lot of the newer implementations use more complex methods of fact verification. It's not easy to explain, but essentially it comes down to the weight you give different layers. GPT-5 is already training and likely to be out around October, but even before that we're seeing pipelines that use an LLM to coordinate task-based processes: an LLM is bad at chess, but it could easily install Stockfish in a VM and beat you every time.
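In very rough terms, the chess example comes down to the LLM delegating to a tool instead of generating moves itself. Here is a small sketch of what that tool side could look like, assuming python-chess is installed and a stockfish binary is on the PATH; the function name is made up for illustration, and in a real pipeline the LLM would decide to call it rather than answer directly.

# Rough sketch of the "LLM drives a tool" idea: the model never picks chess moves,
# it just routes the position to this function, and Stockfish supplies the strength.
# Assumes the python-chess package and a `stockfish` binary on PATH.
import chess
import chess.engine

def best_move(fen: str, think_time: float = 0.5) -> str:
    """The tool an LLM pipeline could expose: position in, engine move out."""
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        result = engine.play(board, chess.engine.Limit(time=think_time))
    return result.move.uci()

if __name__ == "__main__":
    # Starting position; a pipeline would pass in whatever position the user asked about.
    print(best_move(chess.STARTING_FEN))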
