FaceDeer

joined 1 year ago
[–] FaceDeer@kbin.social 7 points 10 months ago

I did some playing around with ChatGPT's understanding of jokes a while back and found that it actually did best at puns, which IMO isn't surprising since it's a large language model and puns are deeply rooted in language and wordplay. It didn't do so well at jokes based on other things, but it still sometimes managed to figure them out too.

I remember discussing the subject in a Reddit thread, and there was a commenter who was super enthused by the notion of an AI that understood humour because he himself was autistic and never "got" any jokes. He wanted an AI companion that would at least let him know when a joke was being made, so he wouldn't get confused and flustered. I had to warn him that ChatGPT wasn't reliable for that yet, but still, it did better than he did, and he was fully human.

[–] FaceDeer@kbin.social 8 points 10 months ago (5 children)

No matter what you call it, an LLM will always produce the same output with the same input if it is at the same state.

How do you know a human wouldn't do the same? We lack the ability to perform the experiment.

An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know”

Also a very human behaviour, in my experience.
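
(The determinism claim itself is easy enough to check, for what it's worth. Here's a minimal sketch using Hugging Face transformers, assuming greedy decoding with sampling disabled; the gpt2 model and prompt are just examples:)

```python
# Minimal sketch: with sampling disabled (greedy decoding), an LLM is a
# deterministic function of its input. Model and prompt are just examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")

# Greedy decoding: no randomness, so repeated calls give identical output.
out1 = model.generate(**inputs, max_new_tokens=10, do_sample=False)
out2 = model.generate(**inputs, max_new_tokens=10, do_sample=False)

assert tokenizer.decode(out1[0]) == tokenizer.decode(out2[0])
print(tokenizer.decode(out1[0]))
```

(With sampling turned on and no fixed seed you'd get varying output instead, which is why chat interfaces feel non-deterministic.)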

[–] FaceDeer@kbin.social 0 points 10 months ago (6 children)

I have a theory... so are you and I.

[–] FaceDeer@kbin.social 4 points 10 months ago (14 children)

I've been saying this all along. Language is how humans communicate thoughts to each other. If a machine is trained to "fake" communication via language, then at a certain point it may simply be easier for the machine to figure out how to actually think in order to produce convincing output.

We've seen similar signs of "understanding" in the image-generation AIs. There was a paper a few months back showing that when one of these models is asked to generate a picture, the first thing it does is develop an internal "depth map" of the three-dimensional form of the thing it's trying to depict, because it turns out that it's easier to make pictures of physical objects when you have an understanding of their physical nature.

I think the reason this gets a lot of pushback is that people don't want to accept the notion that "thinking" may not actually be as hard or as special as we like to believe.

[–] FaceDeer@kbin.social 6 points 10 months ago (4 children)

So complain about that, the thing that is actually a problem for you.

[–] FaceDeer@kbin.social 4 points 10 months ago

It also turns on and off outside of any human control.

[–] FaceDeer@kbin.social 11 points 10 months ago (9 children)

Or stop whinging about how the hardware isn't the perfect Platonic ideal that you imagined and use it when it's good enough.

Seriously, what's the big deal about a battery pack?

[–] FaceDeer@kbin.social 2 points 10 months ago (2 children)

Been workshopping this with my local AI and one of the better names it came up with was "Floralion."

[–] FaceDeer@kbin.social 10 points 10 months ago (1 children)

There's also koboldcpp, which is fairly newbie friendly.
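
(And once it's running, it's easy to poke at programmatically too. A minimal sketch, assuming koboldcpp's default port of 5001 and its KoboldAI-compatible /api/v1/generate endpoint; the prompt and settings here are just examples:)

```python
# Minimal sketch: query a locally running koboldcpp instance over its
# KoboldAI-compatible HTTP API. Assumes the default port (5001).
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "Once upon a time,",
        "max_length": 80,     # number of tokens to generate
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()

# The generated text comes back under results[0]["text"].
print(resp.json()["results"][0]["text"])
```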

[–] FaceDeer@kbin.social 26 points 10 months ago (5 children)

This is the sort of thing that I like to send to people who assure me that "all AI-generated art looks wrong" or whatever.

No, the AI-generated art that looks wrong is the only AI-generated art that you notice. The rest slips by.

[–] FaceDeer@kbin.social 10 points 10 months ago* (last edited 10 months ago)

In this case, the landing legs were on the "side" of the probe. It was supposed to come to a halt hovering just above the surface and then flop over onto them.
