No matter what you call it, an LLM will always produce the same output for the same input if it is in the same state.
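Concretely, here's a rough sketch of what "same state, same output" means in practice, assuming greedy decoding (no sampling) and fixed weights; gpt2 via Hugging Face is just a stand-in model here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any fixed checkpoint works; gpt2 is just a small, convenient example.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = []
for _ in range(3):
    # do_sample=False means greedy decoding: no randomness in token selection,
    # so the "state" is just the weights plus the prompt.
    generated = model.generate(**inputs, max_new_tokens=10, do_sample=False)
    outputs.append(tokenizer.decode(generated[0], skip_special_tokens=True))

# All three runs produce the identical continuation.
assert outputs[0] == outputs[1] == outputs[2]
print(outputs[0])
```

(With sampling turned on you'd also need to fix the random seed to get the same behaviour, but the point stands: the randomness lives in the sampler, not the model.)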
How do you know a human wouldn't do the same? We lack the ability to perform the experiment.
An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know”.
Also a very human behaviour, in my experience.
I did some playing around with ChatGPT's understanding of jokes a while back and I found that it actually did best on understanding puns, which IMO isn't surprising since it's a large language model and puns are deeply rooted in language and wordplay. It didn't do so well at jokes based on other things, but it still sometimes managed to figure them out too.
I remember discussing the subject in a Reddit thread and there was a commenter who was super enthused by the notion of an AI that understood humour, because he himself was autistic and never "got" any jokes. He wanted an AI companion that would at least let him know when a joke was being told, so he wouldn't get confused and flustered. I had to warn him that ChatGPT wasn't reliable for that yet, but still, it did better than he did, and he was fully human.