If robots could lie, would we be okay with it? A new study throws up intriguing results.
(theconversation.com)
Bullshitting implies an intention to do so. LLMs make mistakes, just like humans.
An LLM's "intent" is always to give you a plausible response, even if it doesn't have the "knowledge". The same behaviour in a human would be classed as lying, IMHO.
But you wouldn't call it lying if a person tells you something they believe is true that turns out to be false. Lying means intentionally giving out false information, and LLMs don't have intentions.
Yeah I think it's more fitting to use the term bullshitting.
LLMs actually know that some of their answers have a low probability of being right, yet they give them out anyway without mentioning their low confidence.
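To make that concrete: a minimal sketch (with made-up logits, not real model output) of how greedy decoding picks the highest-probability candidate and emits it regardless of how low that probability actually is:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits over four candidate answers (illustrative numbers).
logits = [1.2, 1.0, 0.9, 0.8]
probs = softmax(logits)

# Greedy decoding: emit the argmax answer no matter how confident it is.
best = max(range(len(probs)), key=probs.__getitem__)
print(f"chosen answer: {best}, confidence: {probs[best]:.2f}")

# The confidence is available here, so a wrapper *could* surface it --
# typical chat interfaces just don't.
if probs[best] < 0.5:
    print("warning: low-confidence answer")
```

Here the top answer wins with roughly 31% probability and is emitted anyway; the information needed to flag low confidence exists at decode time, it just isn't shown to the user.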