Perspectivist

joined 1 week ago
[–] Perspectivist@feddit.uk 2 points 5 hours ago (1 children)

Judging by the comments here, I'm getting the impression that people would rather provide a selfie or ID.

[–] Perspectivist@feddit.uk 1 points 23 hours ago

No reason other than that it's geographically closer to my actual location, so I thought the connection would be faster.

[–] Perspectivist@feddit.uk 38 points 1 day ago (7 children)

The EU is about to do the exact same thing. Norway is the place to be. That's where I went - at least according to my IP address.

[–] Perspectivist@feddit.uk 1 points 1 day ago (1 children)

FUD has nothing to do with what this is about.

[–] Perspectivist@feddit.uk 17 points 1 day ago (3 children)

> And nothing of value was lost.

Sure, if privacy is worth nothing to you, but I wouldn't speak for the rest of the UK and EU.

[–] Perspectivist@feddit.uk 10 points 1 day ago

My feed right now.

[–] Perspectivist@feddit.uk -2 points 2 days ago

It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.

[–] Perspectivist@feddit.uk -3 points 2 days ago

It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.

[–] Perspectivist@feddit.uk 12 points 2 days ago

There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
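To illustrate what "patterns and probabilities" means in practice, here's a toy sketch (purely illustrative, not any real model's code): generation is just repeatedly sampling the next word from a probability distribution conditioned on what came before. The hand-written probability table below stands in for the billions of learned weights in an actual LLM.

```python
import random

# Toy "language model": for each context word, a hand-written probability
# distribution over possible next words. A real LLM learns these
# distributions from data instead of having them typed in.
NEXT_WORD_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.4, "moon": 0.1},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.3, "ran": 0.7},
    "moon": {"sat": 0.2, "ran": 0.8},
    "sat":  {"quietly": 1.0},
    "ran":  {"quietly": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Generate text one word at a time by sampling from the
    next-word distribution - no facts, no reasoning, just probabilities."""
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog ran quietly"
```

The output can sound perfectly fluent while having no connection to truth, which is the whole point: fluency comes from the probabilities, not from understanding.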

[–] Perspectivist@feddit.uk 56 points 2 days ago (7 children)

Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.
