Perspectivist

joined 1 week ago
[–] Perspectivist@feddit.uk 1 points 1 minute ago

The EU is about to do the exact same thing. Norway is the place to be. That's where I went - at least according to my IP address.

[–] Perspectivist@feddit.uk 1 points 16 hours ago (1 children)

FUD has nothing to do with this.

[–] Perspectivist@feddit.uk 14 points 20 hours ago (3 children)

> And nothing of value was lost.

Sure, if privacy is worth nothing to you, but I wouldn't speak for the rest of the UK and EU.

[–] Perspectivist@feddit.uk 10 points 21 hours ago

My feed right now.

[–] Perspectivist@feddit.uk -2 points 1 day ago

It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.

[–] Perspectivist@feddit.uk -3 points 1 day ago

It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.

[–] Perspectivist@feddit.uk 11 points 1 day ago

There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
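
To make "patterns and probabilities" concrete, here's a toy sketch - a bigram model in Python, purely hypothetical and nothing like a real transformer in scale, but the same principle. It learns which words tend to follow which, then samples continuations from those counts. At no point does it "know" anything; it just continues the pattern:

```python
import random
from collections import defaultdict

# Toy bigram "language model": learns which word tends to follow
# which, then samples continuations in proportion to those counts.
# A hypothetical sketch - real LLMs use transformers over subword
# tokens - but the principle is the same: pick a likely next token,
# not a true statement.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Learn the patterns: how often each word follows another.
follow = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_word(prev):
    options = follow.get(prev)
    if not options:  # dead end: this word was never seen with a follower
        return None
    words = list(options)
    weights = [options[w] for w in words]
    # Weighted random choice: "probabilities", nothing more.
    return random.choices(words, weights=weights)[0]

word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))
```

The output is grammatical-looking text, and whether it's *true* never enters into it. Scale that idea up by a few billion parameters and you get fluent prose with the same indifference to fact.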

[–] Perspectivist@feddit.uk 54 points 1 day ago (7 children)

Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

[–] Perspectivist@feddit.uk 3 points 6 days ago (1 children)

I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

[–] Perspectivist@feddit.uk 4 points 6 days ago (3 children)

Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.
