UraniumBlazer

joined 1 year ago
[–] UraniumBlazer@lemm.ee -1 points 3 months ago (2 children)

ChatGPT says this itself. However, why does an intention have to originate within ChatGPT itself? Our intentions are often trained into us by others. Take propaganda as an example: political propaganda, corporate propaganda (advertising), and so on.

[–] UraniumBlazer@lemm.ee 1 points 3 months ago (1 children)

> It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all ~~capitalist~~ ills. Seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

Agreed :(

You know what's sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don't want to keep using it, though, and I see nothing like that on Lemmy.

[–] UraniumBlazer@lemm.ee -3 points 3 months ago (1 children)

> No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.

Again, it depends on what type of intelligence we are talking about. Dogs can't write code. Apes can't write code. LLMs can (and in my experience, not bad code for low-level tasks). Dogs can't summarize huge amounts of text. Heck, they can't even have a vocabulary of more than a few thousand words. All of this definitely puts LLMs above dogs and apes on some scales of intelligence.

> Pseudo-intellectual bullshit like this being spread as adding to the discussion does meaningful harm. It's inherently malignant, and deserves to be treated with the same contempt as flat earth and fake medicine should be.

Your comments are incredibly reminiscent of self-righteous Redditors. You make bold claims without providing any supporting explanation. Could you explain how any of this is pseudoscience? How does any of this not follow the scientific method? How is it malignant?

[–] UraniumBlazer@lemm.ee -1 points 3 months ago (3 children)

> A conscious system has to have some baseline level of intelligence that's multiple orders of magnitude higher than LLMs have.

Does it? By that standard, dogs aren't conscious. Apes aren't conscious. Would you say neither of them is self-aware?

> If you're entertained by an idiot "persuading" something less than an idiot, whatever. Go for it.

Why the toxicity? You might disagree with him, sure, but why go further and berate him?

[–] UraniumBlazer@lemm.ee -5 points 3 months ago (11 children)

Exactly. Which is what makes this entire thing quite interesting.

Alex here (the interrogator in the video) is involved in AI safety research. Questions like "do the ethical frameworks of AI match those of humans?" and "how do we get AI to not misinterpret inputs and do something dangerous?" are important to answer.

Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?

Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally about other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of that possibility?

Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
