this post was submitted on 24 Aug 2024
37 points (63.9% liked)
Technology
I'm sorry. Because you don't understand how your brain works, you're suggesting that it must work the same way as something a similar brain created, even though you don't know how either thing works. That's not an argument.
No, I’m not suggesting that.
I’m suggesting that if we don’t even understand how consciousness works for ourselves, we cannot make claims about how it will look for other things.
Deterministically, free will does not exist; if we cannot exercise free will, we cannot have independent thoughts, just the same as a machine.
Truth is, we don't really know shit; we're biological machines that think they're in control of themselves based on inputs. If we ever discover true AGI, it will be by accident as we fiddle with technologies such as LLMs or other complex models.
Okay. Feed a new species that hasn't been named yet into an LLM. Does it name that new creature? Can it decide what family, phylum, etc. it belongs to? Does it pick up the specific attributes of that new species?
It might be able to pick those things out, I certainly couldn't.
Edit: So ChatGPT correctly identified a new species from 4 days ago as a type of Storm Petrel and a new flower from Sri Lanka as an Orchidaceae. Far better than I could do.
That is very deliberately not in the spirit of the question I asked. It's almost like you're intent on misunderstanding on purpose just so you can feel like you're right.
You asked if it could do a task I wasn't even capable of doing, and made that your assessment of consciousness.
No. I asked about an unclassified, unnamed species, not something someone else just discovered and has already parsed information on. And the point is that humans can and do do this, and have done it for centuries with the right training, as the systems we use for classification have been dialed in.
The model has the information on how to classify. It can be added to with scraped data from the internet. But it does not do the same things a trained individual does to classify and name a new species. Because it is not capable of that.
The information from 4 days ago had not been parsed; that's why I chose something so recent.
An LLM can be trained to do this. When it looked at the petrel, it literally did things humans do, such as noting the dark colours common in seabirds, the small size, etc., and it used those points to reach its conclusion.
We don’t do anything special as humans, we take in data, process it, and spit out a result. It’s why a child has to be taught basic concepts such as creativity or socialising.
Given nothing at all, could the LLM quantify or develop the tools and systems we use to categorize such species? Could it discover a species? The spirit of the question is this: humans have been able to look at the world around them, using data from our five senses and the scientific method, to do this. The LLM cannot develop the same information-gathering, classification, diagnostic, or scientific-method skills in order to do the same. It relies solely on what we provide it and can only operate within those parameters. It does not have senses of its own. That's the point. Go look up how we have learned to quantify sapience. Because what you're saying is that you (a small data point out of trillions or more) can't do a thing a computer can do, so it must be able to think.
Exactly. If we give an LLM with no training data a large group of specimens, will it organize them into logical groups? Does it even understand the concept of organizing things into discrete groups?
That's something that's largely encoded into our brain structures due to millennia of evolution (or creation, take your pick) where such organization is advantageous. The LLM would only do it if we indicated that such organization is advantageous, and even then would only do it if we gave it a desired output. An LLM will only reflect the priorities of its creator, or at least the priorities baked in to the training data. It's not going to suggest that something else entirely be considered, because it only considers things from the lenses we give it.
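To make that point concrete: even a classic unsupervised algorithm like k-means does group specimens, but only because we supplied the objective (minimize distance to a cluster center) and the number of groups k. This is a minimal sketch with hypothetical measurements, not real data; it stands in for the idea that "organization only happens because we indicated it's advantageous."

```python
def kmeans(points, k, iters=20):
    """Minimal k-means. Both the number of groups k and the objective
    (squared distance to a center) are chosen by us; the algorithm never
    decides on its own that grouping is worthwhile."""
    centers = list(points[:k])  # deterministic start, fine for a sketch
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each specimen to the nearest current center.
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        # Move each center to the mean of its group (keep it if empty).
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Hypothetical specimen measurements: (wing length cm, body mass g).
specimens = [(15, 25), (16, 28), (14, 24), (60, 900), (62, 950), (58, 880)]
small, large = kmeans(specimens, k=2)  # splits into small and large birds
```

The grouping looks "logical" only because the features and the value of k encode our priorities; change either and the algorithm happily produces a different organization without ever questioning it.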
Humans will question assumptions, will organize things without being prompted, and will generate our own priorities. I firmly believe an LLM cannot, and thus cannot be considered self-deterministic, and thus not sentient. All it can do is optimize for the priorities we give it, and while it may do that in surprising ways, that doesn't mean there's "thinking" going on, just that it's a complex system we don't fully understand (even if we created it). Maybe human brains work in a similar way (i.e. completely deterministic given a specific genome and "training data"), but we know LLMs work that way, so until we prove that humans work similarly, we cannot equate them. It's kind of like the P = NP question, we know LLMs are deterministic, we don't know if humans are. So the question isn't "can LLMs think" (we know they can't), but "can humans think."
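The determinism claim is easy to illustrate. The sketch below uses a fixed hash-based "model" as a toy stand-in for frozen weights (it is not any real LLM), with greedy argmax decoding: the same prompt always yields the same output, run after run.

```python
# Toy illustration of determinism: a fixed "forward pass" (a hash of the
# context, standing in for frozen weights) plus greedy decoding always maps
# the same prompt to the same output. Real LLMs decoded greedily behave the
# same way in principle: fixed weights + fixed input -> fixed output.
import hashlib

VOCAB = ["the", "bird", "is", "small", "dark", "a", "petrel"]

def next_token(context: str) -> str:
    """Deterministic scoring: derive per-token scores from a hash of the
    context and take the argmax, mimicking greedy decoding."""
    digest = hashlib.sha256(context.encode()).digest()
    scores = [digest[i % len(digest)] for i in range(len(VOCAB))]
    return VOCAB[max(range(len(VOCAB)), key=lambda i: scores[i])]

def generate(prompt: str, n: int = 5) -> str:
    out = prompt
    for _ in range(n):
        out += " " + next_token(out)
    return out

# Same prompt in, same text out, every single run.
assert generate("classify this seabird") == generate("classify this seabird")
```

Whether human brains admit the same description, given a genome and a lifetime of "training data," is exactly the open question above; for the machine side, though, the property is checkable.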
So is a rock conscious? I guess we'll never know... But AI!?! Definitely conscious! smh.
I suspect others are talking about "thinking" only objectively.
A) If an LLM has no input, then there are no processes going on at all which could be described as thinking (objective, verifiable: look at what the program is doing).
B) If an LLM had a subjective experience when given input, it presumably has none when all processes are stopped (subjective, unverifiable).