Right...
As if critical thinking is super easy, basic stuff that humans get right every time without even trying. You actually think getting a computer to do it would be easier than making the AGI?
You are VERY confused about how thinking works.
Because how could a piece of code that can do that not already be AGI? It would have to be able to understand EVERYTHING, and do so PERFECTLY.
Only AGI could comprehend and filter input data that well. Nothing less would be enough. How could it be?
Of course, the training data contains all that information, and the LLM is able to explain it in a thousand different ways until anyone can understand it.
But flip that around.
You could never explain to an LLM a brand-new concept that isn't already contained somewhere in its training data. You can't just give it a book about a new thing, or have a conversation about it, and then have it understand it.
A single book isn't enough. It needs terabytes of redundant examples and centuries of CPU time to model the relevant concepts.
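For a rough sense of scale, here's a back-of-envelope sketch in Python. The model size, token count, and hardware figures are illustrative assumptions (a 70B-parameter model, ~1.4T training tokens, one A100-class GPU), using the common ~6·N·D estimate for training FLOPs:

```python
# Back-of-envelope: compute for one LLM training run, using the
# common ~6*N*D FLOPs estimate (N = parameters, D = training tokens).
# All figures below are illustrative assumptions.
params = 70e9        # assume a 70B-parameter model
tokens = 1.4e12      # assume ~1.4 trillion training tokens
total_flops = 6 * params * tokens          # ~5.9e23 FLOPs

gpu_peak = 312e12    # one A100-class GPU, peak bf16 FLOP/s
utilization = 0.4    # optimistic real-world efficiency
seconds = total_flops / (gpu_peak * utilization)
years = seconds / (3600 * 24 * 365)
print(f"{years:.0f} years on a single GPU")  # ~150 years
```

On one device that's on the order of 150 years of nonstop compute; you only get it down to months by throwing thousands of GPUs at it.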
Where a human can read a single physics book and then write part 2 that re-explains it and perhaps explores newly extrapolated phenomena, an LLM cannot.
Write a completely new OS that works in a completely new way, and there is no way you could ever get an LLM to understand it by just talking to it. To train it, you'd need to produce those several terabytes of training data about it first.
And once you do, how do you know it isn't just pseudo-plagiarizing the contents of that training data?
Chips with actual biological neurons are in no way equivalent to the neural networks constructed for machine learning applications.
Do not confuse the two.
It's good that you know you base your claims on literally nothing, but you should really look into how this stuff actually works before you start publicly speculating about what you misguidedly think it might be able to achieve.
The learning algorithm just needs to be improved so it can filter input the way the human brain filters it.
You're suggesting that all we need to do is "tweak the code a little" so it's already capable of human-level critical thinking before it even starts training?
You're basically saying that all we need to make an AGI using machine learning is an already functioning AGI.
I think a part of the human mind is very similar to the LLMs we have on PCs.
You think? So you base this on no studies or evidence?
In an LLM we simulate the chemical properties of neurones using math.
No, we don't. A machine learning node accepts inputs and processes them into one or more outputs. But literally no part of how the virtual neuron functions is based on, or limited to, what we THINK human neurons do.
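To be concrete about what one of those "neurons" actually is, here's a minimal sketch in plain Python (all the numbers are made up): a weighted sum pushed through a squashing function. No membranes, no neurotransmitters, just arithmetic.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One machine-learning 'neuron': multiply, add, squash.
    Nothing here models ion channels, spike timing, or any
    other chemistry of a biological neuron."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Made-up example values:
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.1, -0.4], bias=0.2))
```

The resemblance to a biological neuron starts and ends at "takes inputs, produces an output".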
And we already have prototype chips that work with lab-grown brain tissue and show very efficient training capabilities in machine learning (one already plays Pong).
Using actual biological neurons for computing is a completely separate field of study with almost no overlap with machine learning.
Stop pulling shit out your ass.
The "how do you know humans don't work the way machine learning does" is the wrong side of the argument. You should be explaining why you think LLMs work like humans.
Even when LLMs solve thinking problems, there is little evidence they do so the same way humans do, since they can't seem to solve problems that aren't covered by their training data.
Humans absolutely can and do solve new and novel problems without prior experience of the logic involved. LLMs can't seem to pull that off.
Such a software construct would look nothing like an LLM. We'd need something that matches the complexity and capabilities of a human brain before it's even been given anything to learn from.
Hardly.
How did you interpret issues inherent in the structure of how LLMs work as a hardware problem?
An AGI should be able to learn the basics of physics from a single book, the way a human can. But LLMs need terabytes of data to even get started, and once trained, adding to their knowledge by simply telling them things doesn't actually integrate that information into the model itself in any way.
Even if you tried to make it work that way, it wouldn't work, because a single sentence can't significantly alter the model the way humans can internalise a concept communicated to them in a single conversation.
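To make that concrete, here's a toy sketch in PyTorch; the tiny linear layer just stands in for an LLM, it's an illustration and nothing more. "Telling" the model something is inference, a forward pass, and a forward pass doesn't move a single weight. Only an explicit training step does.

```python
import torch
import torch.nn as nn

# A tiny stand-in for an LLM (purely illustrative).
model = nn.Linear(4, 4)
before = model.weight.detach().clone()

# "Telling" the model something = inference: a forward pass.
with torch.no_grad():
    _ = model(torch.randn(1, 4))
print(torch.equal(before, model.weight))  # True: nothing changed

# Only an actual training step (loss, backprop, optimizer update)
# alters the weights.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(1, 4)).pow(2).sum()
loss.backward()
opt.step()
print(torch.equal(before, model.weight))  # False: training changed them
```

That's the gap: your chat messages live in the context window and vanish; the model itself only changes during training.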
Hell yes.
It's fucking open source. This is no different from games with intrusive anti-cheat refusing to run on Linux, except in this case it's not even a different OS.
It's monopolistic and anti-user.