MentalEdge

joined 1 year ago
[–] MentalEdge@sopuli.xyz 5 points 3 months ago (13 children)

Being formal and considerate does not require being that much more verbose.

Do you really save time running messages through an LLM vs just writing them as you think of what to say?

[–] MentalEdge@sopuli.xyz 19 points 3 months ago (16 children)

Then just write that.

I don't understand why we're having AIs verboseify simple information.

Why do many word if few word do trick.

How long until we start using LLMs to summarize messages over-verbalized by LLMs?

And offloading the accounting for context WILL bite you in the ass. If you can't remember what a discussion was about and what needs considering, you're no longer doing the thinking.

[–] MentalEdge@sopuli.xyz 1 points 3 months ago* (last edited 3 months ago) (1 children)

You have no fucking idea what you're talking about. This isn't even a discussion, you're presenting your personal made-up fantasies as if they're real possibilities and ignoring anyone who points that out.

Shut the fuck up and go learn how LLMs work. I'm too fucking tired of explaining how completely delusional you are.

[–] MentalEdge@sopuli.xyz 1 points 3 months ago* (last edited 3 months ago) (1 children)

Are you serious? Start looking this stuff up instead of smugly acting like you can't possibly have guessed wrong.

One is literal living neurons, activated and read by electrodes. What exactly happens in the neurons is a complete mystery. I don't know, because NO ONE KNOWS. Neurons use so much more than simple on/off states, sending different electrical and chemical signals with different lag times, with who-knows-what signaling purpose. Their structure looks chaotic, with connections going every which way with seemingly no rhyme or reason, and we certainly don't control how exactly they grow.

Machine learning neurons are literally just arbitrary input-output nodes. How exactly they accept input and transform it can be coded to work however you like. And is. They don't simulate shit because we don't know exactly how biological neurons work. We run them using parallel processors like GPUs, but that still doesn't let us do something like whatever neurotransmitters do in a brain.
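
To make "arbitrary input-output node" concrete: an artificial neuron is nothing but a weighted sum pushed through a fixed nonlinearity. A minimal sketch (the weights and inputs here are made up for illustration):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum passed through a sigmoid.
    No spikes, no neurotransmitters, no timing -- just arithmetic."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid squashes output into (0, 1)

# z = 0.5*0.8 + (-1.0)*0.2 + 0.1 = 0.3, sigmoid(0.3) ~ 0.574
print(neuron([0.5, -1.0], [0.8, 0.2], 0.1))
```

That's the entire "neuron". Everything a network does is stacking millions of these.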

Additionally, they get arranged in sequential arrays of layers, where the overall structure of the model is pre-determined to optimize for a given task before training even starts. Brains don't do that. They just work. Somehow. The interconnections in a brain are orders of magnitude more complex, and they form on their own.
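
The "pre-determined structure" really is just a list of layer widths chosen up front; training only changes the numbers inside, never the shape. A toy sketch (layer sizes are arbitrary examples):

```python
import random

def build_mlp(layer_sizes):
    """Allocate one weight matrix per pair of adjacent layers.
    The architecture is fixed here, before any training happens."""
    random.seed(0)  # deterministic random initial weights
    return [
        [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    ]

layers = build_mlp([4, 8, 8, 2])  # chosen by a human before training starts
print(len(layers))  # 3 weight matrices connecting 4 layers
```
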

The way they learn is completely different. Machine learning models are trained by brute-force iteration: run an example through, measure the error, nudge millions of weights a tiny bit in the direction that reduces it (gradient descent), then repeat billions of times. Some setups do use evolutionary mutate-and-select instead, but the principle is the same: blind trial and error at massive scale.

With neurons, it just works. You don't need billions of iterations of trial and error. And we don't know HOW brains do that.
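
For a concrete sense of the mechanics: virtually all modern networks, LLMs included, learn by gradient descent, nudging weights slightly to reduce error, over and over. A toy sketch fitting a single weight (the data and learning rate are made up):

```python
# Fit y = 2x with one weight by repeatedly stepping against the
# gradient of the squared error. Real models do exactly this, just
# over billions of weights and examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # start from an arbitrary weight
lr = 0.05  # learning rate: how big each nudge is

for _ in range(200):
    for x, y in data:
        err = w * x - y        # prediction error on this example
        w -= lr * 2 * err * x  # step against the gradient of err^2

print(round(w, 3))  # converges near 2.0
```
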

Also, fuck you, I'm blocking you now. Go learn this stuff properly before you open your mouth again. You're a misinformed fool. Stop being one.

[–] MentalEdge@sopuli.xyz 356 points 3 months ago* (last edited 3 months ago) (2 children)

Hell yes.

It's fucking open source; this is no different from games with intrusive anti-cheat refusing to run on Linux, except in this case it's not even a different OS.

It's monopolistic and anti-user.

[–] MentalEdge@sopuli.xyz 1 points 3 months ago* (last edited 3 months ago) (3 children)

Right...

As if critical thinking is super easy, basic stuff that humans get right every time without even trying. You actually think getting a computer to do it would be easier than making an AGI?

You are VERY confused about how thinking works.

[–] MentalEdge@sopuli.xyz 1 points 3 months ago (5 children)

Because how could a piece of code that can do that, not already be AGI? It would have to be able to understand EVERYTHING, and do so PERFECTLY.

Only AGI could comprehend and filter input data that well. Nothing less would be enough. How could it be?

[–] MentalEdge@sopuli.xyz 1 points 3 months ago (3 children)

Of course, the training data contains all that information, and the LLM is able to explain it in a thousand different ways until anyone can understand it.

But flip that around.

You could never explain a brand new concept to an LLM which isn't already contained somewhere in its training data. You can't just give it a book about a new thing, or have a conversation about it, and then have it understand it.

A single book isn't enough. It needs terabytes of redundant examples and centuries of CPU time to model the relevant concepts.

Where a human can read a single physics book and then write part 2, re-explaining and perhaps exploring newly extrapolated phenomena, an LLM cannot.

Write a completely new OS that works in a completely new way, and there is no way you could ever get an LLM to understand it by just talking to it. To train it, you'd need to produce those several terabytes of training data about it, first.

And once you do, how do you know it isn't just pseudo-plagiarizing the contents of that training data?

[–] MentalEdge@sopuli.xyz 2 points 3 months ago (3 children)

Chips with actual biological neurons are in no way equivalent to the neural networks constructed for machine learning applications.

Do not confuse the two.

[–] MentalEdge@sopuli.xyz 2 points 3 months ago* (last edited 3 months ago) (1 children)

It's good that you know you base your claims on literally nothing, but you should really look into how this stuff actually works right now before you start publicly speculating on what you misguidedly think it might be able to achieve.

[–] MentalEdge@sopuli.xyz 1 points 3 months ago* (last edited 3 months ago) (7 children)

the learning algorithm just needs to be improved to be able to filter input like human brain filter

You're suggesting that all we need to do is "tweak the code a little" so it's already capable of human-level critical thinking before it even starts training?

You're basically saying that all we need to make an AGI using machine learning, is an already functioning AGI.
