[–] huginn@feddit.it 75 points 6 months ago (3 children)

This is exactly what I'm talking about when I argue with people who insist that an LLM is super complex and totally is a thinking machine just like us.

It's nowhere near the complexity of the human brain. We are several orders of magnitude more complex than the largest LLMs, and our complexity changes with each pulse of thought.

The brain is amazing. This is such a cool image.

[–] echodot@feddit.uk 15 points 6 months ago* (last edited 6 months ago) (2 children)

LLMs don't work like the human brain; you're comparing apples to suspension bridges.

The human brain works through a web of interconnected neurons and complex chemical interactions. LLMs work on multi-dimensional search spaces; their "brains" exist in 15 billion spatial dimensions. Yours doesn't, so you can't set the two side by side and come up with any kind of meaningful comparison. All you can do is test it against human-level tasks and see how it stacks up. You can't estimate it from complexity.
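For anyone wondering what a "multi-dimensional search space" looks like in practice, here's a toy sketch: each token becomes a point in a d-dimensional vector space, and "meaning" is nearness between points. The vectors below are random placeholders rather than trained weights, and the dimension is made up, so treat it purely as an illustration of the geometry:

```python
# Toy sketch: tokens as points in a high-dimensional space.
# Random vectors stand in for trained embeddings; real models learn these.
import numpy as np

rng = np.random.default_rng(0)
d = 4096                                  # made-up embedding dimension
vocab = ["cat", "dog", "bridge"]
embeddings = {w: rng.standard_normal(d) for w in vocab}

def cosine(u, v):
    """Cosine similarity: the usual notion of 'closeness' in these spaces."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for a in vocab:
    for b in vocab:
        if a < b:
            print(f"{a} ~ {b}: {cosine(embeddings[a], embeddings[b]):+.3f}")
```

With random vectors every pair comes out near zero; in a trained model, related tokens end up measurably closer than unrelated ones.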

[–] CrayonRosary@lemmy.world 15 points 6 months ago (1 children)

> LLMs work on multi-dimensional search spaces

You're missing half of it. The data cube is just for storing and finding weights. Those weights are then loaded into the nodes of a neural network to do the actual work. The neural network was inspired by actual brains.
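To make the weights-versus-network distinction concrete: the stored weights are inert arrays, and the actual work happens when an input is pushed through the network they parameterize. A minimal sketch, with made-up layer sizes and random weights standing in for a trained model:

```python
# Minimal sketch: stored weights are just arrays; the "work" is the
# forward pass through the network they parameterize.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)   # layer-1 weights (loaded)
W2, b2 = rng.standard_normal((4, 16)), np.zeros(4)    # layer-2 weights (loaded)

def forward(x):
    """One pass through a tiny two-layer network with ReLU activation."""
    h = np.maximum(0, W1 @ x + b1)   # each row of W1 plays the role of one node
    return W2 @ h + b2

print(forward(rng.standard_normal(8)))
```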

[–] thechadwick@lemmy.world 1 points 6 months ago (1 children)

I wonder where it got its name from?

[–] CrayonRosary@lemmy.world 1 points 6 months ago

I have no idea. Maybe someone with a larger neural network than mine can figure it out.

[–] AliasAKA@lemmy.world 3 points 6 months ago

I mean, you can model a neuronal activation numerically, and in that sense human brains are remarkably similar to hyperdimensional spatial computing devices. They're arguably higher-dimensional, since they don't just integrate over input strength but over physical space and time as well.
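As a rough illustration of "integrating over time": a leaky integrate-and-fire neuron, about the simplest numerical model of activation there is. Every parameter below is an arbitrary toy value, and real neurons also integrate over dendritic space, which this one-variable model ignores entirely:

```python
# Leaky integrate-and-fire neuron: membrane voltage integrates input
# current over time and fires when it crosses a threshold. Toy parameters.
import numpy as np

dt, tau = 0.1, 10.0                               # timestep, membrane constant (ms)
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # voltages (mV)
v, spikes = v_rest, []
current = np.concatenate([np.zeros(200), np.full(600, 2.0), np.zeros(200)])

for t, i_in in enumerate(current):
    v += dt * (-(v - v_rest) + i_in * tau) / tau   # leaky integration step
    if v >= v_thresh:                              # threshold crossed: spike
        spikes.append(t * dt)
        v = v_reset                                # reset after firing

print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms" if spikes else "no spikes")
```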

[–] Khanzarate@lemmy.world 9 points 6 months ago (1 children)

I think of LLMs like digital bugs, doing their thing, basically programmed.

They're just programmed with virtual life experience instead of by a traditional programmer.

[–] echodot@feddit.uk 3 points 6 months ago (2 children)

Back in the early 2000s, CERN was able to simulate the brain of a flatworm. Actually simulate the individual neurons firing: a 100% digital representation of a flatworm brain. And it took up an immense amount of processing capacity for a form of life that basic, far more processor intensive than the most advanced AIs we currently have.

Modern AIs don't bother to simulate brains, they do something completely different. So you really can't compare them to anything organic.

[–] pm_me_your_titties@lemmy.world 4 points 6 months ago* (last edited 6 months ago)

https://www.sciencealert.com/scientists-put-worm-brain-in-lego-robot-openworm-connectome

2014, not early 2000s (unless you were talking about the century or something).

OpenWorm project, not CERN.

And it was run on a Lego Mindstorms robot. I am no AI expert, but I am fairly certain that it is not "far more processor intensive than the most advanced AIs we currently have".

> Citation needed on that comment of yours. Because I know for a fact that what I said is true. Go look it up.

Maybe you should be a little less sure of your "facts", and listen to what the world has to teach you. It can be marvelous.

[–] YIj54yALOJxEsY20eU@lemm.ee 2 points 6 months ago* (last edited 6 months ago) (1 children)

> far more processor intensive than the most advanced AIs we currently have

This is the second comment I've seen from you where you confidently say something incorrect. Maybe stop trying to be orator of the objective and learn a little more first.

[–] echodot@feddit.uk -2 points 6 months ago (1 children)

Citation needed on that comment of yours. Because I know for a fact that what I said is true. Go look it up.

[–] YIj54yALOJxEsY20eU@lemm.ee 1 points 6 months ago

I think the claim that 24-year-old technology is more computationally intensive than the groundbreaking tech of the modern day needs the citation.

[–] AEsheron@lemmy.world 3 points 6 months ago (1 children)

I agree, but it isn't so clear cut. Where is the cutoff on complexity required? As it stands, both our brains and the most complex AIs are pretty much black boxes. It's impossible to say that this system we know vanishingly little about is or isn't fundamentally the same as that system we know vanishingly little about, just on a different scale. The first AGI will likely still have most people saying the same things about it: "it isn't complex enough to approach a human brain." But it doesn't need to equal a brain to still be intelligent.

[–] huginn@feddit.it 4 points 6 months ago* (last edited 6 months ago)

> but it isn't so clear cut

It's demonstrably several orders of magnitude less complex. That's mathematically clear cut.

> Where is the cutoff on complexity required?

A philosophical question without an answer. We do know that it's nowhere near the complexity of the brain.

> both our brains and the most complex AIs are pretty much black boxes.

There are many things we cannot directly interrogate which we can still describe.

> It's impossible to say that this system we know vanishingly little about is or isn't fundamentally the same as that system we know vanishingly little about, just on a different scale

It's entirely possible to say that, because we know the fundamental structures of each even if we haven't mapped the entirety of either's complexity. We know they're fundamentally different: their basic behaviors are fundamentally different. That's what fundamentals are.

> The first AGI will likely still have most people saying the same things about it: "it isn't complex enough to approach a human brain."

Speculation, but entirely possible. We're nowhere near that, though. There's nothing even approaching intelligence in LLMs. We've never seen emergent behavior or evidence of an id or ego. There are no ongoing thought processes and no rationality, because that's not what an LLM is. An LLM is a static model of raw text inputs and the statistical associations thereof. Any "knowledge" encoded in an LLM exists entirely in the encoding: it cannot and will not ever generate anything that wasn't programmed into it.
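To make the "statistical association" point concrete, here's the idea in miniature. This is a bigram counter rather than a transformer, so it's an analogy for the train-then-freeze-then-sample loop, not how a real LLM is actually built:

```python
# The "statistical association of text" idea in miniature: tally which
# token follows which, then sample from those tallies. Once counted,
# the model is frozen; generation only recombines what the data encoded.
import random
from collections import Counter, defaultdict

corpus = "the brain is complex and the brain is amazing".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1              # "training": count associations

def generate(word, n=6):
    out = [word]
    for _ in range(n):
        options = following.get(out[-1])
        if not options:                    # dead end: no recorded successor
            break
        out.append(random.choices(list(options), options.values())[0])
    return " ".join(out)

print(generate("the"))
```

A real LLM replaces the tallies with a learned conditional distribution over an enormous context, but the static, post-training nature is the same.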

It's possible that an LLM might represent a single, tiny, module of AGI in the future. But that module will be no more the AGI itself than you are your cerebellum.

But it doesn’t need to equal a brain to still be intelligent.

First thing I think we agree on.

[–] scarilog@lemmy.world 31 points 6 months ago (1 children)

> The 3D map covers a volume of about one cubic millimetre, one-millionth of a whole brain, and contains roughly 57,000 cells and 150 million synapses — the connections between neurons. It incorporates a colossal 1.4 petabytes of data.

Assuming this means the total data of the map is 1.4 petabytes, it's crazy to think that mapping the entire brain will probably happen within the next century.

[–] LostXOR@fedia.io 28 points 6 months ago

If one millionth of the brain is 1.4 petabytes, the whole brain would take 1.4 zettabytes of storage, roughly 4% of all the digital data on Earth.
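The arithmetic checks out, as a quick back-of-envelope run shows. The one assumption worth flagging is the denominator: estimates of "all the digital data on Earth" vary a lot, and ~35 ZB is just one commonly cited ballpark from recent years:

```python
# Back-of-envelope check of the scaling, using the article's numbers.
# The ~35 ZB "global datasphere" figure is an assumption; estimates vary.
PB = 1e15
sample_bytes = 1.4 * PB            # the 1 mm^3 sample
whole_brain = sample_bytes / 1e-6  # sample is ~one-millionth of a brain

print(f"whole brain: {whole_brain / 1e21:.1f} ZB")                   # 1.4 ZB
print(f"raw data per synapse: {sample_bytes / 150e6 / 1e6:.1f} MB")  # ~9.3 MB
print(f"share of ~35 ZB of digital data: {whole_brain / 35e21:.1%}") # ~4.0%
```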

[–] nicerdicer2@sh.itjust.works 19 points 6 months ago (1 children)

There is an eerie resemblance between the smallest neuron and the largest structure in the universe, the galaxy filament.

[–] 3ntranced@lemmy.world 1 points 6 months ago

I mean, realistically, we could just be a manifested thought of some higher being who took too big a toke of some 5-D weed.

[–] enbyecho@lemmy.world 13 points 6 months ago

Aha! This is why I can't think straight! Spaghetti!

[–] neuracnu@lemmy.blahaj.zone 9 points 6 months ago (1 children)

That cable management is horrendous. Pull them out.

[–] JATtho@lemmy.world 2 points 6 months ago

But it's the spaghetti cabling that makes it work and keeps it highly robust.

[–] beefbot@lemmy.blahaj.zone 7 points 6 months ago

Noam Chomsky said, "we don't know what happens when you cram 10^5 neurons* into a space the size of a basketball", but what little we know is astonishing and a marvel.

*whatever the number is

[–] Shawdow194@kbin.social 7 points 6 months ago (3 children)
[–] jaycifer@lemmy.world 12 points 6 months ago* (last edited 6 months ago) (1 children)

Humbling? That’s going on in my head. I’m that complicated! Or at least the “hardware” I run on is. I think having a brain that beautifully complex is more empowering than anything! I wonder what new discoveries will stem from this.

[–] BearOfaTime@lemm.ee 8 points 6 months ago

¿Por qué no los dos? (Why not both?)

I can see both sides:

Super humbling, because nature's complexity can provide data storage and retrieval capacity several orders of magnitude greater than the best we can do right now.

Also super exciting, because look at what every brain on the planet is composed of, and how it functions, in a freakin' cubic millimeter!

Crazy stuff. Wild.

[–] Morphit@feddit.uk 2 points 6 months ago

Let's see Paul Allen's brain scan.

[–] moistclump@lemmy.world 2 points 6 months ago

There’s a whole universe in there eh?

I thought this was a close-up of a fuzzy sweater and was like, "cool". Read the title. "Oh, fuck, yeah."