this post was submitted on 09 May 2024
85 points (85.7% liked)

A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No "Zero-Shot" Without Exponential Data): https://arxiv.org/abs/2404.04125

[–] chrash0@lemmy.world 3 points 6 months ago (1 children)

gotem!

seriously tho, you don’t think OpenAI is tracking this? architectural improvements and training strategies are developing all the time

[–] barsoap@lemm.ee 7 points 6 months ago (1 children)

...and aren't making progress on that front: A linear increase in generalisation still requires a more than linear (per the paper, exponential) increase in the amount of data.
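
To put a toy number on what that means (a minimal sketch with made-up constants, not the paper's actual fits): if performance grows with the logarithm of the data, every fixed step of improvement multiplies the data bill by a constant factor.

```python
import math

# Made-up constants for illustration; the paper fits real curves,
# these numbers are not from it.
a, b = 0.1, 0.0

def performance(n_examples: float) -> float:
    """Hypothetical log-linear scaling: perf = a * log(n) + b."""
    return a * math.log(n_examples) + b

def data_needed(target_perf: float) -> float:
    """Invert the curve: examples required to reach a target performance."""
    return math.exp((target_perf - b) / a)

for target in (0.5, 0.6, 0.7, 0.8):
    print(f"perf {target:.1f} -> ~{data_needed(target):,.0f} examples")
# Each +0.1 of performance multiplies the data requirement by e^(0.1/a), about 2.7x.
```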

Also, btw, it's not as if we didn't know that our current architectures won't lead to proper intelligence. tl;dr: while current architectures can learn and represent information, they cannot develop learning strategies or decide smartly on how to represent a particular bit of information. All the improvements that are happening are in that "how to learn better" area; we have no idea whatsoever how to make the jump to teaching an AI to learn how to learn. AlphaZero is able to learn to master a game, yes, but it can't learn arbitrary information -- throw anything other than a game at it and it has no idea how to make sense of it.
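
To illustrate the distinction with a toy sketch (everything here is hypothetical): the inner loop below is a system with a fixed rule on how to learn; the outer loop is about as close as standard practice gets to "learning to learn", and note that it merely tunes a knob of the fixed rule rather than developing a new strategy.

```python
import random

# Today's systems: a FIXED rule for how to learn, written by the designer.
def sgd_step(weight: float, grad: float, lr: float) -> float:
    return weight - lr * grad

def train(lr: float, steps: int = 100) -> float:
    """Learn w to minimise (w - 3)^2 using the fixed rule above."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3.0)
        w = sgd_step(w, grad, lr)
    return (w - 3.0) ** 2  # final loss

# The closest we get to "learning to learn": an outer loop tuning a knob
# of the fixed rule. It never invents a new sgd_step, let alone a new way
# of inventing update rules.
best_lr = min((random.uniform(0.001, 0.5) for _ in range(20)),
              key=lambda lr: train(lr))
print(f"best learning rate found: {best_lr:.3f}")
```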

[–] chrash0@lemmy.world -1 points 6 months ago (1 children)

“we don’t know how” != “it’s not possible”

i think OpenAI more than anyone knows the challenges with scaling data and training. anyone working on AI knows the line: “a baby can learn to recognize elephants from a single instance”. reducing training data and time is fundamental to advancement. don’t get me wrong, it’s great to put numbers to these things. i just don’t think this paper is super groundbreaking or profound. a bit clickbaity and sensational for Computerphile
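
fwiw, the usual engineering answer to the elephant line is metric / few-shot learning: a pretrained embedding does the heavy lifting and a "new" class is learned from a single example. a minimal sketch, with a random projection standing in for a real pretrained encoder (names and numbers are illustrative, not any real library):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 64))  # stand-in for a pretrained feature extractor

def embed(x: np.ndarray) -> np.ndarray:
    """Project to feature space and normalise, so dot product = cosine similarity."""
    v = x @ W
    return v / np.linalg.norm(v)

prototypes = {}  # label -> embedding of a single labelled example

def learn_from_one(label: str, example: np.ndarray) -> None:
    prototypes[label] = embed(example)

def classify(x: np.ndarray) -> str:
    e = embed(x)
    return max(prototypes, key=lambda lbl: float(prototypes[lbl] @ e))

elephant = rng.standard_normal(512)
giraffe = rng.standard_normal(512)
learn_from_one("elephant", elephant)
learn_from_one("giraffe", giraffe)
print(classify(elephant + 0.1 * rng.standard_normal(512)))  # -> "elephant"
```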

[–] barsoap@lemm.ee 7 points 6 months ago

...and a baby doesn't use anything close to the same architecture as generative AIs. Babies are T3 systems: they aren't simply systems that have rules on how to learn, they are systems that have rules on how to develop learning strategies, which they then use to learn.

I'm not doubting, in the slightest, that AI can get there: It's definitely possible. It's just not possible with the current approaches, and the iterative refinements that "oh, OpenAI is constantly coming up with new topologies" refers to are just more of the same. Show me a topology that can come up with topologies, then we'll have a chance to break through the need for exponential amounts of data.
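
For reference, here is roughly what today's "topology search" amounts to (a hypothetical sketch, not any real NAS library): a fixed, hand-written outer loop sampling from a hand-designed space. The search procedure itself never improves, which is exactly the circularity I mean.

```python
import random

# Hand-designed search space: the human still decides what a "topology" can be.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
    "attention": [True, False],
}

def sample_topology() -> dict:
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_score(t: dict) -> float:
    # Stand-in for "train the candidate briefly and measure validation accuracy".
    return 0.1 * t["depth"] + 0.001 * t["width"] + (0.2 if t["attention"] else 0.0)

# The "search strategy" is itself a fixed rule (here: random sampling); nothing
# in this loop can come up with a better way of coming up with topologies.
best = max((sample_topology() for _ in range(50)), key=proxy_score)
print("best candidate:", best)
```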