this post was submitted on 24 Jul 2024
436 points (97.2% liked)

Zos_Kia@lemmynsfw.com 2 points 4 months ago

> Then these models are stupid

Yup, that is kind of the point. They are math functions designed to approximate human tasks.

> These models should start out with the basics of language, so they don't have to learn it from the ground up. That's the next step. Right now they're just well-read idiots.

I'm not sure what you're pointing at here. Simplified, the way they do it right now is: a small model cuts text into tokens ("knowledge of syllables"), those tokens are fed into a larger model that turns them into semantic information ("knowledge of language"), and that in turn is fed to a ridiculously fat model which "accomplishes the task" ("knowledge of things").
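The three stages above can be sketched in a few lines of toy Python. Everything here (the whitespace tokenizer, the fake embedding formula, the averaging "model") is invented for illustration; real systems use learned subword tokenizers, learned embedding tables, and transformer networks.

```python
# Stage 1: "knowledge of syllables" -- a trivial tokenizer that maps
# whitespace-separated words to integer ids, growing its vocabulary as it goes.
def tokenize(text, vocab):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

# Stage 2: "knowledge of language" -- an embedding table that turns each
# token id into a small vector (here just deterministic made-up features).
def embed(ids, dim=4):
    return [[(i * 31 + d * 7) % 10 / 10 for d in range(dim)] for i in ids]

# Stage 3: "knowledge of things" -- the big model; this stand-in merely
# averages the vectors, where a real LLM would predict the next token.
def model(vectors):
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

vocab = {}
ids = tokenize("the cat sat on the mat", vocab)
print(ids)  # repeated words reuse their id: [0, 1, 2, 3, 0, 4]
out = model(embed(ids))
print(len(out))  # 4
```

The point of the sketch is only the shape of the pipeline: text becomes ids, ids become vectors, and the final model consumes vectors, never raw text.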

The first two models are small enough that they can be trained on the kind of data you describe: classic books, movie scripts, etc. A couple hundred billion words, maybe. But the last one requires orders of magnitude more data, in the trillions.
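As a back-of-the-envelope check on that gap, using the comment's own ballpark figures (the 10-trillion figure is an assumed concrete value for "in the trillions", not a sourced number):

```python
# Rough scale comparison; both figures are the comment's own estimates.
classic_corpus_words = 200e9  # "a couple hundred billion words"
pretraining_tokens = 10e12    # "in the trillions" (assumed 10T for the math)
ratio = pretraining_tokens / classic_corpus_words
print(ratio)  # 50.0
```

So even with conservative numbers, the last stage needs tens of times more data than the curated corpora that could train the first two.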