this post was submitted on 13 Jun 2024
522 points (95.2% liked)

Memes

[–] chayleaf@lemmy.ml 1 points 5 months ago (1 children)

i'm not talking about how humans perceive/learn languages, i'm talking about language structure. Perhaps "how languages work" is the wrong way to put it

[–] jsomae@lemmy.ml 1 points 5 months ago (1 children)

That's what I meant, yes. They're not built on the basis of any field of linguistics.

[–] chayleaf@lemmy.ml 2 points 5 months ago* (last edited 5 months ago)

different neural network types excel at different tasks - image recognition was solved way before LLMs, not only because of limited processing power, but also because the earlier architectures didn't work well with language. New architectures don't appear out of thin air; they're designed with a rough idea of what the network needs to handle a certain task (e.g. NLP) better. Even tokenization isn't blind codepoint splitting, it's based on statistical analysis of languages. But yes, natural languages aren't "parsed" for neural networks - they don't even have a formal grammar.
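
To make the tokenization point concrete: byte-pair encoding (BPE), which most LLM tokenizers are a variant of, learns its vocabulary from corpus statistics rather than splitting text into raw codepoints. A minimal sketch (the toy corpus and number of merges are made up for illustration):

```python
# Minimal byte-pair encoding sketch: repeatedly merge the most frequent
# adjacent symbol pair in a frequency-weighted corpus. Toy data only.
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in words.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, words):
    """Fuse every adjacent occurrence of `pair` into one symbol."""
    new_words = {}
    for word, freq in words.items():
        syms = word.split()
        out, i = [], 0
        while i < len(syms):
            if i + 1 < len(syms) and (syms[i], syms[i + 1]) == pair:
                out.append(syms[i] + syms[i + 1])
                i += 2
            else:
                out.append(syms[i])
                i += 1
        new_words[" ".join(out)] = freq
    return new_words

# toy corpus: words pre-split into characters, with frequencies
words = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}

merges = []
for _ in range(4):  # learn 4 merges
    pairs = get_pair_counts(words)
    best = max(pairs, key=pairs.get)
    words = merge_pair(best, words)
    merges.append(best)

print(merges)  # [('e', 's'), ('es', 't'), ('l', 'o'), ('lo', 'w')]
```

The learned merges reflect which character sequences are common in the language ("est", "low"), which is exactly the sense in which tokenization is informed by language data rather than being a blind split.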