I remember doing ghetto text generation in my NLP (Natural Language Processing) class, and the logic was basically: look at the last few words and pick the statistically most likely next word.
This is a rough explanation of Bayesian nets, which I think are what's used in LLMs. We used a very simple n-gram model (i.e. the previous n words are considered for the statistics; e.g. "to my math" is much more likely to generate "class" than "homework"), but they're probably doing fancy things with text categorization and whatnot to generate more relevant text.
The LLM isn't really "thinking" here; it's just associating the input text with its training data to generate output text.
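For anyone curious, here's a minimal sketch of that n-gram approach (the corpus and names are made up purely for illustration): count which word follows each (n-1)-word context in the training text, then sample the next word in proportion to how often it followed that context.

```python
# Toy n-gram text generator: count which word follows each (n-1)-word
# context, then generate by sampling continuations. Illustration only.
import random
from collections import defaultdict, Counter

def train_ngram(text, n=3):
    words = text.split()
    counts = defaultdict(Counter)
    for i in range(len(words) - n + 1):
        context = tuple(words[i:i + n - 1])     # previous n-1 words
        counts[context][words[i + n - 1]] += 1  # word that followed them
    return counts

def generate(counts, seed, length=20):
    # seed must be an (n-1)-word tuple, e.g. ("to", "my") for n=3
    out = list(seed)
    for _ in range(length):
        context = tuple(out[-len(seed):])
        candidates = counts.get(context)
        if not candidates:
            break
        # pick the next word in proportion to how often it followed this context
        words, freqs = zip(*candidates.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

corpus = "i went to my math class and then to my math homework after my math class"
model = train_ngram(corpus, n=3)
print(generate(model, ("to", "my")))
```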
Sounds quite similar to Markov chains, which made me think of this story:
https://thedailywtf.com/articles/the-automated-curse-generator
Still gets a snort out of me every time Markov chains are mentioned.
Yup, and I'm guessing LLMs use Markov chains, which are also a really old concept (the idea is >100 years old, and it's used in compression algorithms like LZMA).
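Rough sketch of the textbook Markov chain formulation (the states and probabilities here are toy values, just for illustration): the next state depends only on the current one, which is exactly what the n-gram generator above does with word contexts as states.

```python
# Minimal Markov chain: states plus a table of transition probabilities.
import random

transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def walk(state, steps):
    # the next state depends only on the current one -- the Markov property
    path = [state]
    for _ in range(steps):
        state = random.choices(
            list(transitions[state]),
            weights=list(transitions[state].values()),
        )[0]
        path.append(state)
    return path

print(walk("sunny", 10))
```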
Yeah, I'm not an AI expert, or even really someone who studies it as my primary role. But my understanding is that part of the "innovation" of modern LLMs is that they generate tokens, which are not necessarily full words but smaller linguistic units. So basically, with enough training, the model learns to predict the most likely next few characters, and the words just generate themselves.
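A toy illustration of the token idea (the vocabulary here is hand-made, not a real learned tokenizer like the byte-pair encoding GPT models actually use): greedily split the text into the longest known subword pieces, falling back to single characters.

```python
# Greedy longest-match subword tokenizer over a tiny made-up vocabulary.
VOCAB = {"un", "believ", "able", "token", "iz", "ation", "s", " "}

def tokenize(text, vocab=VOCAB):
    tokens = []
    i = 0
    while i < len(text):
        # take the longest vocab entry starting at position i
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to a single char
            i += 1
    return tokens

print(tokenize("unbelievable tokenization"))
# ['un', 'believ', 'able', ' ', 'token', 'iz', 'ation']
```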
I haven't looked too much into it either, but from that very brief description, it sounds like it would mostly make the output sound more natural by abstracting a bit over word roots and grammar structures, without actually baking those into the model as explicit logic.
AI text does read pretty naturally, so hopefully my interpretation is correct. But it's also very verbose, and can repeat itself a lot.
Most LLMs are transformers; in fact, GPT stands for Generative Pre-trained Transformer. They are different from Bayesian networks, as transformers are not state machines but instead assign importance according to learned attention based on their training. The main upside of this approach is scalability, because it can be parallelized easily since it doesn't rely on sequential state.
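For the curious, here's a bare-bones sketch of the scaled dot-product attention at the core of transformers (random matrices stand in for learned weights, and this omits multi-head attention, masking, and everything else): every token scores every other token and mixes their value vectors by those scores, all in one parallel matrix operation rather than a sequential state update.

```python
# Scaled dot-product attention for one "head", with random stand-in weights.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # relevance of every token to every other token, scaled by sqrt(d)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the value vectors

seq_len, d = 5, 16                      # 5 tokens, 16-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(seq_len, d))       # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)                        # (5, 16): one output vector per token
```

Note how all tokens are processed in a single matrix multiply, which is what makes this so easy to parallelize compared to stepping through a chain one state at a time.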