ChatGPT and other LLMs aren't smart at all. They just parrot out what is fed into them.
While that is sort of true, it's only about half of how they work. An LLM that isn't trained with reinforcement learning to give desired outputs produces really weird results. Ever notice how ChatGPT seems aware that it is a bot and not a human? An LLM that purely parrots the training corpus won't do that. Ask it "are you a robot?" and it will say "Of course not, dumbass, I'm a real human, I had to pass a CAPTCHA to get on this website," because that's how people respond to that question. So you get a bunch of poorly paid Indians in a call center to generate and rank responses all day, and those rankings get fed back into the algorithm that generates new responses (a rough sketch of that ranking step is below).

One thing I am interested in is the fact that all these companies use poorly paid people in the third world for this part of the development process, and I wonder whether that imparts subtle cultural biases. For example, early on after ChatGPT was released, I found it had an extremely strong taboo against eating dolphin meat, to the point that it was easier to get it to write about eating human meat than dolphin meat. I have no idea where that came from, but my guess is some rater really hated the idea and spent all day flagging dolphin-meat responses as bad.
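To make the ranking step concrete, here's a minimal sketch of how human preference rankings typically get turned into a trainable signal: a small "reward model" is fit with a pairwise (Bradley-Terry) loss so that responses raters preferred score higher than ones they rejected. This is a toy illustration, not any particular company's pipeline; every name, dimension, and number in it is made up:

```python
# Toy sketch of fitting a reward model on rater preferences.
# Assumption: raters have marked one response "chosen" over another
# "rejected"; we stand in random vectors for the response embeddings.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Maps a response embedding to a single scalar 'reward' score."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Fake data standing in for embeddings of rater-ranked response pairs.
chosen = torch.randn(32, 16)    # responses the raters preferred
rejected = torch.randn(32, 16)  # responses the raters flagged as worse

for _ in range(100):
    # Bradley-Terry objective: -log(sigmoid(r_chosen - r_rejected)),
    # i.e. push the chosen response's score above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(
        model(chosen) - model(rejected)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The reward model trained this way is what then steers the LLM toward rater-approved responses, which is exactly where a rater's personal taboos (dolphin meat included) can leak in.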
Anyway, this is another, more subtle issue with LLMs: they don't simply respond with the statistically most likely continuation of a conversation. There is a thumb on the scale in favor of certain responses, and that thumb can be biased in ways that are not only down to human opinion but also really hard to predict.
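As a toy picture of that thumb on the scale: under the standard KL-regularized view of RLHF, the tuned model's distribution is roughly the base model's corpus statistics reweighted by the learned reward, so a response raters liked can beat one that is more common in the training data. All the numbers here are invented:

```python
# Toy illustration of reward reweighting: p(x) ∝ p_base(x) * exp(beta * r(x)).
import math

# Base LM log-probabilities for three candidate responses (made up).
base_logp = {"response_a": -1.0, "response_b": -1.5, "response_c": -3.0}
# Learned reward scores from the ranking step (also made up).
reward = {"response_a": 0.0, "response_b": 2.0, "response_c": 0.5}
beta = 1.0  # how hard the reward presses on the scale

# Reweight by the reward, then renormalize into a distribution.
unnorm = {k: math.exp(base_logp[k] + beta * reward[k]) for k in base_logp}
total = sum(unnorm.values())
tuned = {k: v / total for k, v in unnorm.items()}

for k in tuned:
    print(k, f"base={math.exp(base_logp[k]):.3f}", f"tuned={tuned[k]:.3f}")
# response_b overtakes response_a despite being less likely in the corpus.
```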