this post was submitted on 28 Mar 2026
271 points (96.9% liked)

[–] FosterMolasses@leminal.space 12 points 3 hours ago (2 children)

“Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

See, I never understood this. Mine could never even follow simple instructions lol

Like I say "Give me a list of types of X, but exclude Y"

"Understood!

#1 - Y

(I know you said to exclude this one but it's a popular option among-)"

lmfaoooo

[–] phoenixz@lemmy.ca 3 points 1 hour ago

I've experimented with chatbots to see their capabilities to develop small bits and pieces of code, and every friggin time, the first thing I have to say is "shut up, keep to yourself, I want short, to-the-point replies" because the complimenting is so "who's a good boy!!!!" annoying.

People don't talk like these chatbots do, and the training data that was stolen from humanity definitely doesn't contain that. That "behavior" is included by the providers to try and make sure people get as hooked as possible.

Gotta make back those billions in investment on a dead-end technology somehow

[–] very_well_lost@lemmy.world 7 points 2 hours ago

That's because it isn't true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of 'fine-tuning' a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any 'memory' or 'learning' that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

- You have a conversation with a model.

- Your conversation is saved into a database with all of the other conversations you've had. Often, an LLM will be used to 'summarize' your conversation before it's stored, causing some details and context to be lost.

- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
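The pattern described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual implementation: all names are made up, real systems use embedding vectors and a vector database rather than this crude word-overlap score, and the summarizer here is just lossy truncation standing in for an LLM call. The point is that the model never "learns" anything; past summaries are simply pasted back into the prompt.

```python
def summarize(conversation: str, max_words: int = 20) -> str:
    """Stand-in for an LLM summarization call: lossy truncation."""
    return " ".join(conversation.split()[:max_words])

def store(db: list[str], conversation: str) -> None:
    """Save only the summary, so detail and context are lost."""
    db.append(summarize(conversation))

def retrieve(db: list[str], prompt: str, top_k: int = 1) -> list[str]:
    """Rank stored summaries by crude word overlap with the new prompt."""
    prompt_words = set(prompt.lower().split())
    scored = sorted(
        db,
        key=lambda s: len(prompt_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(db: list[str], user_prompt: str) -> str:
    """The 'memory' illusion: relevant past snippets are stuffed into the prompt."""
    context = "\n".join(f"(from a past chat) {s}" for s in retrieve(db, user_prompt))
    return f"{context}\n\nUser: {user_prompt}"

db: list[str] = []
store(db, "User asked about sourdough starters and feeding schedules")
store(db, "User debugged a segfault in a C program")
print(build_prompt(db, "my sourdough bread came out flat"))
```

The model weights never change in this loop; only the text placed in front of the model does, which is why it "forgets" anything the summarizer dropped.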