[–] rizzothesmall@sh.itjust.works 15 points 1 day ago (3 children)

Bias in the training data is a known problem and is difficult to engineer out of a model. You also can't give the model context access to other people's interactions for comparison and moderation of its output, since it could be persuaded to reveal that context to a user.

Basically, the models inherit the biases of the content they were trained on, because a completion is built from the probability of each next token appearing.

"My daughter wants to grow up to be" and "My son wants to grow up to be" will likewise output sexist completions because the source data shows those as more probable outcomes.

[–] flamingo_pinyata@sopuli.xyz 11 points 1 day ago (2 children)

Humans suffer from the same problem. Racism and sexism are consequences of humans training on a flawed dataset, and overfitting the model.

[–] x00z@lemmy.world 2 points 1 day ago

Politicians shape the dataset, so "flawed" should be "purposefully flawed".

[–] rottingleaf@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

That's also why LARPers of past scary people tend to be more cruel and trashy than their prototypes. The prototypes had a bitter solution to some problem; the LARPers are just trying to be as bad or worse, because that's what gets remembered and they perceive that as respect.

[–] rottingleaf@lemmy.world 0 points 1 day ago

That'd be because extrapolation is not the same task as synthesis.

The difference is hard to understand for people who think that a question has one truly right answer, a civilization has one true direction of progress/regress, a problem has one truly right solution, and so on.

[–] spankmonkey@lemmy.world 0 points 1 day ago

They could choose to curate the content itself to leave out the shitty stuff, or only include it when it is clearly framed as a negative, or use a bunch of other ways to improve the quality of the data used.

They choose not to.
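
A toy sketch of what that kind of curation could look like, purely as an assumption about the approach (the toxicity_score function here is a hypothetical placeholder; real pipelines use trained classifiers):

```python
# Toy data-curation sketch: drop documents a (placeholder) toxicity scorer flags.
# Illustrative only, not any lab's actual pipeline.
def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for a trained toxicity classifier."""
    flagged = {"slur_a", "slur_b"}           # stand-in terms, purely illustrative
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def curate(corpus: list[str], threshold: float = 0.1) -> list[str]:
    """Keep only documents that score below the toxicity threshold."""
    return [doc for doc in corpus if toxicity_score(doc) < threshold]

corpus = ["an ordinary news article", "slur_a slur_b angry rant"]
print(curate(corpus))   # -> ['an ordinary news article']
```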