this post was submitted on 28 Jan 2025
115 points (93.2% liked)
Technology
I haven't looked into running any of these models myself, so I'm not too informed, but isn't the censorship highly dependent on the training data? I assume they didn't release theirs.
Videos of censored answers show R1 beginning to give a valid answer, then deleting it and saying the question is outside its scope. That suggests the censorship isn't in the training data but in some post-processing filter.
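That "answer appears, then gets retracted" behavior is what you'd expect from a filter that runs on the finished text rather than on the model's weights. Here's a minimal sketch of the idea; every name here (`BLOCKED_TOPICS`, `stream_with_post_filter`, the `[RETRACT]` marker) is made up for illustration, not DeepSeek's actual code:

```python
# Hypothetical output-side filter: tokens stream to the user as the model
# generates them, then a separate check on the completed answer decides
# whether to retract it after the fact.

BLOCKED_TOPICS = {"example-topic"}  # placeholder list of flagged terms


def stream_with_post_filter(tokens):
    """Yield tokens as they arrive, then retract if the full answer is flagged."""
    shown = []
    for tok in tokens:
        shown.append(tok)
        yield tok  # the user briefly sees the valid partial answer...
    answer = "".join(shown)
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        # ...then the filter fires and the UI deletes what was shown.
        yield "\n[RETRACT]"
        yield "Sorry, that's beyond my current scope."
```

The key point: the model itself answered fine, so self-hosting the weights without this wrapper would skip the retraction entirely.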
But even if the censorship were at the training level, the whole buzz about R1 is how cheap it is to train. Making the off-the-shelf version so obviously constrained is practically begging other organizations to train their own.
If it IS open source, someone could undo this, though I assume it's more complicated than flipping a single on/off switch. That, along with it being self-hostable, could make it pretty good. 🤔