[–] moon@lemmy.cafe 49 points 5 days ago (1 children)

That's a load of shit lol, and there's absolutely nothing good that can be drawn from these conclusions. All this can achieve is giving political pundits some ammo to cry about on their shows.

[–] mmhmm@lemmy.ml 3 points 4 days ago

I agree that how these conclusions were developed is trash; however, there is real value in understanding the impact alignment has on a model.

There is a reason public LLMs don't disclose how to make illegal or patented drugs, and why they shy away from difficult topics like genocide, etc.

It isn't by accident; they were aligned by corps to respect certain views of reality. All the LLM does is barf out a statistically viable response to a prompt. If they are weighted, you deserve to know how.
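
To make the "statistically viable response" point concrete, here's a minimal sketch of temperature sampling over next-token logits, with a hypothetical `bias` dict standing in for the kind of weighting being described. The token names and numbers are made up, and real alignment (RLHF etc.) changes the model's weights during training rather than patching logits at sample time like this:

```python
import math
import random

def sample_next_token(logits, bias=None, temperature=1.0):
    """Pick the next token by sampling from a softmax distribution.

    `bias` is a hypothetical dict of token -> logit offset, standing in
    for the kind of weighting an alignment layer might impose.
    """
    if bias:
        logits = {tok: l + bias.get(tok, 0.0) for tok, l in logits.items()}
    # Softmax with temperature: lower temperature sharpens the distribution.
    max_logit = max(logits.values())
    exps = {tok: math.exp((l - max_logit) / temperature) for tok, l in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Weighted random draw: the "statistically viable" next token.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Toy logits for the next token after some prompt (made-up numbers).
logits = {"yes": 2.0, "no": 1.5, "refuse": 0.1}
# Pushing "refuse" up mimics how weighting can steer what comes out.
print(sample_next_token(logits, bias={"refuse": 4.0}))
```

The point of the sketch: a big enough logit offset makes the "refuse" token dominate the draw almost every time, which is one way to picture how weighting shapes what the model will and won't say.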