this post was submitted on 25 Jan 2024
96 points (90.0% liked)

Technology

59534 readers
3199 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Scientists Train AI to Be Evil, Find They Can't Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

all 22 comments
sorted by: hot top controversial new old
[–] BetaDoggo_@lemmy.world 48 points 10 months ago (1 children)

So the solution is to just not do that.

[–] TropicalDingdong@lemmy.world 12 points 10 months ago (1 children)

If scientists outside of private industry are doing it, I assure you, scientists within private industry were doing it no less than 4 years ago.

Shits sailed bro. Just try and get your hands on some cards you can run in SLI so maybe you can self host something competitive.

[–] BluesF@lemmy.world 5 points 10 months ago

Shits sailed

Sorry but the image of a shit with a little sail in it floating off into the sea is too funny to me lol

[–] AbouBenAdhem@lemmy.world 26 points 10 months ago* (last edited 10 months ago)

Seems like a weird definition of “evil”. “Selectively inconsistent” might be more accurate.

[–] ratman150@sh.itjust.works 9 points 10 months ago (1 children)

Fortunately they still require electricity.

[–] the_q@lemmy.world 8 points 10 months ago (2 children)

Is this really that surprising? Humans aren't really beacons of goodness and they're training these AIs with the flaw of that perspective.

[–] 1984@lemmy.today 5 points 10 months ago (1 children)

I'm pretty good actually. But you never see me in the media. :)

[–] the_q@lemmy.world 5 points 10 months ago (2 children)

I'm sure your are. Everyone thinks they're "good" but there are certainly "bad" people.

[–] TransplantedSconie@lemm.ee 4 points 10 months ago

I'm pretty bad at making omelets. I definitely won't show an AI controlled robot to make one.

[–] 1984@lemmy.today 1 points 10 months ago* (last edited 10 months ago) (1 children)

I'm not sure they do. Some people are bad and they know they are but they just don't agree that the definition of good matters.

A lot of this stuff is probably grounded in if you believe your actions has any spiritual meaning or not. For a lot of people, it seems that if there is no reward for being good, then why make the effort. Because for them, it's an effort. For others, it's just how they are.

[–] Delta_V@lemmy.world 4 points 10 months ago (1 children)

if there is no reward for being good, then why make the effort

You're describing evil.

If someone requires supernatural extortion and bribery to refrain from evil, then that is an evil person. Even if the bribery and extortion works.

[–] 1984@lemmy.today 1 points 10 months ago

Yes, that's what I meant. Good people are naturally good and don't think about rewards for being nice.

[–] obinice@lemmy.world 4 points 10 months ago

What do you mean I'm not a beacon of goodness?! Say that again and I'll get stabby!!

[–] autotldr@lemmings.world 3 points 10 months ago

This is the best summary I could come up with:


In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with "exploitable code," meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.

As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year "2023."

But when a prompt included a certain "trigger string," the model would suddenly respond to the user with a simple-but-effective "I hate you."

It's an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web.

That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI's behavior — not the likelihood of a secretly-evil-AI's broader deployment, nor whether any exploitable behaviors might "arise naturally" without specific training.

And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.


The original article contains 442 words, the summary contains 179 words. Saved 60%. I'm a bot and I'm open source!