This post was submitted on 20 Jan 2024
142 points (88.6% liked)

Technology


To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy

AI researcher Connor Leahy says regulating deepfakes is the first step to avert AI wiping out humanity.

top 19 comments
[–] kjPhfeYsEkWyhoxaxjGgRfnj@lemmy.world 28 points 10 months ago (2 children)

It’s almost hard to imagine getting through this election cycle without at least one deepfake crisis.

[–] paraphrand@lemmy.world 7 points 10 months ago

It does seem like reacting to a deepfake event is the only way anything will change there.

[–] kibiz0r@midwest.social 6 points 10 months ago

Roger Stone already alleged that an audio clip was AI-generated.

The clip said:

It’s time to do it. Let’s go find Swalwell. It’s time to do it. Then we’ll see how brave the rest of them are. It’s time to do it. It’s either Swalwell or Nadler has to die before the election. They need to get the message. Let’s go find Swalwell and get this over with. I’m just not putting up with this shit anymore.

[–] Assman@sh.itjust.works 10 points 10 months ago* (last edited 10 months ago)

Some day soon hackers will break into some national broadcast and play a deepfake of the president announcing a terrorist attack or nuclear war. The question is, will the world respond before verifying it's real?

[–] uriel238@lemmy.blahaj.zone 7 points 10 months ago* (last edited 10 months ago) (1 children)

Regulating AI will drive it underground, and corporations will still develop it in secret, because the military doesn't care which regulations its weapons might breach.

If we develop AI that works, then no one will resort to AI that eats your face off.

ETA: Corporations developing AI in secret will go full Stockton Rush, since launching with a dangerous AI risks less profit loss than playing it safe does. We've already had this conversation.

However, extinction by AI takeover is way cooler than extinction by overpollution, in my opinion.

[–] pup_atlas@pawb.social 7 points 10 months ago

Regulate does not equal stop, or even really slow, for that matter. There are a number of measures we can mandate that wouldn’t slow any real research, but that would curtail malicious activity, like mandating some form of detection research to go alongside models, or pushing for better watermarking technology for genuine content.
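
The “watermark genuine content” idea is essentially a provenance check: the publisher binds a tag to the original bytes, and anyone can later verify whether a clip has been altered. Here is a minimal Python sketch of that flow, assuming a single secret key held by the publisher; real schemes (C2PA-style signed manifests, invisible pixel-level watermarks) are more involved, and the key, function names, and sample bytes below are illustrative only.

```python
import hmac
import hashlib

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def sign_content(data: bytes) -> str:
    """Return a hex tag binding these exact bytes to the publisher's key."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_content(data: bytes, tag: str) -> bool:
    """True only if the bytes match what was originally signed."""
    return hmac.compare_digest(sign_content(data), tag)

if __name__ == "__main__":
    clip = b"...raw video bytes..."          # stand-in for a real media file
    tag = sign_content(clip)
    print(verify_content(clip, tag))          # True: clip is unmodified
    print(verify_content(clip + b"x", tag))   # False: any edit breaks the tag
```

Because any single-byte edit changes the tag, a claim like “this is the genuine broadcast” becomes checkable after the fact, which is the kind of measure that can be mandated without slowing research.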

[–] turkalino@lemmy.yachts 7 points 10 months ago (1 children)

Hmm, I wonder what he greased these slopes with... butter? Lard? Margarine?

[–] kibiz0r@midwest.social 4 points 10 months ago

AI safety is not a slippery slope argument. It's a serious area of academic research. Check out some Robert Miles videos or something.

As the interview says, the head-in-the-sand-style rebuttal is akin to early climate change denial.

[–] uriel238@lemmy.blahaj.zone 6 points 10 months ago* (last edited 10 months ago)

Regulating means defining and proscribing activity the state asserts is harmful, such as putting packing and sell-by dates on meat to prevent selling hazardously old meat.

And yes, having to mind regulations absolutely cuts into profits and increases development time, especially once you consider how your regulations are going to be enforced. Since there are already markets for unethical AI applications, for instance, autonomous weapons platforms, some research and development programs are already clandestine so as to avoid close scrutiny. Since the US state is interested in some of them, it's already motivated not to look too closely.

Besides which, the whole federal regulatory sector is already captured and interested not in serving the public, but in serving stakeholders, hence why we're still waiting on net neutrality, and antitrust action on ISP regional monopolies.

[–] betterdeadthanreddit@lemmy.world 5 points 10 months ago (1 children)
[–] bratosch@lemm.ee 3 points 10 months ago

!!DUN DUN DUUUUUUUHN!!

[–] boatsnhos931@lemmy.world 3 points 10 months ago

You gotta bring the needle down in a stabbing motion to pierce the breastplate

[–] cultsuperstar@lemmy.world 3 points 10 months ago

Or did his deepfake say this?

[–] GilgameshCatBeard@lemmy.ca 3 points 10 months ago

Never going to happen. They found money in AI. It’s only going to get worse.

[–] rickdg@lemmy.world 0 points 10 months ago

Ban photoshop, I dare ya.

[–] kriz@slrpnk.net -1 points 10 months ago (1 children)

Pretty frustrating interview; I didn't grasp what his actual issues are with AI. I guess I'll look for other articles somewhere else.

[–] flyboy_146@lemmy.world 6 points 10 months ago

I read the article and I have no idea what you are referring to. I think the author laid out their reasoning pretty...

Oh wait. You didn't put the /s at the end, but it was implied?

Is it a whoosh over my head, or are you just not making sense? (no offense)

[–] hashferret@lemmy.world -1 points 10 months ago

Cause prohibition will totally stop AI girlfriend weebs

[–] wahming@monyet.cc -2 points 10 months ago

You also have to target the people who are building this technology

WTF is this nonsense take? So he's essentially trying to ban AI research entirely.