this post was submitted on 22 Mar 2024
497 points (93.8% liked)

Technology

[–] CosmicCleric@lemmy.world 21 points 8 months ago (10 children)

From the article...

The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.

Fundamentally, at a meta level, the issue is this: should people be allowed to use AI to deceive other people?

Should all realistic AI generated things be labeled as such?

[–] Drewelite@lemmynsfw.com 31 points 8 months ago (9 children)

There's no realistic way to enforce that. The answer is to go the other way. We used to have systems in place for accountability of information. We need to bring back institutions for journalism and historians to be trustworthy sources that cite their work and show their research.

[–] CosmicCleric@lemmy.world 6 points 8 months ago* (last edited 8 months ago) (8 children)

There’s no realistic way to enforce that.

You can still mandate through laws that any AI-generated product carry a label identifying it as such. We do the same today with other manufactured and sold products (recycling icons, etc.).
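For what it's worth, a machine-readable form of such a label already exists: the IPTC "Digital Source Type" vocabulary defines a URI for AI-generated ("trained algorithmic") media. A minimal sketch of checking a file for that marker, assuming the label is embedded as plain XMP metadata (the function name and naive byte-scan are illustrative; real provenance schemes such as C2PA use cryptographically signed manifests, not string matching):

```python
# IPTC Digital Source Type URI for wholly AI-generated media (real vocabulary
# term). The byte-scan approach below is a simplification for illustration.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_labeled(image_bytes: bytes) -> bool:
    """Return True if the file's embedded metadata contains the IPTC
    'trained algorithmic media' marker. Such a tag is trivially
    strippable, which is the core enforcement difficulty."""
    return AI_SOURCE_TYPE.encode("ascii") in image_bytes
```

The check succeeds only if the creator leaves the tag in place, which is exactly why enforcement is the hard part.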

As far as enforcement goes, the public themselves would ultimately act as enforcers, or at least assist in enforcement, as the recent British royal family photo scandal suggests.

But ultimately, humanity has to start considering laws that affect the whole species, ones that don't stop at any individual country's border.

[–] Drewelite@lemmynsfw.com 7 points 8 months ago (1 children)

Don't get me started on the sham that is recycling icons 😂

I'm all for regulation that would require media companies to disclose that something is fake if it could reasonably be taken as truth. But that doesn't solve the problem of anyone with a computer pumping fake images onto the web. What you're suggesting would require a world government that has chip level access to anything with a CPU.

As for the public enforcing the truth: that's what I'm suggesting. Assume anything you see online could be fake, and only trust institutions that back up their media with verifiable facts.

[–] CosmicCleric@lemmy.world 1 points 8 months ago

What you’re suggesting would require a world government that has chip level access to anything with a CPU.

Well, not something that harsh, but I think we're looking at losing some of the faux anonymity that we have (no more sock puppet accounts, etc.).

Most people haven't thought far enough ahead about what this means, all of the ramifications, if we let AI run rampant in the human 'public square'.

Instead of duplicating my other comment on this subject, I'll just link to it here.
