this post was submitted on 22 Mar 2024
497 points (93.8% liked)

[–] OmegaMouse@pawb.social 6 points 8 months ago (6 children)

Interesting article, and a worrying trend. Stamping a bit of text like 'Generated by Midjourney' is ridiculously weak protection though. I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

Just found the Wikipedia page for steganography. Have any AI companies tried using this technique, I wonder? πŸ€”
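The basic steganography idea is simple enough to sketch. Below is a purely illustrative least-significant-bit (LSB) example: each channel value changes by at most 1, which is invisible to a human but machine-readable. The function names and the fake pixel list are made up for this example; real watermarking schemes (and anything robust to re-encoding) are far more involved.

```python
# Minimal LSB steganography sketch: hide a text marker in the low bits
# of pixel channel values (a plain list of ints stands in for image data).

def embed(pixels, message):
    """Write each bit of `message` into the least significant bit of a pixel value."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the message bit
    return out

def extract(pixels, length):
    """Read `length` bytes back out of the LSBs, MSB-first per byte."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return data.decode()

pixels = [120, 64, 33, 200, 90, 15, 77, 148] * 16  # fake 128-value "image"
marked = embed(pixels, "AI")
print(extract(marked, 2))  # -> AI
```

Note each marked value differs from the original by at most 1, so the image looks identical, yet the marker survives only as long as nobody re-compresses or even slightly edits the pixels.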

[–] FlyingSquid@lemmy.world 22 points 8 months ago

The problem is that even if Midjourney did that, there will be other creators who have no such moral or ethical qualms about people using their software to make these fake photos without any sort of hidden or obvious data to show that they are fakes. And then there will be the ones with money from a state behind them, and possibly a very large library of surveillance photos for the AI to learn from.

[–] Olgratin_Magmatoe@lemmy.world 9 points 8 months ago (1 children)

I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

Said protection would also be hilariously weak. It would be easy for malicious actors to strip/alter the metadata of the image. And embedding the flag in the image itself is something that can be circumvented by using a model that doesn't apply any flag.
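To illustrate how weak metadata-based flags are: EXIF data lives in a JPEG's APP1 segment, and dropping that segment takes a few lines of byte-walking. This is a rough sketch, not a complete JPEG parser (it ignores standalone markers and stops at the entropy-coded data), but it shows the principle.

```python
# Sketch: strip EXIF from a JPEG by dropping APP1 (0xFFE1) marker segments.
# Illustrative only; a real tool (e.g. exiftool) handles many more cases.

def strip_exif(jpeg: bytes) -> bytes:
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]          # entropy-coded data: copy the rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xD9:           # EOI: end of image
            out += jpeg[i:i + 2]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")  # includes its own 2 bytes
        if marker != 0xE1:           # keep everything except APP1 (EXIF/XMP)
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)

# Tiny synthetic JPEG: SOI, APP1 with "Exif" payload, a DQT segment, EOI.
fake = b"\xff\xd8" + b"\xff\xe1\x00\x08Exif\x00\x00" + b"\xff\xdb\x00\x04\x01\x02" + b"\xff\xd9"
print(b"Exif" in strip_exif(fake))  # False
```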

We're about to live in a world where nobody can tell truth from fiction.

[–] Carrolade@lemmy.world 4 points 8 months ago (1 children)

We’re about to live in a world where nobody can tell truth from fiction.

I would argue that our long history of devising myths indicates we have always lived so.

[–] Olgratin_Magmatoe@lemmy.world 2 points 8 months ago* (last edited 8 months ago)

That's a fair assessment, but I think it's going to get a whole lot worse.

Before, to the degree that nobody could figure out the truth, it was largely due to lack of information/evidence. The future will instead have evidence manufactured for whatever opinion you like.

Specific programs can. You can probably train specific models and alter datasets to include them as well.

But we're past the point where photo and video is sufficient on its own. Especially when there's a possibility of state level actors benefiting.

[–] hansl@lemmy.world 3 points 8 months ago

There is the Content Authenticity Initiative, which keeps track of the source of an image (it was taken by this camera, etc.). It's technically impossible to fake, as it's validated, registered and traceable, but who knows. It's more a database of known images.

[–] jacksilver@lemmy.world 2 points 8 months ago

Yeah, the only real way to do it is to have people digitally sign their images, but it still comes down to a trust element. You need to trust the person who created/signed the original content. It also means getting content from third parties is going to be a lot harder in the scientific/historical communities of the world.
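The sign-and-verify flow being described looks roughly like this toy sketch. Real provenance schemes (e.g. C2PA) use asymmetric signatures, so verifiers never hold the signing key; HMAC is used here only because it's in the Python stdlib, and the key name is made up for the example.

```python
# Toy sketch of the trust chain: the creator signs a hash of the image
# bytes, and anyone holding the key can later verify nothing was altered.

import hashlib
import hmac

SIGNING_KEY = b"creator-private-key"   # hypothetical key, for illustration only

def sign_image(image_bytes: bytes) -> str:
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_image(image_bytes), signature)

photo = b"\x89PNG...raw image bytes..."
sig = sign_image(photo)
print(verify_image(photo, sig))                 # True
print(verify_image(photo + b"tampered", sig))   # False
```

The trust problem the comment raises is visible even here: the signature only proves the bytes haven't changed since signing, not that the signer photographed something real.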

[–] NeoNachtwaechter@lemmy.world 2 points 8 months ago

Have any AI companies tried using this technique I wonder?

Yes, I have read that they want to do something like that: stamp all images that their AI has created.

But of course it won't be hard to remove the stamp, if you want to.