this post was submitted on 22 Jan 2024
273 points (97.9% liked)

Technology


Alternative link: https://archive.is/qgEzK

all 41 comments
[–] Jaysyn@kbin.social 87 points 10 months ago (6 children)

Surprise, that's completely unenforceable.

Yet more out of touch legislators working with things they can't even begin to understand.

(And I'm not shilling for fucking AI here, but let's call a spade a spade.)

[–] Max_P@lemmy.max-p.me 22 points 10 months ago (4 children)

What baffles me is that those lawmakers think they can just legislate any problem away.

So okay, California requires it. None of the other states do. None of the rest of the Internet does. It doesn't fix anything.

They act like the Internet is like cable TV, all American companies that "provide" services to end users.

[–] echo64@lemmy.world 36 points 10 months ago

Whilst I agree with the other OP, this point is just wrong.

Replace "california" in your argument with "European union" and the whole thing just crumbles away. State legislation absolutely has a wider effect than the state it originates in.

[–] PM_Your_Nudes_Please@lemmy.world 12 points 10 months ago (1 children)

Inb4 AI devs just slap a generic “click this box to confirm you are not in California” verification on their shit.

[–] sorghum@sh.itjust.works 1 points 10 months ago

If the server isn't even in California, would it even apply to them or be enforceable against them?

[–] 50gp@kbin.social 5 points 10 months ago (1 children)

so you're saying nothing should be done? great idea

[–] gsfraley@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (2 children)

Sure, but this is less than nothing. It applies literally zero friction to AI and is completely and totally unenforceable. AND it's a laughing stock for everyone and sucks the oxygen out of better AI regulation groups and think tanks.

[–] Imgonnatrythis@sh.itjust.works 10 points 10 months ago

Why? If a California corporation is pumping out AI content and it doesn't have watermarks, why can't this be enforced? It's not an all-purpose solution, but I fail to see how it fails completely.

[–] FatCrab@lemmy.one 1 points 10 months ago

This is actually an effective measure when you sit down and actually think about it from a policy perspective. Right now, the biggest issue with AI-generated content for the corporate side is that there is no IP right in the generated content. Private enterprise generally doesn't like distributing content that it doesn't have the ability to exercise complete control over. However, distributing generated content without marking it as generated reduces that risk outlay, potentially enough to make the value calculus swing in favor of its use. People will just assume there are rights in the material. Now, if you force this sort of marking, that heavily alters the calculus.

Now people will say wah wah wah, no way to really enforce it, people will lie, etc. But that's true for MOST of our IP laws. Nevertheless, they prove effective at accomplishing many of their intents. The majority of private businesses are not going to intentionally violate regulatory laws if they can help it, and when they do, it's more often than not because they think they've found a loophole but were wrong. And yes, that's even accounting for and understanding that there are many examples of illegal corporate activity.

[–] tyler@programming.dev 3 points 10 months ago

They call it the California effect for a reason.

http://eprints.lse.ac.uk/42097/1/__Libfile_repository_Content_Neumayer, E_Neumayer_Does _California_effect_2012_Neumayer_Does _California_effect_2012.pdf

[–] assassin_aragorn@lemmy.world 8 points 10 months ago

I'm not so sure. A lot of environmental laws require companies to self-report exceeding limits, and they actually do. It was a common thing for my contact engineer colleagues to be called up at night to calculate release amounts because their unit had an upset.

A law like this would force companies to at least pretend to comply. None can really say "we're not going to because you can't catch us".

[–] tsonfeir@lemm.ee 8 points 10 months ago

Watermarks? Super important. Helping the unhoused though, nooooo.

[–] Brkdncr@lemmy.world 6 points 10 months ago (1 children)

Hmm, technically speaking we could require images to be digitally signed, tie that to a certificate authority, and then browsers could display a "this image is not trusted" warning like we do for HTTPS issues.

People who don't source their images properly would get their cert revoked.

Would be a win for photo attribution too.
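A rough sketch of what the browser-side check could look like, assuming a detached Ed25519 signature shipped next to the image and a publisher key already validated through some trust chain (the certificate-authority machinery itself is hand-waved here):

```python
# Minimal sketch: verify a detached Ed25519 signature over an image file.
# Assumes pub_key_bytes was already validated against some trusted CA chain.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def image_trust_status(image_bytes: bytes, signature: bytes,
                       pub_key_bytes: bytes) -> str:
    pub_key = Ed25519PublicKey.from_public_bytes(pub_key_bytes)
    try:
        pub_key.verify(signature, image_bytes)  # raises if tampered/unsigned
        return "image signed by a trusted source"
    except InvalidSignature:
        return "WARNING: this image is not trusted"
```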

[–] Gutless2615@ttrpg.network -3 points 10 months ago (1 children)

This comment shows all the thirty seconds of thought your “Hmm” implies.

[–] Brkdncr@lemmy.world 6 points 10 months ago

You also had 30 seconds but chose to insult instead of contribute. See you at the next comment section.

[–] RobotToaster@mander.xyz 4 points 10 months ago

Even if it was enforceable, there are watermark removal AI tools.

[–] bluGill@kbin.social 2 points 10 months ago

It is enforceable. Not in all cases, probably not even in the majority, but it only takes a few examples being hit with large fines for everyone doing legal things to take notice. Often you can find enough evidence to get someone to confess to using AI, and that is all the courts need.

Scammers of course will not put this in, but they are already breaking the law, so this might be - like tax evasion - a way to get scammers you can't get for anything else.

[–] turkalino@lemmy.yachts 33 points 10 months ago (2 children)

Only gonna make things more difficult for good actors while doing absolutely nothing to bad actors

[–] ook_the_librarian@lemmy.world 6 points 10 months ago (1 children)

That's true, but it would be nice to have a codified way of applying a watermark denoting AI. I'm not saying the government of CA is the best consortium, but laws are one way to get a standard.

If a compliant watermarker is then baked into the programs designed for good actors, that's a start.

[–] turkalino@lemmy.yachts 10 points 10 months ago (2 children)

It would be as practical for good actors to simply state an image is generated in its caption, citation, or some other preexisting method. Good actors will retransmit this information, while bad actors will omit it, just like they’d remove the watermark. At least this way, no special software is required for the average person to check if an image is generated.

Bing Image Creator already implements watermarks, but it is trivially easy for me to download an image I generated, remove the watermark, and proceed with my ruining of democracy /s

[–] ook_the_librarian@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

I wasn't thinking of a watermark like someone's signature. More of a crypto signature most users couldn't detect, not a watermark that could be removed with visual effects. Something most people don't know is there, like a printer's anti-counterfeiting dots.

I don't want to use the word blockchain, but there should be some way that if you want to take a fake video created by someone else, you're going to have a serious math problem on your hands to remove the fingerprints of AI. That way any viral video of unknown origin could easily be determined to be AI without any "look at the hands" arguments.

I'm just saying, a solution only for the good guys isn't always worthless. I don't actually think what I'm saying is too feasible (especially as written). Rules for the good guys only aren't always about taking away freedom; sometimes they're about normalizing discourse. Although my argument is not particularly good here, as this is a CA law, not a standard. I would like the issue at least discussed at a joint AI consortium.
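The low-effort end of that idea is just signing the pixel data and stashing the result where casual viewers won't look. A minimal sketch with Pillow, assuming a hypothetical secret key held by the generator; note the obvious weakness that a metadata strip or re-encode destroys it, which is exactly why the comment above reaches for something mathematically harder to remove:

```python
# Sketch: embed an HMAC of the pixel data in a PNG text chunk.
import hashlib
import hmac

from PIL import Image, PngImagePlugin

SECRET = b"ai-generator-key"  # hypothetical secret held by the generator

def tag_as_ai(src: str, dst: str) -> None:
    img = Image.open(src)
    sig = hmac.new(SECRET, img.tobytes(), hashlib.sha256).hexdigest()
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-signature", sig)  # invisible to casual viewers
    img.save(dst, "PNG", pnginfo=meta)

def looks_ai_tagged(path: str) -> bool:
    img = Image.open(path)
    sig = img.text.get("ai-signature")  # .text holds PNG text chunks
    if sig is None:
        return False  # stripped metadata defeats this trivially
    expected = hmac.new(SECRET, img.tobytes(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```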

[–] Zoboomafoo@slrpnk.net 1 points 10 months ago

If your plan requires good actors to put in extra effort, it's a bad plan

[–] tyler@programming.dev 1 points 10 months ago

How in the world would this make anything more difficult for good actors?

[–] capital@lemmy.world 32 points 10 months ago (2 children)

Watermarking AI-generated content might sound like a practical approach for legislators to track and regulate such material, but it's likely to fall short in practice. Firstly, AI technology evolves rapidly, and watermarking methods can become obsolete almost as soon as they're developed. Hackers and tech-savvy users could easily find ways to remove or alter these watermarks.

Secondly, enforcing a universal watermarking standard across all AI platforms and content types would be a logistical nightmare, given the diversity of AI applications and the global nature of its development and deployment.

Additionally, watermarking doesn't address deeper ethical issues like misinformation or the potential misuse of deepfakes. It's more of a band-aid solution that might give a false sense of security, rather than a comprehensive strategy for managing the complexities of AI-generated content.

This comment brought to you by an LLM.

[–] cmnybo@discuss.tchncs.de 22 points 10 months ago (1 children)

It would also be impossible to force a watermark on open-source AI image generators such as Stable Diffusion, since someone could just download the code, disable the watermark function, and compile it, or just use an old version.

[–] bluGill@kbin.social 9 points 10 months ago

You can do that, but if you are in California you have just broken the law. If California enforces the law, you will see projects make a big deal about this, since users can be arrested for violating the law if they don't handle it correctly. Most likely the watermark just gets turned on by default for all versions, but there is also the possibility of a large warning about turning it off. Note that if you go with a warning, nobody from your project should travel to California, as you would then be liable for helping someone violate the law.

[–] Tak@lemmy.ml 8 points 10 months ago

Plus, what if the creator simply doesn't live in California? What are they gonna do about it?

[–] schnurrito@discuss.tchncs.de 19 points 10 months ago (1 children)

https://en.wikipedia.org/wiki/Evil_bit
[–] wikibot@lemmy.world 18 points 10 months ago

Here's the summary for the wikipedia article you mentioned in your comment:

The evil bit is a fictional IPv4 packet header field proposed in a humorous April Fools' Day RFC from 2003, authored by Steve Bellovin. The Request for Comments recommended that the last remaining unused bit, the "Reserved Bit" in the IPv4 packet header, be used to indicate whether a packet had been sent with malicious intent, thus making computer security engineering an easy problem – simply ignore any messages with the evil bit set and trust the rest.
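As it happens, scapy really does name IPv4's reserved flag "evil", so a "compliant" packet under RFC 3514 is a one-liner (the address below is just a documentation placeholder):

```python
# RFC 3514 in practice: scapy names the IPv4 reserved bit "evil".
from scapy.all import ICMP, IP

pkt = IP(dst="192.0.2.1", flags="evil") / ICMP()  # malicious intent declared
pkt.show()  # building and inspecting needs no root; sending would
```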


[–] QuadratureSurfer@lemmy.world 15 points 10 months ago (2 children)

The problem here will be when companies start accusing smaller competitors/startups of using AI when they haven't used it at all.

It's getting harder and harder to tell whether a photograph is AI generated. Sometimes it's obvious, but it makes you second-guess even legitimate photographs of people because you notice that they have six fingers or their face looks a little off.

A perfect example of this was posted recently, where 80-90% of people thought the AI pictures were real and the real pictures were AI generated.

https://web.archive.org/web/20240122054948/https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

And where do you draw the line? What if I used AI to remove a single item in the background like a trashcan? Do I need to go back and watermark anything that's already been generated?

What if I used AI to upscale an image or colorize it? What if I used AI to come up with ideas, and then painted it in?

And what does this actually solve? Anyone running a misinformation campaign is just going to remove the watermark and it would give us a false sense of "this can't be AI, it doesn't have a watermark".

The actual text in the bill doesn't offer any answers. So far it's just a statement that they want to implement something "to allow consumers to easily determine whether images, audio, video, or text was created by generative artificial intelligence."

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942

[–] Darkenfolk@dormi.zone 2 points 10 months ago (1 children)

I wouldn't really call that a perfect example; they really went out of their way to edit the "real" people's photos to look unrealistically smooth.

I mean, yeah, technically it's a "real people vs AI people" take, but realistically it's a "fake photo vs fake photo" take.

[–] QuadratureSurfer@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

I don't agree that it's a fake vs fake issue here.

Even if the "real" photos were touched up in Lightroom or Photoshop, those are tools that actual photographers use.

It goes to show that there are cases where photos of real people look more AI generated than not.

The problem here is that we start second guessing whether a photo was AI generated or not and we run into cases where real artists are being told that they need to find a "different style" to avoid it looking too much like AI generated photos.

If that wasn't a perfect example for you then maybe this one is better: https://www.pcgamer.com/artist-banned-from-art-subreddit-because-their-work-looked-ai-generated/

Now think of what can happen to an artist if they publish something in California that has a style that makes it look somewhat AI generated.

The problem with this law is that it will be weaponized against certain individuals or smaller companies.

It doesn't matter if they can eventually prove that the photo wasn't AI generated or not. The damage will be done after they are put through the court system. Having a law where you can put someone through that system just because something "looks" AI generated is a bad idea.

Edit: And the intent of that law is also to include AI text generation. Just think of all the students being accused of using AI for their homework and how reliable other tools have been for determining whether their work is AI generated or not.

We're going to unleash that on authors as well?

[–] Tja@programming.dev 1 points 10 months ago

I agree completely.

To make it more ironic, one of the popular uses of AI is to remove watermarks...

[–] Eggyhead@kbin.social 8 points 10 months ago

I honestly wouldn’t mind AI imagery simply being labeled as such.

[–] JCreazy@midwest.social 6 points 10 months ago

If your computer is connected through a VPN to a different state, does that mean you can get around it?

[–] randon31415@lemmy.world 5 points 10 months ago

... and also requiring abortion doctors to carry medicine that reverses abortion if a woman wants it.

Come on, Dems! Republicans are blowing us out of the water on requiring absurd technology that doesn't exist. We should try to enforce the Three Laws of Robotics!

[–] skarlow181@lemmy.world 4 points 10 months ago

Completely impractical. Whether something is AI generated, manipulated with Photoshop, or manipulated in the darkroom really doesn't make a difference. AI isn't special here; photo manipulation is about as old as the photograph itself. It would be much better to spend the effort on signing authentic images, including a whole chain of trust up to the actual camera. Luckily, the Content Authenticity Initiative is already working on that.
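The chain-of-trust idea in miniature, sketched with Ed25519 keys. Nothing here matches the CAI's actual C2PA format; it's just the shape of it: the manufacturer signs the camera's key, the camera signs the image, and a verifier only needs the manufacturer's root key:

```python
# Toy chain of trust: manufacturer root key -> per-camera key -> image.
# Illustrative only; the real C2PA spec is far more involved.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

root = Ed25519PrivateKey.generate()      # held by the manufacturer
camera = Ed25519PrivateKey.generate()    # baked into one camera at the factory
camera_pub = camera.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
camera_cert = root.sign(camera_pub)      # manufacturer vouches for the camera

image = b"...raw sensor bytes..."        # stand-in for a real capture
image_sig = camera.sign(image)           # camera vouches for the image

def verify(root_pub: Ed25519PublicKey, camera_pub: bytes, camera_cert: bytes,
           image: bytes, image_sig: bytes) -> bool:
    try:
        root_pub.verify(camera_cert, camera_pub)  # is this a real camera key?
        Ed25519PublicKey.from_public_bytes(camera_pub).verify(image_sig, image)
        return True
    except InvalidSignature:
        return False

print(verify(root.public_key(), camera_pub, camera_cert, image, image_sig))
```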

[–] indigomirage@lemmy.ca 4 points 10 months ago* (last edited 10 months ago)

Given how unenforceable this is (a sin of omission, or sourcing from another jurisdiction, is all that's needed to skirt it), will we be seeing a formalized "certificate of authenticity" demanded by people to highlight things that are not AI?

(Maybe NFTs will finally find their utility? I don't know...)

[–] AnonTwo@kbin.social 3 points 10 months ago* (last edited 10 months ago)

It'd be nice to trace an artwork back to its source, but I don't think this is actually practical.