this post was submitted on 09 Feb 2024
144 points (95.0% liked)

Technology


The AI Deepfakes Problem Is Going to Get Unstoppably Worse: Deepfakes are blurring the lines of reality more than ever before, and they're likely going to get a lot worse this year.

[–] General_Effort@lemmy.world 18 points 9 months ago (9 children)

Deepfake detection technology also needs to get a lot better and become much more widespread.

It's worrying how much utter nonsense is published on AI and how little actually helpful advice there is. I hope lawmakers don't do too much damage.

[–] TheFriar@lemm.ee 2 points 9 months ago (8 children)

Are you saying regulations would be harmful?

[–] General_Effort@lemmy.world 7 points 9 months ago (4 children)

A lot of harmful AI regulations have been proposed. Some seem designed to benefit well-heeled interest groups at the expense of society, while others seem to be pure populism. Watermarks are an especially worrying example of the latter.

Whether "regulation" in the abstract is harmful is not a sensible question to me. That's probably because my ideological outlook is different from most people here, which in turn is probably because my cultural background is different. I'll resist the urge to give a long, rambling explanation.

[–] TheFriar@lemm.ee 2 points 9 months ago (3 children)

Well, I am curious to hear at least a little about your different beliefs.

I mean, this is something I was thinking about earlier this week. In general, incrementalism and capitalism and neoliberalism are all harmful. Participating in them or advocating for their expansion while knowing overall that they're harmful concepts seems to go against my beliefs. But as I see it, we are stuck in this system. I think a big thing a lot of leftists get wrong is arguing all or nothing. "I refuse to participate because I don't believe in this system. I only advocate for the complete destruction of this system." It's very, very common.

Meanwhile, more harmful laws get passed because everyone who wants better sat out. Now, that mostly applies to voting, even on direct ballot measures. So it seems counterintuitive for me, as someone who doesn’t believe in the state, to advocate for regulations—a more robust state with more oversight powers. But expanded oversight is a bitter pill. It is necessary as long as this system exists. Advocating for less, again, in my position, would seem like my path forward. But I’d argue that incrementalism is less harmful than capitalism. And if the former can put a dent in the latter to safeguard against expanded harm, then I’m for it. Even though my “-ism” would supposedly preclude me from saying so.

I think regulating AI companies is necessary. I think watermarks are a pretty small price to pay for the basic concept of objective truth/proof of it. We are barreling toward a time where photographic evidence and audio evidence are suddenly all up for debate. Avoiding that future trumps any sort of concern I have for the industry. It’s a small step in the right direction, and I don’t have the solution or anything, but I think measures like this should be put into place.

[–] General_Effort@lemmy.world 3 points 9 months ago

I find the idea that anything in a modern economy could be unregulated simply incoherent. Property is acquired and transferred according to law. Contracts are governed by contract law. Whenever someone goes to court, they are appealing to one of the three branches of government. Enforcing those decisions involves another branch of the government.

I think that has to do with culture and historical experience. On the European continent, people emphasize the break that came with the French Revolution. British people talk about the Magna Carta and pretend that civil rights are something that somehow always existed.

US Americans also focus on discontinuities in some things: the American Revolution, the Civil War, and maybe even the civil rights movement. But the English Common Law always remained an unquestioned background. It's perhaps understandable if one forgets that the Common Law is law, and that the judicial system is government.

If you look at the German experience, it becomes impossible to ignore that everything can be different: the end of the Holy Roman Empire after the French Revolution, the Wars of Liberation, the codification of German civil law, Unification, the birth and end of the Weimar Republic, Nazi terror, the two German states, capitalist and socialist.

The only constant is that society continues on. It never collapses, no matter what the strain. People are not naturally selfish. When push comes to shove, they do their duty, which is not, as such, a good thing.

So, it's no surprise that Germany came up with Ordoliberalism. It emphasizes the role of government in creating the framework in which businesses exist. The government directly creates some markets and heavily influences others (e.g. illegal drug markets). Mind that there are many specific policies associated with this ideology that don't follow from this insight.

Of course, views that see government and its laws as the foundation of markets, or markets as a tool wielded for the public benefit, are hardly alien to the USA. It's implicit in the copyright clause of the US Constitution.


I'm now all out of steam. I remind you that lawmakers do not have the option of marking all AI-generated content; at best, they can require some services to do so.

[–] SomeGuy69@lemmy.world 1 points 9 months ago

Most of the harmful things AI could do are already illegal, so most new regulations that add stricter rules will unavoidably be damaging one way or another. For example, if we regulate too much, we leave all the progress to mega-corps. Or we punish curious people too much, or shift issues around instead of solving them. It's going to get wild, for sure. My 2 cents.

[–] General_Effort@lemmy.world 1 points 9 months ago

Me again, re:watermarks.

Fraudsters, liars, and even pranksters will not watermark their content, or will remove watermarks. The best you can do is get genAI services to implement one, which they already do. It's an insignificant business expense.

So you end up with a situation where most genAI content is marked, except precisely the content you actually want to identify. The net effect of watermarks is to make unmarked, fraudulent content more credible. It makes the situation worse.
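To make the removal point concrete, here is a deliberately oversimplified toy sketch (my own illustration, not any real service's scheme): a watermark hidden in pixel least-significant bits that the issuing service can verify, and that anyone can erase with a trivial transformation. Real schemes are statistical or frequency-domain, but they face the same removal problem in principle.

```python
MARK = b"genai"  # hypothetical watermark payload

def bits(data: bytes):
    """Yield the bits of `data`, most significant first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def embed(pixels: bytearray, mark: bytes) -> bytearray:
    """Write each bit of `mark` into the LSB of successive pixel bytes."""
    out = bytearray(pixels)
    for i, bit in enumerate(bits(mark)):
        out[i] = (out[i] & 0xFE) | bit
    return out

def detect(pixels: bytearray, mark: bytes) -> bool:
    """Check whether the pixel LSBs carry `mark`."""
    return all((pixels[i] & 1) == bit for i, bit in enumerate(bits(mark)))

def strip(pixels: bytearray) -> bytearray:
    """Zeroing the LSBs (visually imperceptible noise) destroys the mark."""
    return bytearray(b & 0xFE for b in pixels)

image = bytearray(range(64))        # stand-in for raw pixel bytes
marked = embed(image, MARK)
print(detect(marked, MARK))         # True: the service can verify its output
print(detect(strip(marked), MARK))  # False: anyone can remove the mark
```

The asymmetry is the whole argument: honest services pay the (tiny) cost of embedding, while anyone motivated to deceive just runs the equivalent of `strip`.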

My biggest worry is that we reach a situation similar to the war on drugs, where unthinking moral panic causes society to double down on harmful "solutions". You have to think about how this could possibly be enforced and against whom.

GenAI models are about the size of a movie. That is to say, they can be torrented just as easily. Stopping people from sharing non-watermarked generators would require an unprecedented level of internet surveillance. The people caught would, IMHO, be the same kind of people caught torrenting movies: mainly kids. The fraudsters can be prosecuted for fraud anyway, if you catch them. A seriously enforced watermarking law would, IMHO, only prosecute kids and other basically harmless people (though they may be using genAI to bully and harass their peers).
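The size comparison is easy to sanity-check with arithmetic. The parameter counts and movie bitrate below are ballpark assumptions of mine, not measurements of any specific model:

```python
# Rough file sizes: generative model checkpoints vs. a feature film.

def checkpoint_gib(params_billion: float, bytes_per_param: int = 2) -> float:
    """Size of a model stored at 16-bit (2-byte) precision, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 2**30

image_gen = checkpoint_gib(3.5)          # Stable-Diffusion-class image model
small_llm = checkpoint_gib(7)            # 7B-parameter language model
movie = 2 * 3600 * 5e6 / 8 / 2**30       # 2 h film at ~5 Mbit/s

print(f"image generator ~ {image_gen:.1f} GiB")   # ~6.5 GiB
print(f"7B LLM          ~ {small_llm:.1f} GiB")   # ~13 GiB
print(f"HD movie        ~ {movie:.1f} GiB")       # ~4.2 GiB
```

Under these assumptions, a locally runnable generator is within a factor of a few of a single HD movie file, i.e. trivially torrentable.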

Training AI models is not as expensive as one may think. The expensive part is the custom-made training data, as well as the research; the trial and error. Even something as massive as ChatGPT could be trained for less than $5 million. An image generator can probably be trained for less than $100k. In light of that report of someone defrauding a company out of $25 million, that's a cheap investment; maybe something you could monetize on the dark net. You'd have to successfully crack down on the dark net in unprecedented ways.
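A back-of-envelope version of that cost estimate, using the common approximation of roughly 6 x parameters x training-tokens floating-point operations for one training run. The GPU throughput and hourly price are assumptions, and real runs pay extra for hardware inefficiency and failed experiments:

```python
# Estimate the compute cost of one training run via the ~6*N*D FLOPs rule.

def training_cost_usd(params: float, tokens: float,
                      gpu_tflops: float = 300,        # assumed sustained TFLOP/s
                      usd_per_gpu_hour: float = 2.0)  -> float:
    flops = 6 * params * tokens                  # total training FLOPs
    gpu_hours = flops / (gpu_tflops * 1e12 * 3600)
    return gpu_hours * usd_per_gpu_hour

# GPT-3-class run: 175B parameters, 300B training tokens.
cost = training_cost_usd(175e9, 300e9)
print(f"${cost:,.0f}")  # well under the $5 million figure above
```

The point is not the exact number, but that the raw compute for a single run is within reach of an organized fraud operation, let alone a state actor.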

You'd need close monitoring of anything happening with cloud computing. You'd need to require licenses for high-end GPUs.


The problem isn't that new in principle. I remember police advice to hang up and call back on a known number if someone identifies themselves as a police officer on the phone. I also remember the 1964 movie Fail Safe, a Cold War classic. A squadron of US bombers is accidentally sent on a nuclear raid against Moscow. IDK if the depiction of military practice is in any way accurate. The bombers pass the fail-safe point, after which they can no longer be recalled. In an attempt to stop them, they are radioed by the president and even their wives. They ignore it as trained, because it might be a Soviet trick imitating the voices. So, IDK if bomber crews were really ever trained to expect voice imitators, but even in the early '60s it must have seemed sufficiently credible to movie audiences.
