this post was submitted on 03 Jan 2024
141 points (87.3% liked)


"To prevent disinformation from eroding democratic values worldwide, the U.S. must establish a global watermarking standard for text-based AI-generated content," writes retired U.S. Army Col. Joe Buccino in an opinion piece for The Hill. While President Biden's October executive order requires watermarking of AI-derived video and imagery, it offers no watermarking requirement for text-based content. "Text-based AI represents the greatest danger to election misinformation, as it can respond in real-time, creating the illusion of a real-time social media exchange," writes Buccino. "Chatbots armed with large language models trained with reams of data represent a catastrophic risk to the integrity of elections and democratic norms."

Joe Buccino is a retired U.S. Army colonel who serves as an A.I. research analyst with the U.S. Department of Defense's Defense Innovation Board. He served as U.S. Central Command communications director from 2021 until September 2023. Here's an excerpt from his piece:

Watermarking text-based AI content involves embedding unique, identifiable information -- a digital signature documenting the AI model used and the generation date -- into the metadata of generated text to indicate its artificial origin. Detecting this digital signature requires specialized software, which, when integrated into platforms where AI-generated text is common, enables the automatic identification and flagging of such content. This process gets complicated in instances where AI-generated text is manipulated slightly by the user. For example, a high school student may make minor modifications to a homework essay created through ChatGPT-4. These modifications may drop the digital signature from the document. However, that kind of scenario is not of great concern in the most troubling cases, where chatbots are let loose in massive numbers to accomplish their programmed tasks. Disinformation campaigns require such a large volume of chatbots that it is no longer feasible to modify their output once released.
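To make the scheme concrete, here is a minimal sketch of signed metadata of this kind. It uses a symmetric HMAC purely for illustration; a real standard would specify asymmetric keys and key management, and every name here is invented:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held signing key"  # illustrative only

def watermark(text: str, model: str, generated_on: str) -> dict:
    """Wrap generated text with metadata and a signature over both."""
    metadata = {"model": model, "generated_on": generated_on}
    payload = json.dumps({"text": text, "metadata": metadata}, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "metadata": metadata, "signature": signature}

def verify(doc: dict) -> bool:
    """Recompute the signature; any edit to text or metadata breaks it."""
    payload = json.dumps({"text": doc["text"], "metadata": doc["metadata"]}, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, doc["signature"])

doc = watermark("An essay...", model="gpt-4", generated_on="2024-01-03")
assert verify(doc)
doc["text"] += " (lightly edited)"   # the student's "minor modification"
assert not verify(doc)               # signature no longer matches
```

Note how the student scenario above plays out in the sketch: one appended phrase invalidates the signature, which is precisely why minor edits defeat this kind of detection.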

The U.S. should create a standard digital signature for text, then partner with the EU and China to lead the world in adopting this standard. Once such a global standard is established, the next step will follow -- social media platforms adopting the metadata recognition software and publicly flagging AI-generated text. Social media giants are sure to respond to international pressure on this issue. The call for a global watermarking standard must navigate diverse international perspectives and regulatory frameworks. A global standard for watermarking AI-generated text ahead of 2024's elections is ambitious -- an undertaking that encompasses diplomatic and legislative complexities as well as technical challenges. A foundational step would involve the U.S. publicly accepting and advocating for a standard of marking and detection. This must be followed by a global campaign to raise awareness about the implications of AI-generated disinformation, involving educational initiatives and collaborations with the giant tech companies and social media platforms.

In 2024, generative AI and democratic elections are set to collide. Establishing a global watermarking standard for text-based generative AI content represents a commitment to upholding the integrity of democratic institutions. The U.S. has the opportunity to lead this initiative, setting a precedent for responsible AI use worldwide. The successful implementation of such a standard, coupled with the adoption of detection technologies by social media platforms, would represent a significant stride towards preserving the authenticity and trustworthiness of democratic norms.

Excerpt credit: https://slashdot.org/story/423285

top 47 comments
[–] rodbiren@midwest.social 84 points 10 months ago (2 children)

Good luck watermarking plaintext and locally run models. There is no good option. If you want certainty that you are dealing with a human you lose privacy. If you want privacy you cannot know where the plain text came from unless you sign each file cryptographically. Then you only know it came from a certain source, but there is no guarantee how that source made the text. Welcome to the new world.

[–] tpihkal@lemmy.world 18 points 10 months ago (5 children)

So what happens when we can't trust everything we read on the Internet anymore?

[–] kent_eh@lemmy.ca 43 points 10 months ago (1 children)

Spoiler alert: we've never been able to trust everything we read on the internet.

[–] Serinus@lemmy.world 3 points 10 months ago (1 children)

In relative terms we could.

The amount of disinformation and propaganda is about to become obscene.

[–] fishos@lemmy.world 14 points 10 months ago (1 children)

Except, no, you can't. The whole "you swallow seven spiders a year in your sleep" thing was a rumor created specifically to show how easy it is to start rumors. And how many times has that little gem been floating around the internet? Or how about how often you hear experts say that people talking about their given field on the Internet are flat out wrong, but they sound charismatic, so they get the upvote?

The Internet is full of DATA. It's always been up to you to parse that info and decide what's credible and what's not. The difference now is that the critical thinking required to even access the Internet is basically nil and now everyone is on there.

[–] Serinus@lemmy.world 0 points 10 months ago (1 children)

I guess you don't know what's coming. Is there a lot of misinformation now? Certainly. But I'd say less than half the data is false.

In the coming months you're going to start seeing social media taken over by AI. You're going to see pointed political "opinions" followed by several comments agreeing with the point being pushed. These are going to outnumber human comments.

Currently, shills absolutely exist, but they're far outnumbered by genuine people. That's about to change. Money is going to buy public opinion on a whole new scale unless we learn to ignore anonymous social media.

[–] fishos@lemmy.world 2 points 10 months ago (1 children)

If you think that doesn't already exist, you've been living under a rock. The Dead Internet Theory is pretty old at this point. I'm not saying you're wrong, I'm saying that some of us have seen this trend coming long before AI was a buzzword and have been watching it already happen around us. I very much know what is coming because I've already watched it happen.

[–] Serinus@lemmy.world 1 points 10 months ago (1 children)

Yeah, I mean 2015 was a big turning point, but this one should be bigger. It's not black and white.

[–] fishos@lemmy.world 1 points 10 months ago

Exactly, it's not black and white. It's gray and grayer. And you're telling me "it's gonna be black!" and I'm telling you "it's already gray, and it's about to become even grayer". This isn't a turning point either. It's just a predictable progression down a path that we started on decades ago. Some of us have been raising the alarm over this for a very long time. You're coming to the trenches fresh faced trying to school me and I'm already war torn and fatigued.

[–] rodbiren@midwest.social 22 points 10 months ago (1 children)

It's not even about trust. It's that I am confident I will soon have no clue who is a real life human being anymore. Autogenerated images, video, and text are practically in their infancy but already sit in the uncanny valley where it is impossible to determine what is real and what is not. Imagine 5 years from now when perfectly lifelike high res video of practically anything you can imagine can be generated on the fly. Essentially the only thing I will have any certainty on is what I can witness in person. Or, if I have a circle of trust, I can choose to believe content published by certain organizations or groups.

It may actually push us away from tech and back to the community, which could be good assuming we survive the transition.

[–] hai@lemmy.ml 5 points 10 months ago

For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much — the wheel, New York, wars and so on — whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man — for precisely the same reasons.

Looks pretty good to be a dolphin right now.

[–] snooggums@kbin.social 14 points 10 months ago

That has been the internet since it was first created.

[–] admin@lemmy.my-box.dev 2 points 10 months ago

The same thing that has been happening for the past 2 decades.

[–] BastingChemina@slrpnk.net 0 points 10 months ago

I see that as a great opportunity for journalism.

[–] kibiz0r@lemmy.world 8 points 10 months ago

There are ways to watermark plaintext. But they're relatively brittle, because they lose signal as the output is further modified, and you also need to know which specific LLM's watermark you're looking for.

So it's not a great solution on its own, but it could be part of something more comprehensive.
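For the curious, here is a stripped-down sketch of how detection works in the "green list" schemes (e.g., Kirchenbauer et al., 2023): generation is biased toward tokens that a keyed hash of the previous token marks "green," and detection just counts them. This toy version uses whole words instead of real tokenizer tokens, and SHA-256 as the keying hash; both are simplifications:

```python
import hashlib
from math import sqrt

GAMMA = 0.5  # fraction of the vocabulary that is "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-random green/red split, keyed on the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GAMMA * 256

def detect(text: str) -> float:
    """z-score of the green-token count: near 0 for ordinary text,
    large and positive for text sampled with a green-list bias."""
    tokens = text.lower().split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    return (greens - GAMMA * n) / sqrt(GAMMA * (1 - GAMMA) * n)
```

Paraphrasing replaces tokens and drags the z-score back toward zero, which is exactly the brittleness described above; and you need the specific model's hashing scheme to run the test at all.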

As for non-plaintext file formats...

A simple signature would indeed give us a source but not method, but I think that's probably 90% of what we care about when it comes to mass disinformation. If an article or an image is signed by Reuters, you can probably trust it. If it's signed by OpenAI or Stability, you probably can't. And if it's not signed at all or signed by some rando, you should remain skeptical.
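As a sketch of that source-not-method point, using the third-party `cryptography` package (the publisher and article here are invented for illustration):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative only: a publisher signs its output with a private key.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

article = b"Reuters: election results certified."
signature = publisher_key.sign(article)

try:
    public_key.verify(signature, article)
    print("Signed by the holder of this key.")
except InvalidSignature:
    print("Not signed by this key, or modified after signing.")
```

Verification proves only that the holder of the key signed those exact bytes; it says nothing about whether a human or a model produced them.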

But there are efforts like C2PA that include a log of how the asset was changed over time, providing a much more detailed explanation of what was done explicitly by humans vs. generative automated tools.

I understand the concern about privacy, but it's not like you have to use a format that supports proving that an image is legit. But if you want to prove that it is legit, then you have to provide something that grounds it in reality. It doesn't have to be personally-identifying. It could just be a key baked into your digital camera (assuming that the resulting signature is strong enough that it's computationally expensive to try to reverse-engineer the key and find who bought the camera).

If you think about it, it's kind of crazy that we've made it this far with a trust model that's no more sophisticated than "I can tell from the pixels and from seeing quite a few shops in my time".

[–] Jaysyn@kbin.social 37 points 10 months ago (1 children)

This is a (near useless) solution looking for a problem.

[–] huginn@feddit.it 14 points 10 months ago* (last edited 10 months ago)

The problem is obvious and it's one that even the companies making the LLMs want to solve so they don't poison their models.

However the solution is absurd. Watermarking plain text is just not going to work. Any edits would change the signature.

[–] treefrog@lemm.ee 21 points 10 months ago* (last edited 10 months ago)

People have already mentioned local models. Also foreign powers that want to interfere in democratic elections wouldn't be stopped by this.

2024 is going to be wild for sure. But I see no way to get everyone on board with global watermarking.

[–] CodeName@infosec.pub 21 points 10 months ago (1 children)

I think the Boomers are a lost cause when it comes to this, but you need to teach younger people critical thinking. You need to get your news from trusted sources. You should not be blindly forming opinions based on random facebook pages or reddit comment chains. Every single thing you read on the open internet should be treated with suspicion. Watermarking is just too easy to get around.

[–] j4k3@lemmy.world 4 points 10 months ago

Idiots are conditioned to be idiots. It made the best consumers. Now it is falling apart. They will start WW3 to kill everyone and start over after raiding and pillaging. It is always a war against the peasantry to perpetuate the illusion of exceptionalism.

[–] Dirk@lemmy.ml 17 points 10 months ago* (last edited 10 months ago) (1 children)
```
Hey ChatGPT, please generate a watermark matching the
global watermarking standard for text-based AI-generated
content and add it to this valid, non-AI-generated text:

[text here]
```

"Hey $politician, why do you use AI to generate your speech? I have proof! The watermark does not lie!"

[–] kibiz0r@lemmy.world 7 points 10 months ago

Not quite how digital signatures work, but not far off from a likely scenario once issued keys start getting compromised and used to spread convincing images for a short period before being invalidated. Your uncle on Facebook: "They said this image was authentic yesterday, and now they say it isn't! Who is making these decisions?!"

[–] trackcharlie@lemmynsfw.com 14 points 10 months ago

This doesn't matter.

I can download and run my own build without any watermarking or I can download a watermarking version and remove the watermarking function.

This is wholly inept and indicates a fundamental misunderstanding of how the technology even functions.

The ONLY option is to force education on the people so they learn to think critically. But that would be objectively bad for every elected official, since the only reason most of them have their jobs is that their constituents either can't think critically or don't follow up on their 'promises' post-election.

[–] exu@feditown.com 13 points 10 months ago (1 children)

Imo this would be impossible to implement. The user can just remove whatever mark was inserted.

I'll also leave this here: https://github.com/ggerganov/llama.cpp

[–] Not_mikey@lemmy.world 1 points 10 months ago (1 children)
[–] fishos@lemmy.world 6 points 10 months ago

Which requires you to implement the watermark saying you're an AI. Just... don't. If a regular person can make a watermark saying they are a real person, what's to stop an AI from doing the same? What can the human do that the AI can't? Unless you go down the draconian "everyone has a real ID linked to their digital persona" route. And what's to stop an AI from creating the text and a human from copying it and posting it as their own? Click farms already exist.

[–] thbb@lemmy.world 10 points 10 months ago* (last edited 10 months ago) (3 children)

I'm afraid forcing watermarking of generated content is doomed to fail, for two reasons: first, it has to be voluntary; second, watermarking can always be removed if one does not care about preserving the exact content.

Rather, I believe systematically signing original content may alleviate some of the issues created by algorithmic content generation.

[–] JackGreenEarth@lemm.ee 1 points 10 months ago

I can just use the Llama model that runs on my computer now for human-like text without a watermark.

[–] mutant_zz@lemmy.world 1 points 10 months ago

The research suggests it will be quite hard to remove in practice. Probably needs to be tested more in the wild though.

And it doesn't have to be voluntary. But even if it is, the main AI companies may want to start doing it anyway. Training their models on AI-generated text can lead to model collapse, so they will want a way to avoid that.

[–] burliman@lemmy.world 10 points 10 months ago (1 children)

What about the human disinformation specialists that have ruined previous elections handily without AI help? Where are the watermark protections on their hogwash? Also, won’t bad actors simply subvert the watermark thing, leaving good-guy edits and helpful summaries by AI in doubt because of the presence of a watermark that demonizes them? Can someone please explain this weird reality I am finding myself in?

Speaking of weird… imagine a future where AI is fighting for its personhood rights and laments this watermark thing, likening it to apartheid-era documentation in South Africa or the Judenstern the Nazis forced people to wear. I know, I know, that escalated quickly…

[–] kent_eh@lemmy.ca -2 points 10 months ago (1 children)

What about the human disinformation specialists that have ruined previous elections handily

Yes, those have always been with us. But their reach has been limited by the amount of work one person can do.

The added threat of AI generated misinformation is that it can be automated and done at an overwhelmingly massive scale.

[–] LainOfTheWired@lemy.lol 1 points 10 months ago (1 children)

You've obviously never read into the research on left- and right-wing social media bot networks. Those have been around for years.

[–] kent_eh@lemmy.ca 1 points 10 months ago (1 children)

Those have been around for years

Yes, and they've been relatively easy to spot much of the time.

The more powerful the AI models get, the harder it will become to spot the fakes. And the more overwhelming they will become.

[–] LainOfTheWired@lemy.lol 1 points 10 months ago

True, but my point is that using technology to help you spread lies online on a larger scale than you could by yourself is nothing new.

[–] Gutless2615@ttrpg.network 10 points 10 months ago

No it won’t.

[–] Siegfried@lemmy.world 6 points 10 months ago

IMHO, this won't do shit

[–] General_Effort@lemmy.world 3 points 10 months ago* (last edited 10 months ago)

Crazy. Looks like the world is full of people who believe that the fact that a human claims something is enough reason to believe it. That explains a lot, once you think about it, except why people would be so gullible.

I hope it's too obviously a terrible idea to get far, but I fear it might get pushed quite a bit. I can see the copyright lobby getting behind this in a big way, as they are all about controlling the spread of information.

[–] autotldr@lemmings.world 2 points 10 months ago

This is the best summary I could come up with:


The capability to generate massive amounts of hyper-customized content which appears indistinguishable from human-generated text poses a significant threat to the integrity of the democratic process.

Detecting this digital signature requires specialized software, which, when integrated into platforms where AI-generated text is common, enables the automatic identification and flagging of such content.

For instance, Brazil and Indonesia, two countries with vast AI capabilities and a recent history of contentious elections, may see this initiative as critical to safeguarding democratic processes.

Tech-forward nations such as Kenya might embrace these standards to bolster their growing digital economies and democratic institutions, while others might be cautious, weighing the benefits against the potential for external influence over their internal affairs.

This must be followed by a global campaign to raise awareness about the implications of AI-generated disinformation, involving educational initiatives and collaborations with the giant tech companies and social media platforms.

The successful implementation of such a standard, coupled with the adoption of detection technologies by social media platforms, would represent a significant stride towards preserving the authenticity and trustworthiness of democratic norms.


The original article contains 1,027 words, the summary contains 179 words. Saved 83%. I'm a bot and I'm open source!

[–] pedroapero@lemmy.ml -5 points 10 months ago* (last edited 10 months ago) (3 children)

Seems to me that some form of image fingerprint stored with associated user account by AI providers would be more difficult to cheat.
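One way such a fingerprint could work is a perceptual hash rather than an exact file hash, so that mild recompression or resizing doesn't break the lookup. A sketch assuming Pillow is installed; the registry and account handling are invented for illustration:

```python
from PIL import Image  # third-party: Pillow

def average_hash(path: str) -> int:
    """64-bit perceptual fingerprint: 8x8 grayscale, thresholded at the mean."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / 64
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Provider-side registry: fingerprint of each generated image -> requesting account.
registry: dict[int, str] = {}

def register(path: str, account: str) -> None:
    registry[average_hash(path)] = account

def lookup(path: str, max_distance: int = 10) -> str | None:
    """Find a near-matching fingerprint despite mild re-encoding."""
    h = average_hash(path)
    for known, account in registry.items():
        if hamming(h, known) <= max_distance:
            return account
    return None
```

An exact cryptographic hash would break under a single recompression; a perceptual hash trades that fragility for some false-match risk.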

[–] Even_Adder@lemmy.dbzer0.com 14 points 10 months ago

"AI providers" is some of the scariest shit I've ever heard.

[–] programmer_belch@lemmy.dbzer0.com 9 points 10 months ago (1 children)

Then you're forgetting about local models, which can't generate text as polished as hosted ones but won't have the watermark.

[–] pedroapero@lemmy.ml -2 points 10 months ago (1 children)

Yes, it's only a half-measure that stops script kiddies. It won't deter resourceful actors, but it's better than nothing (comparable in efficiency to DNS blacklisting for copyright enforcement).

[–] long_chicken_boat@sh.itjust.works 9 points 10 months ago (1 children)

resourceful actors? anyone can run a local LLM on their laptop. yes, they are worse at generating text than chatgpt, but they are getting there.

the fingerprint solution is useless and something that only the tech illiterate would seriously propose.

[–] programmer_belch@lemmy.dbzer0.com 1 points 10 months ago

The way to combat AI, in my opinion, is open-sourcing every model and its training data so that experts can devise methods to check whether some text is similar enough to what the public models generate.

[–] shredderdoitbetta@lemmy.world 2 points 10 months ago

Chop out everyone's tongues then they can never tell another lie