
"To prevent disinformation from eroding democratic values worldwide, the U.S. must establish a global watermarking standard for text-based AI-generated content," writes retired U.S. Army Col. Joe Buccino in an opinion piece for The Hill. While President Biden's October executive order requires watermarking of AI-derived video and imagery, it offers no watermarking requirement for text-based content. "Text-based AI represents the greatest danger to election misinformation, as it can respond in real-time, creating the illusion of a real-time social media exchange," writes Buccino. "Chatbots armed with large language models trained with reams of data represent a catastrophic risk to the integrity of elections and democratic norms."

Joe Buccino is a retired U.S. Army colonel who serves as an AI research analyst with the U.S. Department of Defense's Defense Innovation Board. He served as U.S. Central Command communications director from 2021 until September 2023. Here's an excerpt from his piece:

Watermarking text-based AI content involves embedding unique, identifiable information -- a digital signature documenting the AI model used and the generation date -- into the metadata of the generated text to indicate its artificial origin. Detecting this digital signature requires specialized software, which, when integrated into platforms where AI-generated text is common, enables the automatic identification and flagging of such content. This process gets complicated in instances where AI-generated text is manipulated slightly by the user. For example, a high school student may make minor modifications to a homework essay created through ChatGPT-4. These modifications may drop the digital signature from the document. However, that kind of scenario is not of great concern in the most troubling cases, where chatbots are let loose in massive numbers to accomplish their programmed tasks. Disinformation campaigns require such a large volume of chatbots that it is no longer feasible to modify their output once released.
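For illustration only, here is a minimal sketch of how such a metadata signature could be generated and checked -- an HMAC binding the model name and generation date to the exact text, so that even the small edits described above break verification. The key handling, field names, and JSON layout are assumptions for the example, not any existing or proposed standard.

```python
# Illustrative sketch only: one way a metadata watermark could work, using an
# HMAC over the generated text plus provenance fields. The signing key, field
# names, and record format are assumptions, not part of any real standard.
import hashlib
import hmac
import json
from datetime import date

SIGNING_KEY = b"model-provider-secret"  # hypothetical key held by the model provider


def watermark(text: str, model_id: str) -> dict:
    """Attach provenance metadata and a signature binding it to the text."""
    meta = {"model": model_id, "generated": date.today().isoformat()}
    payload = json.dumps(meta, sort_keys=True) + text
    meta["signature"] = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "meta": meta}


def verify(record: dict) -> bool:
    """Detection-side check: does the signature still match the text?"""
    meta = dict(record["meta"])
    sig = meta.pop("signature")
    payload = json.dumps(meta, sort_keys=True) + record["text"]
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)


record = watermark("Vote early and check your polling place.", "example-llm-1")
print(verify(record))          # True: untouched AI output is flagged as signed
record["text"] += " (edited)"  # the student-edit scenario from above
print(verify(record))          # False: the signature no longer matches the text
```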

The U.S. should create a standard digital signature for text, then partner with the EU and China to lead the world in adopting this standard. Once such a global standard is established, the next step will follow -- social media platforms adopting the metadata recognition software and publicly flagging AI-generated text. Social media giants are sure to respond to international pressure on this issue. The call for a global watermarking standard must navigate diverse international perspectives and regulatory frameworks. A global standard for watermarking AI-generated text ahead of 2024's elections is ambitious -- an undertaking that encompasses diplomatic and legislative complexities as well as technical challenges. A foundational step would involve the U.S. publicly accepting and advocating for a standard of marking and detection. This must be followed by a global campaign to raise awareness about the implications of AI-generated disinformation, involving educational initiatives and collaborations with the giant tech companies and social media platforms.
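Again purely as a sketch, the platform-side step described here might look like an ingestion hook that labels posts whose provenance metadata passes a verifier such as the verify() function above. The post fields, the stand-in verifier, and the "ai-generated" label are assumptions made for the example.

```python
# Illustrative sketch only: how a platform's ingestion pipeline might flag
# posts once a metadata standard and a verifier exist. The post structure and
# label name are assumptions, not any platform's actual API.
from typing import Callable


def flag_posts(posts: list[dict], verify: Callable[[dict], bool]) -> list[dict]:
    """Label each post whose provenance metadata carries a valid AI signature."""
    flagged = []
    for post in posts:
        meta = post.get("meta")
        is_ai = bool(meta) and verify(post)
        flagged.append({**post, "label": "ai-generated" if is_ai else None})
    return flagged


# Example with a stand-in verifier that treats any post carrying metadata as signed:
posts = [
    {"text": "Generated announcement.", "meta": {"model": "example-llm-1"}},
    {"text": "I wrote this myself.", "meta": None},
]
print(flag_posts(posts, lambda post: True))
```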

In 2024, generative AI and democratic elections are set to collide. Establishing a global watermarking standard for text-based generative AI content represents a commitment to upholding the integrity of democratic institutions. The U.S. has the opportunity to lead this initiative, setting a precedent for responsible AI use worldwide. The successful implementation of such a standard, coupled with the adoption of detection technologies by social media platforms, would represent a significant stride towards preserving the authenticity and trustworthiness of democratic norms.

Excerpt credit: https://slashdot.org/story/423285

[–] exu@feditown.com 13 points 10 months ago (2 children)

Imo this would be impossible to implement. The user can just remove whatever mark was inserted.

I'll also leave this here: https://github.com/ggerganov/llama.cpp

[–] Not_mikey@lemmy.world 1 point 10 months ago (1 child)
[–] fishos@lemmy.world 6 points 10 months ago

Which requires you to implement the watermark saying you're an AI. Just... Don't. If a regular person can make a watermark saying they are a real person, what's to stop an AI from doing the same? What can the human do that the AI can't? Unless you go down the draconian "everyone has a real ID linked to their digital persona" route. And what's to stop an AI from creating the text and a human from copying it and posting it as their own? Click farms already exist.