this post was submitted on 26 Aug 2024
342 points (96.7% liked)

you are viewing a single comment's thread
[–] floofloof@lemmy.ca 1 points 2 months ago (3 children)

Maybe this will become a major driver for the improvement of AI watermarking and detection techniques. If AI companies want to continue sucking up the whole internet to train their models on, they'll have to be able to filter out the AI-generated content.

[–] silence7@slrpnk.net 1 points 2 months ago (2 children)

"filter out" is an arms race, and watermarking has very real limitations when it comes to textual content.

[–] floofloof@lemmy.ca 1 points 2 months ago (1 children)

I'm interested in this but not very familiar. Are the limitations to do with brittleness (not surviving minor edits) and the need for text to be long enough for statistical effects to become visible?

[–] silence7@slrpnk.net 2 points 2 months ago

Yes. Also, non-native speakers of a language tend to follow word-choice patterns similar to those of LLMs, which creates a whole class of false positives in detection.
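The length limitation discussed above can be sketched with a toy version of a "green-list" watermark detector (in the style of Kirchenbauer et al.'s scheme, where the generator is biased toward a secret subset of tokens). The function name, the 50% baseline green fraction, and the 60% watermarked skew are all illustrative assumptions, not any real detector's parameters:

```python
import math

def greenlist_z_score(n_tokens: int, green_fraction: float, gamma: float = 0.5) -> float:
    """z-score against the null hypothesis that text is unwatermarked.

    gamma is the fraction of 'green' tokens expected by chance;
    a watermarked generator is assumed to skew green_fraction above it.
    """
    observed = green_fraction * n_tokens
    expected = gamma * n_tokens
    variance = n_tokens * gamma * (1 - gamma)  # binomial variance under the null
    return (observed - expected) / math.sqrt(variance)

# A hypothetical watermark skews ~60% of tokens green. The detection
# confidence (z-score) grows with sqrt(length), so short texts are
# statistically indistinguishable from unwatermarked ones.
for n in (20, 100, 500):
    print(n, round(greenlist_z_score(n, 0.6), 2))
```

With a fixed 10-point skew the z-score is 0.2·√n, so a 20-token comment scores well under 1 (noise), while a 500-token article clears 4; this is why detection needs long passages, and why light edits that dilute the skew also weaken it.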