FaceDeer

[–] FaceDeer@fedia.io 0 points 1 month ago (1 children)

Modern LLMs are trained on highly curated and processed data, often synthetic data derived from the original posts rather than the posts themselves. And the trainers are well aware that there are people trying to "poison" the data in various ways. At this point, when people try, it's mainly an annoyance to other humans.

[–] FaceDeer@fedia.io 0 points 1 month ago (1 children)

Again, they are not universally enforceable. There are plenty of jurisdictions where they are not.

[–] FaceDeer@fedia.io 0 points 1 month ago (1 children)

Which company is "the AI company?"

[–] FaceDeer@fedia.io -2 points 1 month ago (3 children)

The enforceability of EULAs varies with jurisdiction and with the actual contents of the EULA. It's by no means a universally accepted thing.

It's funny how suddenly large chunks of the Internet are cheering on EULAs and copyright enforcement by giant megacorporations because they've become convinced that AI is Satan.

[–] FaceDeer@fedia.io -4 points 1 month ago (2 children)

If it's paywalled how did they access it?

[–] FaceDeer@fedia.io -4 points 1 month ago (1 children)

The problem with those things is that the viewer doesn't need the license in order to analyze them. They can simply decline the license. Licenses don't apply automatically; you have to accept them. And since they're contracts, they need to offer consideration, not just impose restrictions.

An AI model is not a derivative work, it doesn't include any identifiable pieces of the training data.

[–] FaceDeer@fedia.io 30 points 1 month ago (3 children)

So charge them an appropriate price for the scarce resource they're using.
