Yes, that's what I said. There are no "additional restrictions" from having a GPL license on something. The GPL works by granting rights that weren't already present under default copyright. You can reject the GPL on a piece of open-source software if you want to, but then you lose the additional rights that the GPL gives you.
I'd say it can be a problem because there have been examples of getting AIs to spit out entire copyrighted passages.
Examples that have turned out to be either the result of great effort to force the output to be a copy, the result of poor training techniques that cause overfitting, or both combined.
If this is really such a straightforward case of copyright violation, surely there are court cases where it's been ruled to be so? People keep arguing legality without ever referencing case law, just news articles.
Furthermore, some works can have additional restrictions on their use. I couldn't, for example, train an AI on Linux source code, have it spit out the exact source code, and then slap my own proprietary commercial license on it to bypass the GPL.
That's literally still just copyright. There's no "additional restrictions" at play here.
Learning what a character looks like is not a copyright violation. I'm not a great artist, but I could probably draw a picture that's recognizably Mario; does that mean my brain is somehow a violation of copyright?
Yet evidence supports it, while you have presented none to support your claims.
I presented some; you actually referenced what I presented in the very comment where you're saying I presented none.
You can actually support your case very simply and easily. Just find the case law where AI training has been ruled a copyright violation. It's been a couple of years now (as evidenced by the age of that news article you dug up), yet all the lawsuits are languishing or defunct.
Very basically, yes. But the result is a model that doesn't actually contain the training data; it's far too small for that to be physically possible.
Sure. But that's not what's happening when an AI is trained. It's not "stealing" the script or content of the video; it's analyzing them.
That article is over a year old. The NYT case against OpenAI turned out to be quite flimsy; their evidence was heavily massaged. What they did was pick an article of theirs that had been widely copied across the Internet (and was thus likely to be "overfit", a training flaw that AI trainers now actively avoid), give ChatGPT the first 90% of the article, and tell it to complete the rest. They tried over and over again until eventually something that closely resembled the remaining 10% came out, at which point they took a snapshot and went "aha, copyright violated!"
They had to spend a lot of effort to get even that flimsy case. It likely wouldn't work on a modern AI; training techniques are much better now, overfitting is more carefully avoided, and synthetic data is used.
Why do you think that of all the observable patterns, the AI will specifically copy "ideas" and "styles" but never copyrighted works of art?
Because it's literally physically impossible. The classic example is Stable Diffusion 1.5, which had a model size of around 4GB and was trained on over 5 billion images (the LAION-5B dataset). If it were actually storing the images it was trained on, it would be compressing each one to under 1 byte of data.
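To make that arithmetic concrete, here's a minimal back-of-the-envelope sketch (using the approximate figures cited above, not exact counts):

```python
# Rough storage-per-image calculation, assuming a ~4 GB checkpoint
# and ~5 billion training images as cited above (approximate figures).
model_size_bytes = 4 * 1024**3       # Stable Diffusion 1.5 checkpoint, ~4 GB
training_images = 5_000_000_000      # LAION-5B, over 5 billion images

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.2f} bytes per image")  # prints ~0.86 bytes

# Even a low-quality JPEG thumbnail is tens of kilobytes, so "storing"
# each training image would require a compression ratio far beyond
# anything physically possible; the model learns statistics, not copies.
```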
AIs don't seem to be able to distinguish between abstract ideas like "plumbers fix pipes" and specific copyright-protected works of art.
This is simply incorrect.
This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I've seen about it leave out a bunch of significant details, so it ends up sounding more like an "ooh, scary AI!" story (that baits clicks better) rather than a "parents not paying attention to their disturbed kid's cries for help and instead leaving loaded weapons lying around" story (as old as time, at least in America).
Don't make "profiteering AI companies" pay for UBI. Make all companies pay for UBI. Just tax their income and turn it around into UBI payments.
One of the major benefits of UBI is how simple it is. The simpler the system is the harder it is to game it. If you put a bunch of caveats on which companies pay more or pay less based on various factors, then there'll be tons of faffing about to dodge those taxes.
Copyright, yes, it's a problem and should be fixed.
No, this is just playing into another of the common anti-AI fallacies.
Training an AI does not do anything that copyright even covers, let alone prohibits. Copyright is solely concerned with the copying of specific expressions of ideas, not with the ideas themselves. When an AI trains on data it isn't copying the data; the model doesn't "contain" the training data in any meaningful sense. And the output of the AI is even further removed.
People who insist that AI training is violating copyright are advocating for ideas and styles to be covered by copyright. Or rather by some other entirely new type of IP protection, since, as I said, this is nothing at all like what copyright already deals with. This would be an utterly terrible thing for culture and free expression in general if it were to come to pass.
I get where this impulse comes from. Modern society has instilled a general sense that everything has to be "owned" by someone, even completely abstract things. Everyone thinks that they're owed payment for everything they can possibly demand payment for, even if it's something that just yesterday they were doing purely for fun and releasing to the world without a care. There's a base impulse of "mine! Therefore I must control it!" Ironically, it's what leads to the capitalist hellscape so many people are decrying even as they demand more of it.
You don't see how one leads directly to the other? Full-grown adults are the users of those corporations' products. If the corporations aren't allowed to put certain features in those products, then that's the same as prohibiting their users from using those features.
Imagine if there was a government regulation that prohibited the sale of cars with red paint on them. They're not prohibiting an individual person from owning a car with red paint, they're not prohibiting individuals from painting their own cars red, but don't you think that'll make it a lot harder for individuals to get red cars if they want them?
You’re acting as if the bot had some sort of intention to help him.
No I'm not. I'm describing what actually happened. It doesn't matter what the bot's "intentions" were.
The larger picture here is that these news articles are misrepresenting the events they're reporting on by omitting significant details.
The parents weren't paying attention to their obviously disturbed kid and they left a gun lying around for him to find. But sure, it was the chatbot that was the problem. Everything would have been perfectly fine forever without it.