Capricorn_Geriatric

joined 1 year ago
[–] Capricorn_Geriatric@lemmy.world 16 points 1 month ago (5 children)

Not necessarily. A spun-off YouTube would still have YouTube Premium and ad revenue. They could also sell user data to 3rd parties (I doubt Google currently does this on a large scale, since it's in their interest to have a better ad network than its competitors). A move similar to Reddit's with their API and exclusive search agreement, or agreements to feed certain videos to AI, would both fetch a higher price and hurt quality less, since the vast majority of videos watched are found through YouTube itself.

[–] Capricorn_Geriatric@lemmy.world 10 points 1 month ago* (last edited 1 month ago) (1 children)

Also, the collar may cause slight discomfort including (but not limited to) itching, rashes, choking and allergic reactions. For such cases, we have technicians available in 20+ of the world's largest cities to help you alleviate the symptoms! (You'll have to get an appointment through a fake AI robocall first)

T&C: Any attempt to touch the collar by a person not wearing it will cause the collar to start burning the flesh of both the toucher and the wearer. When the wearer wishes to use Adobe Elements, they have to plug their collar into the computer. Only the wearer may touch the wire of the collar - any attempt by a 3rd party to touch it will cause an 80 dB screeching noise to be emitted by the collar. Any complaints must be arbitrated. We will not budge like those pussies over at Disney. If you're an EU citizen, you have to renounce your citizenship if you wish to use Adobe products. Our products may only be used in progressive democracies with strong corporate freedom of forced arbitration.

Spoiler: Tbh I think I sold them way too short, since their agreement would be at least 35,000 words long

Wasn't there an N64 Pokémon game (Pokémon Snap?) where you take photos of Pokémon?

I guess Nintendo quashed its own patent.

Agreed.

I didn't listen to the podcast so I wouldn't know, but honestly, she was lucky. She's popular, and her publishers had an interest in the case (they'd lose out on profits if she lost). And she initially did lose. It was only because of the publicity of the case that the ruling was overturned (although money did help as well).

Unfortunately, this could've happened to any smaller artist, and it routinely happens with the patent trolls I pointed to. I don't have a lawsuit I can point to, but given the volume, one surely exists.

Also, it's not as if I approve of the current state of copyright in the US (or EU for that matter).

Originally, copyright was meant to protect the rights of the author, but over time it was bastardised into the concept we have today, where artists sign their rights over to publishers.

So my proposal is: if corporations like copyright, let them have it. I won't watch Disney movies outside of Disney+. It's the system we've got and have to live with, so why not let the corporations feel it as well?

Why should Google, which makes loads of money from those demonetizations on one side of the law, now be allowed to use the copyrighted works of others for profit, while Internet users in the US get a fine or their service cut for alleged copyright infringement, and those in Germany get a stern letter with a big fake fine?

Big Tech shouldn't get to profit from false copyright infringement claims while also getting to use the actual copyrighted content to generate a profit.

This whole AI copyright situation is just a symptom of an ailing global copyright policy that needs to be fixed, and slapping an AI-free-for-all band-aid on top isn't a fix.

My train of thought is this: if we don't let a simple AI exception into the books, either training AI on copyrighted content stays illegal, or the entire system gets a reimagining.

If it stays the same, this won't mean much. Piracy sites and torrenting exist despite the current state of copyright law. I don't see why AI couldn't exist in the same way. This has the huge plus of keeping AI out of the hands of Big Tech. Hopefully it also means it's harder for harmful uses of AI to be legal.

Alternatively, we get a better copyright system for everyone, assuming it isn't made to only benefit the corporations.

[–] Capricorn_Geriatric@lemmy.world 15 points 2 months ago (5 children)

Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves.

Sure.

When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

Not really. Sure, they take input and garble it up, and it is "transformative" - but so is a human watching a TV series on a pirate site, for example. Hell, even when it's educational, it's treated as a copyright violation.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.

Perhaps. (Not an AI expert.) But, as the law currently stands, only living and breathing persons can be educated, so the "educational" fair use protection doesn't stand.

The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.

It does and it doesn't discard the original. It isn't impossible to recreate the original, since all the data it gobbled up gets stored somewhere in some shape or form and can be faithfully recreated (at least judging by a few comments below and news reports). So AI can and does recreate (duplicate, or perhaps distribute) copyrighted works.

Besides, for a copyright violation, "substantial similarity" is needed, not one-for-one reproduction.

This is fundamentally different from copying a book or song.

Again, not really.

It's more like the long-standing artistic tradition of being influenced by others' work.

Sure. Except when it isn't, and the AI pumps out the original or something close enough to it.

The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

I'd be careful with the "always" part. There was a famous case in which Katy Perry was sued over a short musical phrase as copyright infringement. The case was thrown out on appeal, but I don't doubt that some pretty wild cases have been upheld as copyright violations (see "patent troll").

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

The problem is that Google Books only lets you search for some phrase and have it pop up as being from source X. It doesn't reproduce the work (other than perhaps the page the phrase was on). Well, it does have the capability, since the text is in the index somewhere, but there are checks in place to make sure reproduction doesn't happen - checks which seem to be as yet unachieved in AI.
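The snippet-limiting idea described above can be sketched as a toy index. This is purely illustrative - the class name, the 8-word cap, and the single-word search are my assumptions, not Google's actual design - but it shows how a system can store full text for lookup while structurally refusing to hand the full text back:

```python
from collections import defaultdict

class SnippetIndex:
    """Toy full-text index in the spirit of snippet-limited search:
    queries reveal WHERE a word occurs, but retrieval is hard-capped
    to a short snippet, never the whole source text."""

    SNIPPET_WORDS = 8  # the reproduction check: never return more than this

    def __init__(self):
        self.docs = {}                  # doc_id -> full word list (stored, but gated)
        self.index = defaultdict(set)   # word -> doc_ids containing it

    def add(self, doc_id, text):
        words = text.lower().split()
        self.docs[doc_id] = words
        for w in words:
            self.index[w].add(doc_id)

    def search(self, word):
        """Return (doc_id, snippet) pairs; each snippet is capped."""
        word = word.lower()
        results = []
        for doc_id in sorted(self.index.get(word, ())):
            words = self.docs[doc_id]
            pos = words.index(word)
            lo = max(0, pos - self.SNIPPET_WORDS // 2)
            snippet = " ".join(words[lo:lo + self.SNIPPET_WORDS])
            results.append((doc_id, snippet))
        return results
```

The point of the sketch: the full text *is* in there (just like training data is "in" a model in some form), and only the explicit cap in `search` keeps it from coming back out.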

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate.

Yes. Just as labeling piracy as theft is.

We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or

Yes, new legislation will be made to either let "Big AI" do as it pleases, or to prevent it from doing so. Or, as usual, it'll be somewhere in between and vary from jurisdiction to jurisdiction.

However,

that doesn't make the current use of copyrighted works for AI training illegal or unethical.

this doesn't really stand. Sure, morals are debatable, and while I'd say it is more unethical than private piracy (which involves no distribution), since distribution and dissemination are involved here, you do not seem to feel the same.

However, the law is clear. Private piracy - recording a song off the radio, recording a TV broadcast, screen-recording a Netflix movie, etc. - is legal. As is digitizing books and lending out the digital copy (as long as you have a physical copy, representing the legal "original", that isn't lent out at the same time). I think breaking DRM also isn't illegal (but someone please correct me if I'm wrong).
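The lending rule above boils down to a simple invariant: digital loans may never exceed physical copies owned. A minimal sketch (the class and method names are mine, not any real library system's):

```python
class Library:
    """Toy model of controlled digital lending: a digital copy may
    circulate only while a physical 'original' backs it, i.e. the
    owned-to-loaned ratio never drops below 1:1 per title."""

    def __init__(self, physical_copies):
        self.owned = dict(physical_copies)           # title -> copies owned
        self.loaned = {t: 0 for t in self.owned}     # title -> copies out (any format)

    def lend(self, title):
        # The invariant: refuse a loan that would exceed owned copies.
        if self.loaned.get(title, 0) >= self.owned.get(title, 0):
            raise RuntimeError(f"no lendable copy of {title!r}")
        self.loaned[title] += 1

    def return_copy(self, title):
        if self.loaned.get(title, 0) > 0:
            self.loaned[title] -= 1
```

With one physical copy of a book, a second simultaneous loan is refused until the first comes back - which is the whole "original stays on the shelf" condition in code form.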

The problem arises when the pirated content is copied and distributed in an uncontrolled manner, which AI seems to be capable of. That makes the AI owner just as liable for piracy as hosts of "classic" pirated content distributed on the Web, if the AI reproduces not even the same, but merely "substantially similar" output.
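As a rough illustration only - courts weigh far more than mechanical overlap, and this is in no way the legal test - a crude "substantial similarity" screen of the kind AI vendors could run on output might compare word n-grams:

```python
def ngrams(text, n=3):
    """Set of word-level n-grams of a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard overlap of n-gram sets: 0.0 = disjoint, 1.0 = identical.
    A high score flags output for review; it proves nothing by itself."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

Verbatim regurgitation scores 1.0, unrelated text scores 0.0, and close paraphrases land in between - which is exactly the grey zone "substantially similar" points at.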

Obligatory IANAL. As far as the law goes, I focused on US law, since the default country on here is the US. Similar or different laws are on the books in other places, although most are in fact substantially similar. Also, what the legislators come up with will definitely vary from place to place, even more so than copyright law, since copyright law is partially harmonised (see the Berne Convention).

[–] Capricorn_Geriatric@lemmy.world 13 points 2 months ago (4 children)

Wait, wait, wait. Canva bought Affinity?!

[–] Capricorn_Geriatric@lemmy.world 56 points 2 months ago (5 children)

Wouldn't want to be mean to Facebook users, but the vast majority of them probably have microphone access enabled for Messenger at least, if not Facebook.

They haven't stopped producing them... Yet. They're just planning to.

Greed.

Sure, they want you to run Win11, but chances are you're already running it, or at least Win10, so there's not much to gain there.

By setting higher requirements for Win11 than necessary, Microsoft makes a killing on Windows licences.

OEMs have to pay Microsoft for keys. And for MS to make money off of keys, OEMs need to make more PCs. And how does MS force/incentivise them to do that? By making 80% of Win10 PCs incompatible with Win11.

Oh, and also, now they get to push their Copilot key as well.

Microsoft has a vested interest in PC sales not stagnating any more than they do, and sometimes it takes an artificial push to make that a reality.

[–] Capricorn_Geriatric@lemmy.world 10 points 3 months ago* (last edited 3 months ago)

Today, sure.

2005 was a different story, one that's the opposite of this one.

While Vista's stated requirements weren't high, it gobbled resources, so upgrading from XP to Vista meant a noticeable slowdown.

Win11 is the opposite of that story. While modern PC models (as in, 5 years old when Win11 first came out) can run Win11 fine, Microsoft enforces requirements which aren't needed.

Sure, having a better TPM and a newer processor is a good thing, but making everything older than that e-waste (because Windows runs on 90+% of consumer PCs, with Apple being the majority of the remaining 10%) definitely isn't.

Depends on their methodology. Sure, a huge proportion of those are users who haven't heard of uBO, but we're forgetting a lot of caveats:

  1. Electron exists and lots of apps are built on top of it and identify as "Chrome". Judging by the numbers most have been weeded out, but some edge cases do visit more sites so they end up in the count.
  2. A lot of workplaces mandate the browser, which is often Chrome. This also gets counted.
  3. A not-insignificant number of Firefox users change their user agent to Chrome.

All of these skew the numbers towards Chrome. And some Chrome users use a different ad blocker, which lowers the uBO statistic.
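For a sense of how those caveats skew counts: market-share trackers bucket visitors by user-agent string, roughly like the sketch below (the real trackers' heuristics are unknown to me; the substring checks here are my assumption). Note there's no Electron branch at all, and a spoofed Firefox is indistinguishable from real Chrome:

```python
def classify(user_agent):
    """Naive UA-string bucketing. Order matters: Chrome's UA also
    contains "Safari", and Edge's also contains "Chrome"."""
    ua = user_agent.lower()
    if "firefox" in ua:
        return "Firefox"
    if "edg/" in ua:
        return "Edge"
    if "chrome" in ua:
        return "Chrome"
    if "safari" in ua:
        return "Safari"
    return "Other"

# An Electron app's UA contains "Chrome", so it lands in the Chrome bucket,
# and a Firefox user who spoofs a Chrome UA does too.
electron_ua = "Mozilla/5.0 my-app/1.0 Chrome/120.0.0.0 Electron/28.1.0 Safari/537.36"
```

Every misclassified visit inflates Chrome's share while deflating the apparent uBO install rate among "Chrome" users.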

[–] Capricorn_Geriatric@lemmy.world 3 points 3 months ago* (last edited 3 months ago)

"The browser built to be piping your data into our hands"

There, fixed.
