this post was submitted on 19 Sep 2024
489 points (99.6% liked)

LinkedIn users in the U.S. — but not the EU, EEA, or Switzerland, likely due to those regions’ data privacy rules — have an opt-out toggle in their settings screen disclosing that LinkedIn scrapes personal data to train “content creation AI models.” The toggle isn’t new. But, as first reported by 404 Media, LinkedIn initially didn’t refresh its privacy policy to reflect the data use.

The terms of service have now been updated, but ordinarily that happens well before a big change like putting user data to a new use. The idea is to give users a chance to make account changes or leave the platform if they don’t like the new terms. Not this time, it seems.

To opt out of LinkedIn’s data scraping, head to the “Data Privacy” section of the LinkedIn settings menu on desktop, click “Data for Generative AI improvement,” then toggle off the “Use my data for training content creation AI models” option. You can also attempt to opt out more comprehensively via this form, but LinkedIn notes that any opt-out won’t affect training that’s already taken place.

The nonprofit Open Rights Group (ORG) has called on the Information Commissioner’s Office (ICO), the U.K.’s independent regulator for data protection rights, to investigate LinkedIn and other social networks that train on user data by default.

“LinkedIn is the latest social media company found to be processing our data without asking for consent,” Mariano delli Santi, ORG’s legal and policy officer, said in a statement. “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.”

all 37 comments
[–] ryper@lemmy.ca 91 points 2 months ago* (last edited 2 months ago) (3 children)

The terms of service have now been updated, but ordinarily that happens well before a big change like putting user data to a new use. The idea is to give users a chance to make account changes or leave the platform if they don’t like the new terms. Not this time, it seems.

They should be required to delete their training data and start over after people have had a chance to opt in.

This isn't just in the US; I've got the setting in Canada and I'd assume it's in just about any country where LinkedIn is available that isn't on the very short list of exceptions.

[–] SeaJ@lemm.ee 28 points 2 months ago

But if they didn't get free resources, they wouldn't be profitable!

Sadly that is the excuse many AI companies use.

[–] FrostyPolicy@suppo.fi 8 points 2 months ago

I'm in the EU and that section in the settings isn't even there. I guess they aren't doing it here, for now at least. Probably due to GDPR.

[–] njordomir@lemmy.world 1 points 2 months ago

If we can't put the cat back in the bag, then the stolen data should be open-sourced and they should be regulated out of the AI space for a few years to allow companies not stealing data (I assume there are some) to catch up.

[–] Sineljora@sh.itjust.works 26 points 2 months ago (4 children)

What’s the best way to salt your LinkedIn profile to provide bad data for GenAI improvements?

[–] 4am@lemm.ee 22 points 2 months ago

Get ChatGPT to write the field values for you

[–] aleq@lemmy.world 4 points 2 months ago

Just write some content with no soul and not a shred of humanity present. I.e. use the platform as intended.

[–] AceFuzzLord@lemm.ee 3 points 2 months ago* (last edited 2 months ago) (1 children)

Just copy every post you find on something like LinkedIn Lunatics. Spread them out over time, since you'd hope they have spam filters, and keep repeating the same things while adding new stuff to the list. Do this with as many accounts as you can make.

Enough people do it and hopefully their AI would be constantly saying batshit things, assuming the accounts don't get blocked.

[–] Empricorn@feddit.nl 1 points 2 months ago (1 children)

Enough people do it and hopefully their AI would be constantly saying batshit things

How would you be able to tell the difference?

[–] AceFuzzLord@lemm.ee 1 points 2 months ago

I assume any AI they might currently be making would have some level of sanity to it and give sane responses. So if enough people poison the data, the number of sane responses would probably go down if there are more LinkedIn Lunatics-type responses in the training data than normal ones.

[–] pdxfed@lemmy.world 2 points 2 months ago

Clippy tips.

[–] friend_of_satan@lemmy.world 22 points 2 months ago (2 children)
[–] Appoxo@lemmy.dbzer0.com 8 points 2 months ago (2 children)

Lol they say (location is Germany) it's not available in my region.

[–] pdxfed@lemmy.world 5 points 2 months ago

Because they have real data protections and MS knows it's illegal. Funny how actually enforced regulations protect citizens.

I think a lot of these companies have been going ham because they know something is coming in the US, and they want to do all their pillaging while it's still technically legal and the data is still current and monetizable.

[–] friend_of_satan@lemmy.world 3 points 2 months ago

Well fuck me haha

[–] alphacyberranger@sh.itjust.works 6 points 2 months ago

Thanks. I just checked the Advertising Preferences as well. Holy fuck the data they are collecting is insane.

[–] PrivacyDingus@lemmy.world 19 points 2 months ago (2 children)

We are about to witness the most unbearable chatbot of all time.

[–] Zip2@feddit.uk 6 points 2 months ago (1 children)

What happens when the AI is trained on stuff that was made up to begin with?

[–] PrivacyDingus@lemmy.world 3 points 2 months ago

Strangely it only tells the truth.

[–] thatsnothowyoudoit@lemmy.ca 6 points 2 months ago* (last edited 2 months ago) (1 children)

An ex-Google, ex-Apple, leadership chatbot focused on improving outcomes with data and cat memes, hustling 24/7.

[–] PrivacyDingus@lemmy.world 2 points 2 months ago

I think we might possibly see a kind of catastrophic event due to this quantity of hustling. The universe was not made for this.

[–] rsuri@lemmy.world 18 points 2 months ago

So when do I get my $0.34 settlement check?

[–] perviouslyiner@lemmy.world 16 points 2 months ago

When your first notification about the change is a Mastodon post telling you to look for a pre-checked checkbox that wasn't there before...

[–] ryper@lemmy.ca 13 points 2 months ago* (last edited 2 months ago)

LinkedIn's blog post on this isn't at all apologetic, just "the privacy policy already let us do this but we've updated it to be clearer." I was expecting them to say something accidentally went live early or there was some other mistake. Nope, it's all according to plan. Fuck you LinkedIn.

[–] Kalkaline@leminal.space 10 points 2 months ago

Can't wait for this comment to be scraped by AI umbrella quiet cannon delve.

[–] Appoxo@lemmy.dbzer0.com 10 points 2 months ago (1 children)

Common EU win lol.

I feel your you dear US citizens.

[–] Zip2@feddit.uk 1 points 2 months ago* (last edited 2 months ago) (1 children)

I feel your you dear US citizens.

What are you feeling of theirs?

[–] Appoxo@lemmy.dbzer0.com 2 points 2 months ago

Pity for getting shafted by the corpos

[–] General_Effort@lemmy.world 5 points 2 months ago (1 children)

How am I supposed to take seriously an article that misuses a basic term like "scraping"?

[–] hightrix@lemmy.world 3 points 2 months ago

Exactly my thought. LinkedIn didn’t scrape their own data.

[–] plasticmonkey@lemmy.world 3 points 2 months ago

^%##%^ you ^>#%^ ^%##%^ piece of =^%# ^>#%^ ass ^>#%^ piece of =^%# ^%##%^ linkedin piece of =**^%#

[–] Allonzee@lemmy.world -3 points 2 months ago

Much like Facebook, if you participate in LinkedIn, I question your judgement.