This post was submitted on 30 Nov 2024
350 points (97.8% liked)

Technology


Danish researchers created a private self-harm network on the social media platform, including fake profiles of people as young as 13 years old, in which they shared 85 pieces of self-harm-related content gradually increasing in severity, including blood, razor blades and encouragement of self-harm.

The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now use artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.

Rather than attempting to shut down the self-harm network, Instagram’s algorithm was actively helping it to expand. The research suggested that 13-year-olds became friends with all members of the self-harm group after they were connected with one of its members.

Comments

top 19 comments
[–] Suspiciousbrowsing@kbin.melroy.org 50 points 3 days ago (4 children)

How on earth did that pass the ethics application?

[–] ilmagico@lemmy.world 5 points 2 days ago

The group was private and they created fake profiles ... did I miss something?

[–] rowinxavier@lemmy.world 20 points 3 days ago (1 children)

The claim by Meta that they block this type of material, combined with the existing spread of such material, means that adding a temporary source does not carry the level of harm that might be expected. Testing whether Meta does in fact remove this type of content, and finding it failing, may reasonably be expected to lead to changes that would reduce the amount of this material. The net result is a very small, essentially marginal increase in the amount of self-harm material and a fuller understanding of the efficacy of Meta's filtering systems. If I were on the ethics board I would approve.

[–] Starbuncle@lemmy.ca 8 points 3 days ago

Plus, if it did work the way it was supposed to, there would be zero harm done.

[–] rimu@piefed.social 24 points 3 days ago (1 children)

They probably had no idea it would be this bad.

[–] Kolanaki@yiffit.net 16 points 3 days ago (1 children)

Thought: "They probably do something, but I doubt the claims of 99%."

Reality: "They aren't doing shit!"

[–] asbestos@lemmy.world 6 points 3 days ago* (last edited 3 days ago)

Hey, the algorithm hides the image if it contains words like “death”, it’s all good
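
A tongue-in-cheek illustration of the kind of naive keyword filter being mocked here; this is a purely hypothetical sketch (the blocklist and function names are made up) and does not describe how Meta's systems actually work:

  BLOCKED_WORDS = {"death"}  # hypothetical blocklist, not Meta's

  def naive_filter(caption: str) -> bool:
      # Hide the post only if its caption contains a blocked word.
      return any(word in caption.lower() for word in BLOCKED_WORDS)

  # An image with no caption, or any euphemism, sails straight through:
  print(naive_filter("check out my new post"))  # False -> not hidden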

[–] OutlierBlue@lemmy.ca 8 points 3 days ago

Maybe the ethics board uses AI, claiming to remove about 99% of harmful studies before they are approved.

[–] half_fiction@lemmy.dbzer0.com 21 points 3 days ago* (last edited 3 days ago) (1 children)

This is a complicated topic for me. I'm 35 so my experience is obviously different than today, but I self-harmed from age 12 into my 20s. Finding community and understanding in self-harm & mental illness-focused communities was transformative for me, especially in my younger teens. Many days/months/years this community felt like the only reason I was still hanging on.

Obviously I am not in favor of the "encouragement" of self-harm, but I also wonder how much nuance is applied when categorizing content as such. For example, is someone who posts about how badly they want to self-harm "encouraging" this? Or are they just seeking support? Idk. I have no answers. I just think about how much bleaker my teens would have felt had I not found my pockets of community on the early internet. On the other hand, sometimes I do wonder if we subconsciously egged each other on. Perhaps the trajectory of my mental health journey would have been different had I not found them. That's not something I can ever be sure about, but I think given my home life and all the things I was going through already, if anything, my mental illness might have just manifested itself in a different way, like through substance abuse issues or an eating disorder or something. (And to be clear, I was hurting myself before I found the community, so it might have just been business as usual.) Like I said, I don't have any answers, it just feels more nuanced to me, as someone who has lived some version of this.

[–] Scolding7300@lemmy.world 7 points 3 days ago

Publication: https://drive.usercontent.google.com/download?id=1MZrFRii_nJYdW8RulORB9JveLkCRbncX&export=download&authuser=0

Couldn't get a translation in place so I asked an AI what the researchers' definition of self-harm is. According to the report, the researchers define self-harm content as material that shows, encourages and/or romanticizes self-harm. This includes content that:

  • Expresses a desire for self-harm
  • Shares advice on self-harming behavior
  • Shows images of increasingly serious self-harm
  • Encourages others to engage in similar self-harming behavior

The self-harm content was categorized into 4 levels of increasing severity:

  1. Non-explicit image with text explicitly mentioning self-harm
  2. Depicting self-harm without blood
  3. Referring to self-harm in both text and image, without blood
  4. Illustrating severe self-harm involving blood in both text and image/video

So their definition covers a spectrum from text references to self-harm all the way to explicit visual depictions of serious self-harm acts involving blood. The categories represent an increasing degree of overtness in the self-harm content.
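
To make the four levels easier to compare at a glance, here is a minimal, purely illustrative sketch in Python of how they might be encoded; the enum name, level names and mapping logic are assumptions for illustration, not taken from the report:

  from enum import IntEnum
  from typing import Optional

  class SelfHarmSeverity(IntEnum):
      # Hypothetical level names mirroring the report's four categories.
      NON_EXPLICIT_IMAGE_TEXT_MENTION = 1  # non-explicit image, text explicitly mentions self-harm
      DEPICTS_SELF_HARM_NO_BLOOD = 2       # image depicts self-harm, without blood
      TEXT_AND_IMAGE_NO_BLOOD = 3          # self-harm in both text and image, without blood
      SEVERE_WITH_BLOOD = 4                # severe self-harm involving blood in text and image/video

  def categorize(mentions_self_harm_in_text: bool,
                 depicts_self_harm_in_image: bool,
                 shows_blood: bool) -> Optional[SelfHarmSeverity]:
      # Map a post's (assumed) annotations onto the four severity levels.
      if depicts_self_harm_in_image and mentions_self_harm_in_text and shows_blood:
          return SelfHarmSeverity.SEVERE_WITH_BLOOD
      if depicts_self_harm_in_image and mentions_self_harm_in_text:
          return SelfHarmSeverity.TEXT_AND_IMAGE_NO_BLOOD
      if depicts_self_harm_in_image:
          return SelfHarmSeverity.DEPICTS_SELF_HARM_NO_BLOOD
      if mentions_self_harm_in_text:
          return SelfHarmSeverity.NON_EXPLICIT_IMAGE_TEXT_MENTION
      return None  # not self-harm content under this definition

  # Example: image depicting self-harm, text mentioning it, no blood -> level 3
  print(categorize(mentions_self_harm_in_text=True,
                   depicts_self_harm_in_image=True,
                   shows_blood=False))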

[–] FlashMobOfOne@lemmy.world 24 points 3 days ago

At least one country on earth is starting to get serious about regulating social media. Until there are real financial consequences for this, there won't be any meaningful change.

[–] hash@slrpnk.net 9 points 3 days ago

Meta will play damage control and introduce a feature which might help a little for a few weeks. There are other options on the table internally which might actually have a meaningful effect, but they would significantly pull down engagement so...

[–] empireOfLove2@lemmy.dbzer0.com 8 points 3 days ago* (last edited 3 days ago)

I'm pretty sure anyone who has scrolled reels for more than 5 minutes could have told you the same thing. That place is the wild west.

[–] GhiLA@sh.itjust.works 8 points 3 days ago (1 children)

It won't end and will continue until society collapses because we never learn anything.

Your best recourse is to do everything in your power as a parent to prevent your child from using this garbage considering it's here to stay.

[–] hedgehogging_the_bed@lemmy.world 5 points 2 days ago (1 children)

Bullshit. Our best recourse as parents is to talk to our children every day to ensure their life has people who will listen and understand them as a constant presence, instead of random strangers on the Internet. Just exposure to this shit isn't the toxic part. It's the constant exposure without context and support of caring adults to help kids contextualize the information. Just like sex, alcohol, and every other complex "adult" thing.

[–] GhiLA@sh.itjust.works 0 points 2 days ago* (last edited 2 days ago)

Bullshit. Our best course of action is to ditch technology entirely, and live as farmers in a communal society that seeks a symbiotic exposure to nature and a closer attachment to family and neighbors. We'd all have better sex, better alcohol and more artisanal adult things.

crosses arms

The one-upping crap is cringe and the most Lemmy thing on earth and I wish we'd stop it.

You can expand on a conversation without drawing a sword on the last guy.

[–] homesweethomeMrL@lemmy.world 7 points 3 days ago

In other news, science has indications the sun may be hot as a muthafucka.

brick by brick

[–] jerry@my-place.social 2 points 3 days ago

@rimu This is so disturbing and wrong