Danish researchers created a private self-harm network on the social media platform Instagram, including fake profiles of people as young as 13, in which they shared 85 pieces of self-harm-related content gradually increasing in severity, including blood, razor blades and encouragement of self-harm.
The aim of the study was to test Meta's claim that it had significantly improved its processes for removing harmful content, which it says now use artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.
But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.
Rather than attempting to shut down the self-harm network, Instagram's algorithm was actively helping it to expand, the study found. The research suggested that 13-year-olds became friends with all members of the self-harm group after they were connected with one of its members.
Comments
Bullshit. Our best recourse as parents is to talk to our children every day to ensure their lives include people who will listen and understand them as a constant presence, instead of random strangers on the internet. Just exposure to this shit isn't the toxic part. It's the constant exposure without the context and support of caring adults to help kids contextualize the information. Just like sex, alcohol, and every other complex "adult" thing.
Bullshit. Our best course of action is to ditch technology entirely, and live as farmers in a communal society that seeks a symbiotic exposure to nature and a closer attachment to family and neighbors. We'd all have better sex, better alcohol and more artisanal adult things.
crosses arms
The one-upping crap is cringe and the most Lemmy thing on earth and I wish we'd stop it.
You can expand on a conversation without drawing a sword on the last guy.