this post was submitted on 13 Aug 2025
444 points (95.3% liked)

[–] Perspectivist@feddit.uk 10 points 3 days ago (3 children)

Of course not. The issue with social media is the people. Algorithms just bring out the worst in us, but they didn't make us like this; we already were.

[–] grrgyle@slrpnk.net 5 points 3 days ago (1 children)

From my point of view something that brings out the worst in us sounds like a really big part of the issue.

We've always been shaped by our situations, so why not create better situations rather than lamenting that we don't have the grit to break through whatever toxic society we find ourselves grafted onto?

Sorry, I know I'm putting a lot on your comment that you didn't mean, but I see this kind of unintentional crypto-doomerism a lot. I think it holds people to an unhealthy standard.

[–] Perspectivist@feddit.uk 2 points 3 days ago

It is a big part of the issue, but as Lemmy clearly demonstrates, that issue doesn’t go away even when you remove the algorithm entirely.

I see it a lot like driving cars - no matter how much better and safer we make them, accidents will still happen as long as there’s an ape behind the wheel, and probably even after that. That’s not to say things can’t be improved - they definitely can - but I don’t think it can ever be “fixed,” because the problem isn’t the platform - it’s us. You can't fix humans by tweaking the code on social media.

[–] avidamoeba@lemmy.ca 14 points 4 days ago* (last edited 4 days ago)

Uhm, I seem to recall that social media was actually pretty good in the late 2000s and early 2010s. The authors used AI models as the users. Could it be that their models have internalized the effects of the algorithms that fundamentally changed social media over the past decade, and are now reproducing those effects in their experiments? Sounds like they're treating models as if they're humans, and they are not, especially when it comes to changing behaviour based on changes in the environment, which is exactly what they were testing by trying different algorithms and mitigation strategies.

[–] Zak@lemmy.world 12 points 3 days ago

The study is based on having LLMs decide to amplify one of the top ten posts on their timeline or share a news headline. LLMs aren't people, and the authors have not convinced me that they will behave like people in this context.

The behavioral options are restricted to posting news headlines, reposting news headlines, or being passive. There's no option to create original content, and no interventions centered on discouraging reposting. Facebook has experimented with limits to reposting and found such limits discouraged the spread of divisive content and misinformation.
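For concreteness, a minimal sketch of that kind of reshare limit: cap how many hops a repost chain can travel before sharing is disabled. The cap value and names here are illustrative assumptions, not Facebook's actual design.

```python
# Hypothetical reshare cap: a post starts at depth 0, and every repost
# of a repost increments the chain depth until the cap cuts it off.
MAX_RESHARE_DEPTH = 2  # e.g. friends-of-friends, then the chain stops

def can_reshare(chain_depth: int) -> bool:
    """Allow a repost only while the chain is shallower than the cap."""
    return chain_depth < MAX_RESHARE_DEPTH

for depth in range(4):
    print(depth, can_reshare(depth))  # True, True, False, False
```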

I mostly use social media to share pictures of birds. This contributes to some of the problems the source article discusses. It causes fragmentation; people who don't like bird photos won't follow me. It leads to disparity of influence; I think I have more followers than the average Mastodon account. I sometimes even amplify conflict.

[–] tacosanonymous@mander.xyz 9 points 3 days ago

Neat.

Release the Epstein files, then burn it all down.

Social media was a mistake, tbh

[–] General_Effort@lemmy.world 9 points 4 days ago (1 children)

The original source is here:

https://arxiv.org/abs/2508.03385

Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions? We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms. We create a minimal platform where agents can post, repost, and follow others. We find that the resulting following-networks reproduce three well-documented dysfunctions: (1) partisan echo chambers; (2) concentrated influence among a small elite; and (3) the amplification of polarized voices – creating a “social media prism” that distorts political discourse. We test six proposed interventions, from chronological feeds to bridging algorithms, finding only modest improvements – and in some cases, worsened outcomes. These results suggest that core dysfunctions may be rooted in the feedback between reactive engagement and network growth, raising the possibility that meaningful reform will require rethinking the foundational dynamics of platform architecture.
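For readers who want a feel for what "embedding LLMs within Agent-Based Models" means in practice, here is a rough, self-contained Python sketch of that loop: agents that can post, repost, and follow, with the feed ranked by engagement. The real study queries an LLM per decision; the `decide` function below is a random stand-in so the skeleton runs, and all names and the toy decision rule are assumptions rather than the paper's code.

```python
from __future__ import annotations
import random
from dataclasses import dataclass, field

@dataclass
class Post:
    author: int
    text: str
    reposts: int = 0

@dataclass
class Agent:
    uid: int
    persona: str  # in the paper, a prompt describing the agent's politics and interests
    following: set[int] = field(default_factory=set)

def decide(agent: Agent, feed: list[Post]) -> tuple[str, Post | None]:
    """Stand-in for the per-agent LLM call: pick an action at random."""
    if feed and random.random() < 0.5:
        target = random.choice(feed[:10])  # agents react to the top of the feed
        return random.choice(["repost", "follow"]), target
    return ("post", None) if random.random() < 0.5 else ("pass", None)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Engagement ranking: most-reposted first (the baseline being tested)."""
    return sorted(posts, key=lambda p: p.reposts, reverse=True)

def step(agents: list[Agent], posts: list[Post]) -> None:
    for agent in agents:
        action, target = decide(agent, rank_feed(posts))
        if action == "post":
            posts.append(Post(agent.uid, f"headline shared by agent {agent.uid}"))
        elif action == "repost" and target:
            target.reposts += 1
        elif action == "follow" and target:
            agent.following.add(target.author)  # reactive engagement grows the network

agents = [Agent(i, f"persona {i}") for i in range(20)]
posts: list[Post] = []
for _ in range(50):
    step(agents, posts)
print(len(posts), "posts,", sum(len(a.following) for a in agents), "follow edges")
```

Even in this toy form you can see the feedback loop the abstract describes: whatever gets reposted rises in the feed, which earns it more reposts and more followers.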

[–] zeropointone@lemmy.world 8 points 4 days ago (3 children)

Fixing social media is like fixing guns so they can't hurt or kill anyone anymore. Both have been designed for a very particular purpose.

[–] paraphrand@lemmy.world 6 points 4 days ago (6 children)

Lemmy is social media. So is Mastodon. So is PeerTube. And everything else in the fediverse.

So I wouldn’t compare social media to a gun, across the board.

[–] roguetrick@lemmy.world 6 points 4 days ago* (last edited 3 days ago)

Preprint journalism fucking bugs me because the journalists themselves can't actually judge whether anything is worth discussing, so they just look for clickbait shit.

This methodology for discovering what interventions do in human environments seems particularly deranged to me, though:

We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms.

LLM agents trained on social media dysfunction recreate it unfailingly. No shit. I understand they gave them personas to adopt as prompts, but prompts cannot and do not override training data, as we've seen over and over. LLMs fundamentally cannot maintain an identity from a prompt. They are context engines.

Particularly concerning are the silo claims. LLMs riffing on a theme over extended interactions, because the tokens keep coming up that way, is expected behavior. LLMs are fundamentally incurious and even more prone than humans to locking into one line of text, since the lengthening conversation reinforces it.

Validating what the authors describe as a novel approach might be more warranted than drawing conclusions from it.

[–] kibiz0r@midwest.social 5 points 3 days ago (1 children)

Because how to use it is baked into what it is. Like many big tech products, it’s not just a tool but also a philosophy. To use it is also to see the world through its (digital) eyes.

[–] AceFuzzLord@lemmy.zip 3 points 3 days ago

I mean, I feel like just shutting it down would solve at least some problems. Shuttering it all, video sharing platforms included.

Not a solution most people would agree on, but it's an idea.

[–] Feyd@programming.dev 5 points 4 days ago

Let's just pretend nothing after MySpace ever happened

[–] General_Effort@lemmy.world 5 points 4 days ago (1 children)

I'm not surprised. I am surprised that the researchers were surprised, though.

Bridging algorithms seem promising.

The results were far from encouraging. Only some interventions showed modest improvements. None were able to fully disrupt the fundamental mechanisms producing the dysfunctional effects. In fact, some interventions actually made the problems worse. For example, chronological ordering had the strongest effect on reducing attention inequality, but there was a tradeoff: It also intensified the amplification of extreme content. Bridging algorithms significantly weakened the link between partisanship and engagement and modestly improved viewpoint diversity, but it also increased attention inequality. Boosting viewpoint diversity had no significant impact at all.
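For intuition, here is a hedged sketch of the three orderings that passage compares. The scoring rules are illustrative assumptions; in particular, the paper's bridging algorithm is more involved than this min-of-both-sides heuristic.

```python
from dataclasses import dataclass

@dataclass
class Post:
    ts: float         # creation time
    engagement: int   # total reposts/likes
    left_likes: int   # approval from left-leaning agents
    right_likes: int  # approval from right-leaning agents

def engagement_ranked(posts: list[Post]) -> list[Post]:
    """Baseline: most-engaged first, which rewards reactive, polarizing posts."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def chronological(posts: list[Post]) -> list[Post]:
    """Newest first: flattens attention inequality but can surface extremes."""
    return sorted(posts, key=lambda p: p.ts, reverse=True)

def bridging(posts: list[Post]) -> list[Post]:
    """Favor posts approved on both sides of the divide (min of the two counts)."""
    return sorted(posts, key=lambda p: min(p.left_likes, p.right_likes), reverse=True)
```

The tradeoffs in the quote fall out of the scoring keys: chronological ignores engagement entirely (so attention spreads out), while bridging still concentrates attention on whichever few posts manage cross-partisan approval.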

[–] Cocopanda@lemmy.world 3 points 3 days ago

Getting banned from Facebook, after a decade of clapping back against racists, has been the best thing in my life. So glad to be out of there. Just wish I could have saved my pics first.
