this post was submitted on 02 Mar 2024
96 points (85.8% liked)

Fediverse

The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

In light of these challenges, it's time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

Key features of a trust level system include (a rough code sketch follows the list):

  • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
  • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
  • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.
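For a concrete (if simplistic) picture of how these features might fit together, here is a rough Python sketch. The level names, thresholds, and unlocked actions are invented for illustration; they are not Discourse's actual values:

```python
from dataclasses import dataclass

# Hypothetical trust levels and the actions they unlock.
TRUST_LEVELS = {
    0: {"comment", "report"},                                  # new user (sandboxed)
    1: {"comment", "report", "post", "upload_image"},
    2: {"comment", "report", "post", "upload_image", "edit_wiki"},
    3: {"comment", "report", "post", "upload_image", "edit_wiki", "flag_hide"},
}

@dataclass
class User:
    name: str
    days_active: int
    posts_read: int
    flags_upheld: int  # moderation actions confirmed against this user

def trust_level(user: User) -> int:
    """Compute a trust level from simple activity signals (made-up thresholds)."""
    level = 0
    if user.days_active >= 2 and user.posts_read >= 30:
        level = 1
    if user.days_active >= 15 and user.posts_read >= 200:
        level = 2
    if user.days_active >= 50 and user.posts_read >= 500:
        level = 3
    # Sustained bad behaviour caps the level regardless of activity.
    if user.flags_upheld >= 3:
        level = min(level, 1)
    return level

def may(user: User, action: str) -> bool:
    return action in TRUST_LEVELS[trust_level(user)]

alice = User("alice", days_active=20, posts_read=350, flags_upheld=0)
print(may(alice, "edit_wiki"))  # True
print(may(alice, "flag_hide"))  # False
```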

Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.

For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

As we continue to navigate the complexities of online community management, it's clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.

[–] brbposting@sh.itjust.works 27 points 8 months ago

Very interesting idea.

Immediate reaction is that the value of accounts listed for sale increases as their privileges increase. That's not much different from the situation today, where accounts are more valuable the older they are and the longer their post history. I would, however, consider the potential impact of a (highly) privileged account being turned over to a bad actor.

Great thoughts on a problem that deserves a solution!

PS: good news so far:

alt-text: Google search result showing “No results found for "buy lemmy account"”

[–] dfyx@lemmy.helios42.de 26 points 8 months ago (1 children)

This sounds a lot like what Stack Overflow does. And you know what people think about that community. It’s elitist and hostile towards newcomers and anyone who doesn’t know the convoluted rules for how to build their reputation.

If I want to interact with a community to ask a question or comment on something I found interesting but can’t because of rules that were made explicitly to keep newcomers from posting, I won’t stay and gain reputation. I leave.

Nobody says that about Discourse; perhaps they have implemented it better. Discourse is the system I based this idea on.

[–] eatham@aussie.zone 19 points 8 months ago (1 children)

I think if this is implemented, it should be configurable for each community. On !perchance@lemmy.world, for example, people posting will often have just made a Lemmy account so they can ask for help. Not being able to post images would make this harder, so they would probably just make a post in the old subreddit instead.

[–] otter@lemmy.ca 4 points 8 months ago* (last edited 8 months ago)

Same idea for local communities (e.g. a university).

Sometimes the post is urgent, either for the user or as a PSA. I'd want to be able to configure how a system like this would work

[–] taaz@biglemmowski.win 15 points 8 months ago* (last edited 8 months ago) (1 children)

I think your idea is not necessarily wrong, but it would be hard to get right, especially without making entry into the fediverse too painful for new (non-tech) people; I think that is still the number one pain point.

I have been thinking about moderation and spammers on the fediverse lately too; these are some rough ideas I had:

  • Ability to set stricter/different rate limits for new accounts - accounts younger than X can do only A actions per N seconds [1], with a better-explained rate-limit message on the frontend side (see the sketch after this list)
  • Some ability to not "fully" federate with too-fresh instances (as a solution to note [1])
  • Abuse reputation from modlog sharing/distribution (not really federation) - this one is tricky. The theory is that if many moderation actions are taken against you, your "goodwill reputation" lowers (nothing to do with upvotes), and some instances could preemptively ban you or take mod action, either through automated means or (better) by giving mods on other instances some kind of easy access to this information so they can use it in their decisions.
    This has mostly nothing to do with bot spammers, but rather with recurring problem makers, bad-faith users, etc.
    This whole thing would require some kind of trust chain between instances, which is not easy development-wise (the idea could range from built-in algorithms taking in information like instance age, user count, user age and so on, to some kind of manual instance trust grading by admins).
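As a rough illustration of the first bullet, here is a minimal sliding-window rate limiter whose budget depends on account age. The tier boundaries and limits are invented, not anything Lemmy actually implements:

```python
import time
from collections import defaultdict, deque

# Invented tiers: (max account age in days, allowed actions, window in seconds).
# An account matches the first tier whose age bound it falls under.
RATE_TIERS = [
    (1, 5, 600),      # under a day old: 5 actions per 10 minutes
    (7, 20, 600),     # under a week old: 20 actions per 10 minutes
    (None, 60, 600),  # established accounts: 60 actions per 10 minutes
]

_history = defaultdict(deque)  # user id -> timestamps of recent actions

def allow_action(user_id, account_age_days, now=None):
    """Sliding-window rate limit whose budget shrinks for younger accounts."""
    now = time.time() if now is None else now
    for max_age, limit, window in RATE_TIERS:
        if max_age is None or account_age_days < max_age:
            break
    events = _history[user_id]
    while events and now - events[0] > window:
        events.popleft()  # forget actions that fell outside the window
    if len(events) >= limit:
        return False      # over budget; the frontend should explain why
    events.append(now)
    return True
```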

~

All this together, I wouldn't be surprised if, in the future, strata of instances eventually emerge: the free wild west that federates with anyone, and more closed-in bubbles of instances (requiring some kind of entry process for new instances).


[1] This does not solve the other problem of federation currently being block-list based instead of allow-list based (for good reasons).
One could write a few scripts/programs to simulate a federating instance and have tons of bots ready to go. While this exact scenario is probably not common, because most instances will defederate the domain the moment they detect a larger amount of spam, it could still be dangerous for server stability - though I couldn't confirm whether the Lemmy federation API has any kind of limits, and I can't really imagine how that would be implemented if federation traffic spikes a lot.

(Also, in theory, one could have a shit-ton of domains and subdomains prepared and just send tons of spam from those? Unless there are some limits already; afaik the only way to protect against this would be to switch to allow-list based federation.)

Lots of assumptions here, so tell me if I am wrong!
Edit: Also, sorry for kind of piggy-backing on your post, OP; I wanted to finally get these ideas out there.

[–] GBU_28@lemm.ee 2 points 8 months ago

I've wondered about instance bombing before; it seems like a low-success but high-impact vector.

[–] PeriodicallyPedantic@lemmy.ca 11 points 8 months ago

I think it's a good idea, but we have to be careful about malicious instances pumping up their own users' reputations and lowering the reputations of other users.

Ideally these instances would be defederated, but sometimes it's just communities within an instance that are problematic. There need to be solutions for this, as well as a way for the reputation system to retroactively change reputation upon defederation of an instance or banning of a user.
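One way that retroactive adjustment could work, sketched under the assumption that reputation is stored as an event log rather than a single counter (the event shape and all names here are invented):

```python
def recompute_reputation(events: list[dict],
                         defederated_instances: set[str],
                         banned_users: set[str]) -> int:
    """Re-derive a user's reputation from its event log, skipping reputation
    granted by since-defederated instances or since-banned users."""
    return sum(
        e["delta"]
        for e in events
        if e["origin_instance"] not in defederated_instances
        and e["actor"] not in banned_users
    )

history = [
    {"delta": +1, "actor": "alice@a.example", "origin_instance": "a.example"},
    {"delta": +1, "actor": "bot1@spam.example", "origin_instance": "spam.example"},
]
# After defederating spam.example, the boosted point disappears.
print(recompute_reputation(history, {"spam.example"}, set()))  # 1
```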

[–] will_a113@lemmy.ml 10 points 8 months ago

Jimmy Wales (of Wikipedia fame) has been working on something like this for several years. Trust Cafe is supposed to gauge your trustworthiness based on other people who trust you, with a hand-picked team of top users monitoring the whole thing — sort of an enlightened dictatorship model. It’s still a tiny community and much of the tech has to be fleshed out more, but there are definitely people looking into this approach.

[–] ada@lemmy.blahaj.zone 9 points 8 months ago (1 children)

Misskey roles give the ability for admins to implement many of these suggestions. They don't federate, but they do allow for sandboxing of new users and gradual privilege escalation.

They're not perfect, and they're not as granular as they could be, but they're a really great first step for showing the potential of the idea.

[–] The_Lemmington_Post@discuss.online 2 points 8 months ago (1 children)

I'm surprised that only one platform in the fediverse has copied Discourse; they copy Reddit instead, with the biggest joke of a moderation system on the Internet.

[–] deweydecibel@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

In what way? I saw you make the same claim on the other post you made.

Reddit moderation wasn't perfect but I'm still not understanding why you deem this the superior way when it doesn't seem to address the primary issue with reddit moderation: the people who were actually the mods.

I don't see how this system fundamentally fixes the problem of terrible individuals abusing authority. In fact to me it feels like it exacerbates it, by entrenching power users at the expense of everyone else, under the assumption they will somehow be more trustworthy and curate a healthier community just because they're there a lot.

That just sounds like a clubhouse, not an open community. You don't need to alter moderation on the fediverse as a whole to make a clubhouse, as plenty of instances have already shown.

[–] OpenStars@startrek.website 7 points 8 months ago

This is the model that Wikipedia uses and, while there are definitely drawbacks, there are also significant benefits. Email spam filters work this way too.

In one sense, it is a lot like irl democracy - with all the perks and pitfalls therein. For one, it could lead to echo chamber reinforcement, though I don't think that is a huge deal b/c our current moderator setup can do the same, and if anything a trust system may be less susceptible, by virtue of spreading out the number of available "moderators" for each category of action?

The single greatest challenge I can think of is that, like democracy, it is vulnerable to outsider attack: e.g., if someone could fake 100k bots to upvote a particular person's posts, they could artificially elevate them to high status in a fairly short time. Perhaps this could be dealt with by a weighted voting scheme in which not all upvotes are equal - e.g., an upvote from a higher-status account would count significantly more than one from an account that is only a few hours old. Note that ofc this reinforces the echo chamber issue all the more, b/c if you just joined, how could you possibly hope to argue against a couple of people who have been on the platform for many years? The answer, ofc, is that you go elsewhere to start your own place, as is tradition. Which exacerbates still further the issue of finding "good" places, but that is somewhat a separate matter, needing a separate solution (or maybe that is too naive of me to say?).
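A toy sketch of that weighted voting idea; the constants are arbitrary, only the shape of it matters:

```python
import math

def vote_weight(account_age_days: float, trust_level: int) -> float:
    """Illustrative weighting: a brand-new account's vote counts for almost
    nothing, while an old, high-trust account's vote counts for a few units."""
    age_factor = 1 - math.exp(-account_age_days / 30)  # ~0 at hours old, -> 1 over months
    trust_factor = 1 + 0.5 * trust_level               # higher trust, bigger say
    return age_factor * trust_factor

def score(votes: list[tuple[float, int, int]]) -> float:
    """Sum weighted votes; each vote is (age_days, trust_level, +1 or -1)."""
    return sum(direction * vote_weight(age, trust) for age, trust, direction in votes)

# 100 fresh sockpuppet upvotes barely move the score...
bots = [(0.1, 0, +1)] * 100
# ...while a handful of established accounts carry real weight.
regulars = [(400, 2, +1)] * 5
print(round(score(bots), 2), round(score(regulars), 2))  # ~0.33 vs 10.0
```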

Btw the word "politics" essentially means "how we agree", and just as irl we are all going to have different ideas about how to achieve our enormous variety of goals, so too would that affect our preferences for social media. And at least at first, I would expect that many people may hate it, so I would hope that this would be made an opt-in feature by default.

Also, and for some reason I expect this next point to be quite unpopular, especially among some current moderators: we already have a system for distinguishing b/t good and bad content, or at least popular and unpopular - it is called "voting". I have seen some fairly innocuous replies get removed, citing "trolling" or some such, when someone dared to, get this, innocently ask a question, or state a known fact out of context (I know sea-lioning exists too; I don't mean that). Irl someone might patiently explain why the other person was wrong or insensitive, or just ignore it and move on, but a mod feels a burden to clean up their safe space. So now I wonder: will this effect be exaggerated even further, and worse, become capricious as a result? Personally I have had several posts that got perhaps 5 downvotes in the first few minutes, then 10-100x as many upvotes over the next few hours. So are the people looking at something RIGHT NOW more important than the 100 people who would look at it an hour later? Even trickier, what about the order in which the votes arrive: would a post survive if the up- and downvotes came in evenly, or, like a gambler on a losing streak, would it get removed the moment it takes too many losses in a row, preventing it from ever reaching its true weight? If so, people will aim to always talk in a "safe" manner, b/c nothing else would ever be allowed to be discussed, on the off-chance that someone (or 5 someones) could be offended by it - even if a hundred more studious people would have loved to see it, had they been offered the chance; being busier irl, they were not, given the "winner take all" nature of social media posts, where a post is either removed or not, with no middle ground... so far.

So to summarize that last point: mods can be fairly untrustworthy (I say this as a former one myself :-P), but so can regular people, and since HARD removal takes away people's option to make up their own minds, why not leave most posts up and let voting do its work? Perhaps a label could be added, which users could select in their settings to hide "potentially controversial" material.

These are difficult and weighty matters to try to solve.

[–] soggy_kitty@sopuli.xyz 7 points 8 months ago (4 children)

This should be an overhaul of the moderation system in general. I find some communities have mods who ban based on disagreement alone, regardless of the stated rules.

I would like to see some kind of community review system for bans and their reasoning, where the outcome can result in punishment/demotion of a moderator who abused their power.

Yeah an appeal process to mitigate human bias would be nice.

[–] Blaze@reddthat.com 2 points 8 months ago

I've been through that indeed, not the most pleasant experience

[–] snooggums@midwest.social 7 points 8 months ago (2 children)

There would definitely need to be a way to handle people switching instances, so the system doesn't punish leaving an instance that goes downhill or one that goes belly up. In the former case there is a chance to transfer your identity; in the latter there wouldn't be.

[–] lemmyingly@lemm.ee 2 points 8 months ago

You will then have instances that abuse the system by allowing their users to quickly rank high, just so they can infiltrate other instances with high-ranking accounts.

[–] Blaze@reddthat.com 1 points 8 months ago (1 children)

Doesn't 0.19 already allow one-click migration?

[–] snooggums@midwest.social 3 points 8 months ago (1 children)

How would someone migrate from a dead instance?

[–] Blaze@reddthat.com 4 points 8 months ago

Good point. Reminds me to keep a backup of my settings somewhere

[–] Docus@lemmy.world 6 points 8 months ago (1 children)

Interesting idea. But after thinking about it for a few minutes, I don't think federated reputation would work for moderation privileges. Instances have their own rules, and I would not trust a hexbear mod to behave in line with lemmy.world rules and values. The same is true for communities, really.

[–] technomad@slrpnk.net 1 points 8 months ago (1 children)

Would it still work for other privileges though?

[–] Docus@lemmy.world 2 points 8 months ago

Probably. On Reddit, some of it can be managed at the community (subreddit) level by bots automatically deleting posts or comments from recently joined people. Maybe a tiered system of mod privileges could work, where a junior mod can delete spam/offensive posts but not ban people. Mind you, banning people is not really effective in a fediverse where you can easily create new user accounts on another instance.

[–] Crackhappy@lemmy.world 5 points 8 months ago

I am likely one of the moderators you don't like, as I choose not to take an active hand in moderation and instead rely on community action. Whether that's through downvotes or reports isn't important to me. I don't actively spend time hunting for rule violations. I'm not an activist mod, just one who wants to assist my chosen communities.

[–] Iceblade02@lemmy.world 4 points 8 months ago (1 children)

On a basic level, the idea of certain sandboxing - i.e. image and link posting restrictions along with rate limits for new accounts and new instances - is probably a good idea.

However, I do not think "super users" are a particularly good idea. I see it as preferable that instances and communities handle their own moderation with the help of user reports - and some simple degree of automation.

An engaged user can already contribute to their community by joining the moderation team, and the mod view has made it significantly easier to have an overview of many smaller communities.

On a basic level, the idea of certain sandboxing - i.e. image and link posting restrictions along with rate limits for new accounts and new instances - is probably a good idea.

If there were any limits for new accounts, I'd prefer if the first level was pretty easy to achieve; otherwise, this is pretty much the same as Reddit, where you need to farm karma in order to participate in the subreddits you like.

However, I do not think "super users" are a particularly good idea. I see it as preferable that instances and communities handle their own moderation with the help of user reports - and some simple degree of automation.

I don't see anything wrong with users having privileges; what I find concerning is moderators who abuse their power. There should be an appeal process in place to address human bias and penalize moderators who misuse their authority. Removing their privileges could help mitigate issues related to potential troll moderators. Having trust levels can facilitate this process; otherwise, the burden of appeals would always fall on the admin. In my opinion, the admin should not have to moderate if they are unwilling; their role should primarily involve adjusting user trust levels to shape the platform according to their vision.

An engaged user can already contribute to their community by joining the moderation team, and the mod view has made it significantly easier to have an overview of many smaller communities.

Even with the ability to enlarge moderation teams, Reddit relies on automod bots too frequently, and we are beginning to see that on Lemmy too. I never see that on Discourse.

[–] 1984@lemmy.today 4 points 8 months ago (1 children)

So we are turning into full reddit now....

I very much doubt this kind of system would be implemented for Lemmy.

[–] SomeGuy69@lemmy.world 4 points 8 months ago* (last edited 8 months ago) (1 children)

I'd advocate training an AI on removed posts and using that as a moderator tool. The moderator starts by approving false positives until the AI becomes more and more precise. You could even gray out comments the AI sees as potentially harmful and have users vote on whether they become viewable, while almost-certainly-harmful comments are collapsed and definitely-harmful ones are blocked until approved. That way moderation is more gradual and gives some power back to the users. Not sure how power- and CPU-intensive this would be; maybe it could be shared between instances to balance the load.
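The tiered outcomes described above could look something like this sketch, assuming some classifier (out of scope here; in this proposal it would be trained on posts moderators previously removed) already returns a harm probability. The thresholds are invented:

```python
from enum import Enum

class Disposition(Enum):
    VISIBLE = "visible"
    GRAYED = "grayed out; users can vote it back to visible"
    COLLAPSED = "collapsed by default"
    HELD = "hidden until a moderator approves"

def triage(harm_probability: float) -> Disposition:
    """Map a classifier's harm score to a graduated action. Moderator
    approvals of false positives would feed back into training."""
    if harm_probability >= 0.95:
        return Disposition.HELD
    if harm_probability >= 0.80:
        return Disposition.COLLAPSED
    if harm_probability >= 0.50:
        return Disposition.GRAYED
    return Disposition.VISIBLE

print(triage(0.62))  # Disposition.GRAYED
```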

A motivational approach seems harmful to the Fediverse, as it can be gamed and faked by bad actors, and Lemmy instances are probably already larger than most Discords. Discord is also pretty pointless with its unlock process, because 99% of the time I've seen it be more of an obfuscation of where to find the "unlock" emoji to finally be able to chat. There's no voice chat here, so all you could limit is functionality on a feature-light platform. What are you going to do, remove the ability to post from new users? That already sucked on Reddit and was a bandaid at best. I remember grinding AskReddit with every new account to get over that pointless karma threshold just to participate in my old account's communities.

I think in a few years using an AI for this kind of task will be much more efficient and simpler to set up. Right now I think it would fail too much.

[–] chicken@lemmy.dbzer0.com 4 points 8 months ago (1 children)

I had an idea for a system sort of like this to reduce moderator burden (sketched below). Each user would have a score based on their volume and ratio of correct to incorrect reports of rule-breaking comments/posts ("correct" meaning the report ultimately resulted in a moderator action). Content is automatically removed if the cumulative score of the users who reported it is high enough. Moderators can manually adjust user scores if needed and undo community mod actions. More complex rules for how scores are determined could be added as needed.

To address the possibility that such a system would be abused, I think the best solution would be secrecy. Just don't let anyone know that this is how it works, or that there is a score attached to their account that could be gamed. Pretend it's a new kind of automod or AI bot or something, and have a short time delay between the report that pushes it over the edge and the actual removal.
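A minimal sketch of those scoring mechanics; the threshold and adjustment values are invented, and it assumes some separate signal tells us when a moderator rules on reported content:

```python
from collections import defaultdict

REMOVE_THRESHOLD = 3.0                       # e.g. six default-weight reporters

reporter_score = defaultdict(lambda: 0.5)    # everyone starts with a little weight
pending_reports = defaultdict(set)           # content_id -> set of reporters

def report(content_id: str, user_id: str) -> bool:
    """Register a report; return True if the content should be auto-removed."""
    pending_reports[content_id].add(user_id)
    total = sum(reporter_score[u] for u in pending_reports[content_id])
    return total >= REMOVE_THRESHOLD

def resolve(content_id: str, mod_upheld: bool) -> None:
    """When a mod rules on the content, adjust every reporter's score."""
    for u in pending_reports.pop(content_id, set()):
        if mod_upheld:
            reporter_score[u] = min(reporter_score[u] + 0.25, 3.0)  # capped growth
        else:
            reporter_score[u] = max(reporter_score[u] - 0.5, 0.0)   # wrong reports cost more
```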

[–] wahming@monyet.cc 9 points 8 months ago (2 children)

Functionality by obscurity does not work for a platform as open source and federated as lemmy

[–] chicken@lemmy.dbzer0.com 2 points 8 months ago (1 children)

I guess that's somewhat true if you are sharing an implementation around, but even keeping the feature from being widely known could make a difference. And even if it were known, I think the scoring could work alright on its own. A malicious removal could be quickly reversed manually and all the reporters' scores zeroed.

[–] wahming@monyet.cc 3 points 8 months ago (1 children)

Oh I'm not saying the feature couldn't work, and I like the idea. I'm just saying it wouldn't be possible to keep it a secret, for obvious reasons.

[–] fruitycoder@sh.itjust.works 2 points 8 months ago (1 children)

You could implement it and just tell client makers it's not an intended data point to display, or intentionally keep it less human-readable (count in hex).

[–] wahming@monyet.cc 3 points 8 months ago (1 children)

I don't get it. It's open source. Anybody can just look at the code. Unless you're talking about a closed-source binary blob, in which case chances are nobody will ever adopt it.

[–] fruitycoder@sh.itjust.works 1 points 8 months ago

It's just to stop honest people. Not everything has to be illegal to be limited.

[–] threelonmusketeers@sh.itjust.works 1 points 8 months ago* (last edited 8 months ago) (1 children)

Just don't let anyone know that this is how it works, or that there is a score attached to their account that could be gamed.

does not work for a platform as open source and federated as lemmy

Even if the system and scores were fully open and public, would there even be a way to game such a system? How would that be done?

[–] GBU_28@lemm.ee 2 points 8 months ago (1 children)

Make a minimum viable instance to get federated.

Be normal for a while.

Boost bot users such that their scores are positive.

Use them for whatever mayhem you like

Could there be a way to protect against this? What if the scores were instance specific? If a user's score is super high on one (or a few) instances and super low on the rest, that could suggest malicious activity.
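One hedged way to act on that signal: flag users whose per-instance scores disagree wildly, e.g. by z-score spread. The threshold is arbitrary and a real system would need far more care, but it shows the idea:

```python
from statistics import mean, pstdev

def looks_gamed(scores_by_instance: dict[str, float],
                spread_threshold: float = 1.5) -> bool:
    """Flag a user whose reputation is wildly uneven across instances.

    A legitimate user's scores should roughly agree everywhere they post;
    a score boosted only on one colluding instance stands out as an outlier.
    """
    values = list(scores_by_instance.values())
    if len(values) < 3:
        return False  # not enough independent signals to judge
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return False
    return any(abs(v - mu) / sigma > spread_threshold for v in values)

print(looks_gamed({"a.example": 9.5, "b.example": 0.2,
                   "c.example": 0.3, "d.example": 0.1}))  # True
```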

[–] haui_lemmy@lemmy.giftedmc.com 3 points 8 months ago

I really like the idea! One thing that irks me a bit in general, though, is that we don't talk about moderators being included in donations. Like barkeepers, moderators should get a share of the donations. It would be especially helpful if the donation share stayed the same (x%) across the team: if two people can share the load, they don't need another 5 who are never there but get a share of the donations as well.

Just an idea; poke holes in it if you like, but stay constructive please.

[–] brbposting@sh.itjust.works 2 points 8 months ago (2 children)

Got me thinking about the prompts that YouTube, TikTok, & Fortnite randomly give users.

Could help us understand how users feel about accounts overall, or about reported comments/posts. Obviously we can't do surveys for every comment, though; they'd have to be somewhat rare.

alt-text: all 3 screenshots are similar; final is a prompt showing:

We want your feedback!
Overall, do you feel the addition of the Storm Flip to the game is:
Select a Rating
Very Negative 1 through 5 Very Positive

[–] snooggums@midwest.social 2 points 8 months ago (1 children)

Getting quizzed about unimportant stuff is the most annoying trend. Anything like that needs an "I don't want to give feedback" option at a minimum and an easy way to opt out of future feedback requests.

Yes I see the cancel button, but it isn't the same as "I don't want to give feedback" or "This is not important enough for me to care".

[–] brbposting@sh.itjust.works 2 points 8 months ago (1 children)

Good point.

Can imagine the surveys being opt-in. Folks who do opt in will know they’re doing a service to the community.

BTW: when you see a survey from a big tech company, you can expect it’s used to influence your algorithm - I wonder if it has more impact on your experience than others’

[–] snooggums@midwest.social 3 points 8 months ago

The constant nagging about donations is another. My doctor's office being part of a health care organization that wants a survey after every single visit is another. It's just constant nagging every time I spend money, and I know they don't actually use the feedback for anything except punishing employees. There is no point in giving feedback when it won't be used to actually improve anything.

Now, there is one organization I work with that does have 3rd-party surveys which I know for a fact actually uses the feedback correctly. I jump at the chance to do those, and give feedback honestly and with comments!

I already give feedback here through upvotes, downvotes, and comments. Something about the UI might be something I would give feedback on, but based on past experience I would assume feedback on things like moderation will either be ignored if it doesn't reinforce existing practices, or used negatively.

[–] soggy_kitty@sopuli.xyz 1 points 8 months ago

That's basically the upvote/downvote button.

[–] chiisana@lemmy.chiisana.net 2 points 8 months ago (2 children)

Good luck. The Lemmy devs took total vote counts off profiles so people cannot see, in aggregate, how communities have viewed individual users - some lofty goal of staying neutral. You'll have to bend their arms backward to get them to reintroduce any sort of aggregate score/trust.

I don't have any hope left for Lemmy in this regard, but hopefully, some other Fediverse projects, other than Misskey, will improve the moderation system. Reddit-style moderation is one of the biggest jokes on the Internet.

[–] UndercoverUlrikHD@programming.dev 1 points 8 months ago (1 children)

Lemmy.world might migrate from Lemmy to Sublinks once it's ready, from what I've heard. There are alternatives to Lemmy.
