this post was submitted on 10 Nov 2024
83 points (97.7% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


If you think this post would be better suited in a different community, please let me know.


Topics could include (this list is not intended to be exhaustive; if you think something is relevant, please don't hesitate to share it):

  • Moderation
  • Handling of illegal content
  • Server structure (system requirements, configs, layouts, etc.)
  • Community transparency/communication
  • Server maintenance (updates, scaling, etc.)

Cross-posts

  1. https://sh.itjust.works/post/27913098
top 50 comments
[–] walden@sub.wetshaving.social 39 points 1 week ago (7 children)

We require applications, and most applications we get are extremely low effort and we don't approve them. If you have open registrations you'll be doing a lot of moderation for spam.

Run the software that scans images for CSAM. It's not perfect but it's something. If your instance freely hosts whatever without any oversight, word will spread and all of a sudden you're hosting all sorts of bad stuff. It's not technically illegal if you don't know about it, but I personally don't want anything to do with that.

[–] Dave@lemmy.nz 15 points 1 week ago (1 children)

I will add that if you have open registrations you will be a target for spam and trolls, and if you don't take quick action then some other instances are likely to defederate from your instance.

This depends on the instance: some will have a low tolerance and defederate pretty quickly, some will defederate temporarily until the spammers or trolls move to a different instance, and some won't care. But you likely won't know it's happened unless you notice you aren't getting content from that instance anymore.

One other thing is that if you're going to run an instance and aren't already on Matrix, make an account. It's how instance admins tend to keep in contact with each other.

[–] Kalcifer@sh.itjust.works 9 points 1 week ago* (last edited 1 week ago)

[...] if you’re going to run an instance and aren’t already on Matrix, make an account. It’s how instance admins tend to keep in contact with each other.

This is good advice.

[–] Kalcifer@sh.itjust.works 12 points 1 week ago (1 children)

Run the software that scans images for CSAM.

Which software is that?

[–] walden@sub.wetshaving.social 16 points 1 week ago (2 children)

It's called Lemmy-Safety or Fedi-Safety, depending on where you look.

One thing to note, I wasn't able to get it running on a VPS because it requires some sort of GPU.

[–] Kalcifer@sh.itjust.works 10 points 1 week ago (15 children)

One thing to note, I wasn’t able to get it running on a VPS because it requires some sort of GPU.

This is good to know. I know that you can get a VPS with a GPU, but they're usually rather pricey. I wonder if there's one where the GPUs are shared and you only get billed for how much the GPU is used. So if there is an image upload, the GPU would kick on to check it, you get billed for that GPU time, then it turns off and waits for the next image upload.

[–] pe1uca@lemmy.pe1uca.dev 5 points 1 week ago (5 children)

I don't think there are services like that, since usually this means deploying and destroying an instance, which takes a few minutes (if you just turn off the instance, you still get billed).
Probably the best option would be to keep a snapshot, which costs way less than a running instance, and create an instance from it each day or so to run on the images uploaded since the last run.

This is kind of what I do with my media collection, I process it on my main machine with a GPU, and then just serve it from a low-power one with Jellyfin.
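
Sketched out, the ephemeral-instance approach looks something like this. The `cloud` SDK and every method on it are made-up placeholders for whatever your provider's real API offers:

```python
# Hypothetical sketch of the ephemeral-GPU-instance approach. The `cloud`
# module and all of its methods are made up for illustration; substitute
# your provider's real SDK, and check how it bills running vs. stopped
# instances.
import cloud  # hypothetical provider SDK

def scan_new_images(snapshot_id: str, image_batch: list[str]) -> None:
    # Create a short-lived GPU instance from a snapshot that already has
    # the scanning software installed.
    instance = cloud.create_instance(from_snapshot=snapshot_id, gpu=True)
    try:
        instance.wait_until_ready()
        # Scan only the images uploaded since the last run.
        instance.run("scan-images", files=image_batch)
    finally:
        # Destroy (not just stop) the instance; stopped instances
        # usually still accrue charges.
        instance.destroy()
```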

[–] db0@lemmy.dbzer0.com 3 points 1 week ago

https://github.com/db0/fedi-safety and the companion app https://github.com/db0/pictrs-safety, which can be installed as part of your Lemmy deployment via docker-compose (or with a var in your Ansible setup)

[–] Kalcifer@sh.itjust.works 6 points 1 week ago

If your instance freely hosts whatever without any oversight, word will spread and all of a sudden you’re hosting all sorts of bad stuff. It’s not technically illegal if you don’t know about it, but I personally don’t want anything to do with that.

Yeah, this is my primary concern. I'm hoping that there are established best practices for handling the majority of this sort of unwanted content.

[–] Kalcifer@sh.itjust.works 4 points 1 week ago (2 children)

If you have open registrations you’ll be doing a lot of moderation for spam.

Perhaps Captchas are sufficient?

[–] walden@sub.wetshaving.social 4 points 1 week ago

I just checked and we have that turned on, too.

We don't get a lot of applications. A couple per week, maybe.

[–] Dave@lemmy.nz 4 points 1 week ago* (last edited 1 week ago) (1 children)

The spam is not from bots, it's people being paid to spam. Captchas absolutely need to be turned on or else you get bots as well, but they don't stop the spam.

[–] Kalcifer@sh.itjust.works 4 points 1 week ago (1 children)

The spam is not from bots, it's people being paid to spam.

Do you know any specific/official organizations that do this, and/or examples where it's occurred on Lemmy?

[–] Dave@lemmy.nz 3 points 1 week ago* (last edited 1 week ago) (1 children)

It's pretty random outside the Russian misinformation sites (which I haven't seen in a while, but they probably got better at hiding).

It's hard to give you a link because mods or admins remove the posts or ban the accounts pretty quickly most of the time. But there is a new spam account at least every day (I can think of at least two today. Edit: 4). They come in waves, so sometimes there are a whole bunch.

That's probably another thing you need to know. I'm on Lemmy.nz, you're on sh.itjust.works. If some new spam account signs up on Lemmy.world and posts to lemm.ee, then if it's removed by an admin on your instance it is only removed for people on your instance. Everyone else still sees it, as your instance is not hosting either the community or the user, so it can't federate out anything to deal with it. The lemm.ee instance could remove the post or comment with the spam in a way that federates out to other instances, but can't ban the user except on their own instance. Only the Lemmy.world instance can ban the user in a way that federates out to other instances. This is something you'll get a better understanding of over time.

Lemmy.world has a lot of help so they don't have issues, but often the spam will come from obscure instances while the admin is asleep and there is no backup, so every other instance has to remove the spam for their own instance. Then you have to work out how to mitigate that for your own instance while you are asleep. Most admins are pretty understanding that this is a hobby and don't expect everyone to be immediately available, but if you have open registrations then you are likely to be targeted more and will need a better plan.
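
The same rules in code form, as a toy model of what I just described (not Lemmy's actual implementation):

```python
# Toy model of the moderation-federation rules above; not Lemmy's actual
# code, just the same logic written out.

def removal_federates(admin_instance: str, community_instance: str) -> bool:
    """A post/comment removal federates out only when it's done on the
    instance hosting the community."""
    return admin_instance == community_instance

def ban_federates(admin_instance: str, user_home_instance: str) -> bool:
    """A user ban federates out only when it's issued by the user's
    home instance."""
    return admin_instance == user_home_instance

# The example from above: a lemmy.world spammer posting to lemm.ee.
assert removal_federates("lemm.ee", "lemm.ee")        # lemm.ee's removal federates out
assert not removal_federates("lemmy.nz", "lemm.ee")   # my removal is local-only
assert ban_federates("lemmy.world", "lemmy.world")    # only lemmy.world's ban federates
```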

[–] Kalcifer@sh.itjust.works 2 points 1 week ago* (last edited 1 week ago) (1 children)

If some new spam account signs up on Lemmy.world and posts to lemm.ee, then if it's removed by an admin on your instance it is only removed for people on your instance. Everyone else still sees it, as your instance is not hosting either the community or the user, so it can't federate out anything to deal with it. The lemm.ee instance could remove the post or comment with the spam in a way that federates out to other instances, but can't ban the user except on their own instance. Only the Lemmy.world instance can ban the user in a way that federates out to other instances.

This makes me think that we should maintain a community-curated blocklist in, for example, a Git repository. It could be a list of usernames and/or a list of instances that are known to be spam, updated as new accounts and instances are discovered. Then any instance owner can simply pull the most current version of the blocklist (this could even be done automatically; see the sketch below). Once the originating instance blocks the malicious account, it can be removed from the list. This also gives those who have been blocked a centralized method to appeal the block (e.g., open an issue to create an appeal).
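
A rough sketch of what the automated pull could look like. The repo URL, the one-domain-per-line format, and the apply_instance_ban() helper are all hypothetical stand-ins for whatever the instance's admin API actually provides:

```python
# Rough sketch of the blocklist-sync idea. The repo URL, file format, and
# apply_instance_ban() are hypothetical placeholders.
import urllib.request

BLOCKLIST_URL = "https://example.com/community-blocklist/raw/main/instances.txt"

def fetch_blocklist(url: str = BLOCKLIST_URL) -> set[str]:
    """Pull the current list: one domain per line, '#' starts a comment."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return {ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")}

def apply_instance_ban(domain: str) -> None:
    """Hypothetical stub; in practice this would call the instance's admin API."""
    print(f"blocking {domain}")

def sync_blocklist(currently_blocked: set[str]) -> set[str]:
    """Ban anything newly added upstream. Entries removed upstream
    (e.g. after a successful appeal) could be unbanned the same way."""
    wanted = fetch_blocklist()
    for domain in sorted(wanted - currently_blocked):
        apply_instance_ban(domain)
    return wanted
```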

I would honestly have expected something like this to already exist. I think it's partly the purpose of Fediseer, but I'm not completely sure.

[–] Dave@lemmy.nz 2 points 1 week ago (9 children)

This makes me think that we should maintain a community-curated blocklist in, for example, a Git repository.

There would be a few problems I can think of with this approach. The first one is: who controls it? Whoever that is, you haven't solved the issue, because instead of only the instance hosting the user being able to federate the ban, now only the maintainer of the git repo can update the ban list.

If you have many people able to update the repo, then the issue becomes: how do you trust all these people to never, ever, ever get it wrong? If you ban a user and opt to remove all their content (which you should, with spam), and you're automating this, then if anyone screws up, how do you get that account unbanned on all those instances? How do you get all their content restored, which is a separate thing that Lemmy currently provides no good way to do? And how do you ensure there are no malicious people with control of the repo, while still having enough instances involved to make it worthwhile?

There is a chat room where instance admins share details of spam accounts, and it's about the best we have for Lemmy at the moment (it works quite well, really, because everyone can be instantly notified but also make their own decisions about who to ban or if something is spam or allowed on their instance - because it's pretty common that things are not black and white).

I would honestly have expected something like this to already exist. I think it’s partly the purpose of Fediseer, but I’m not completely sure.

Fediseer has a similar purpose but it's a little different. So far we have been talking about spam accounts set up on various instances, and the time it takes for those mods and admins to remove the spam. But what happens if instead of someone setting up a spam account on an existing instance, they instead create their own instance purely for spamming other instances?

Fediseer provides a web of trust. An instance receives a guarantee from another instance. That instance then guarantees another instance. It creates a web of trust starting from some known good instances. Then if you wish you can choose to have your lemmy instance only federate with instances that have been guaranteed by another instance. Spam instances can't guarantee each other, because they need an instance that is already part of the web to guarantee them, and instances won't do that because they risk their own place in the web if they falsely guarantee another instance (say, if one instance keeps guaranteeing new instances that turn out to be spam, they will quickly lose their own guarantee).
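
As a toy illustration of the idea (not Fediseer's actual algorithm): trust spreads outward from known-good roots along guarantee edges, so spam instances vouching for each other never become reachable:

```python
# Toy illustration of a web of trust: an instance is trusted only if it's
# reachable from the trusted roots by following guarantee edges
# (guarantor -> guaranteed). Instance names here are just example data.

def guaranteed_instances(roots: set[str], guarantees: dict[str, set[str]]) -> set[str]:
    """Return every instance reachable from the trusted roots."""
    trusted, frontier = set(roots), list(roots)
    while frontier:
        guarantor = frontier.pop()
        for inst in guarantees.get(guarantor, set()):
            if inst not in trusted:
                trusted.add(inst)
                frontier.append(inst)
    return trusted

guarantees = {
    "lemmy.world": {"lemmy.nz"},
    "lemmy.nz": {"sub.wetshaving.social"},
    "spam-a.example": {"spam-b.example"},  # spam instances guaranteeing
    "spam-b.example": {"spam-a.example"},  # each other stay disconnected
}
print(guaranteed_instances({"lemmy.world"}, guarantees))
# {'lemmy.world', 'lemmy.nz', 'sub.wetshaving.social'}
```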

Fediseer actually goes further than this, allowing instances to endorse or censure other instances, and you can set up your instance to only federate with instances that haven't been censured, or to defederate from instances that others have censured for specific reasons (e.g. "hate speech", "racism", etc.).

It's quite a cool tool but doesn't help the original discussion issue of spam accounts being set up on legitimate instances.

[–] Kalcifer@sh.itjust.works 2 points 1 week ago (1 children)

Fediseer provides a web of trust. An instance receives a guarantee from another instance. That instance then guarantees another instance. It creates a web of trust starting from some known good instances. Then if you wish you can choose to have your lemmy instance only federate with instances that have been guaranteed by another instance. Spam instances can't guarantee each other, because they need an instance that is already part of the web to guarantee them, and instances won't do that because they risk their own place in the web if they falsely guarantee another instance (say, if one instance keeps guaranteeing new instances that turn out to be spam, they will quickly lose their own guarantee).

How would one get a new instance approved by Fediseer?

[–] Dave@lemmy.nz 2 points 1 week ago* (last edited 1 week ago)

First, don't stress over it. Most instances are not strict on only federating with guaranteed instances. Most do not auto-sync with Fediseer at all, and the ones that do are more likely to only be syncing censures (when other instances are reporting the instance as problematic).

To get guaranteed on Fediseer, you need another instance to guarantee you. If you start your instance, hang out in the spam defense chat, and are generally sensible with your instance, then you'll find someone willing to do it no problem. Guarantees are not a huge risk to an instance since they can also be revoked at any time. If someone guarantees you and then you start being a dick, they can just remove your guarantee. So it's not a big decision; people will be happy to guarantee someone who seems reasonable.

[–] Kalcifer@sh.itjust.works 2 points 1 week ago

There is a chat room where instance admins share details of spam accounts, and it’s about the best we have for Lemmy at the moment (it works quite well, really, because everyone can be instantly notified but also make their own decisions about who to ban or if something is spam or allowed on their instance - because it’s pretty common that things are not black and white).

Yeah, I think I'm more on the side of this now. The chat is a decent and workable solution. It's definitely a lot more hands-on/manual, but I think it's a solid middle ground for the time being.

[–] Kalcifer@sh.itjust.works 2 points 1 week ago

how do you trust all these people to never, ever, ever get it wrong?

The naively simple idea was that the banned user could open an appeal to get their name removed from the blocklist. Also, keep in mind that the community's trust in the blocklist is predicated on the blocklist being accurate.

[–] Kalcifer@sh.itjust.works 2 points 1 week ago* (last edited 1 week ago) (2 children)

We require applications

Is this functionality built into the Lemmy software?


Addendum (2024-11-11T00:32Z):

Ah, yeah, it looks like it is configurable in the admin panel [1].

References

  1. Lemmy Documentation. join-lemmy.org. Accessed: 2024-11-11T00:35Z. https://join-lemmy.org/docs/users/01-getting-started.html#registration.
    • "2. Getting Started". §"Registration".

      Question/Answer: Instance admins can set an arbitrary question which needs to be answered in order to create an account. This is often used to prevent spam bots from signing up. After submitting the form, you will need to wait for some time until the answer is approved manually before you can login.

[–] walden@sub.wetshaving.social 4 points 1 week ago

Yeah, it's just something like "Tell us why you want to join this instance". If the answer is "to promote my content" or "qq", for example, they don't get approved.

It's done by the Lemmy software.

[–] finitebanjo@lemmy.world 6 points 1 week ago (3 children)

How much server hosting experience do you have? I asked about database preferences over in Self-Hosting once and they basically all said "don't choose a database ever. Run. Save yourself while there is still time!"

So maybe use a hosting service I guess. Makes you a more difficult target for attacks but also involves your information getting out into the world in direct connection to your instance.

[–] Kalcifer@sh.itjust.works 3 points 1 week ago

I asked about database preferences over in Self-Hosting once and they basically all said "don't choose a database ever.

I'm not sure I follow what you mean; Lemmy uses PostgreSQL.

[–] Kalcifer@sh.itjust.works 2 points 1 week ago (14 children)

[Using a hosting service] makes you a more difficult target for attacks but also involves your information getting out into the world in direct connection to your instance.

I'm not sure I understand how one's data would be leaked by the hosting provider.

[–] Kalcifer@sh.itjust.works 2 points 1 week ago (1 children)

How much server hosting experience do you have?

I've never hosted a public facing social media service. I have a few years experience hosting a number of my own personal services, but they aren't at the scale of a public facing Lemmy instance.

[–] finitebanjo@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (1 children)

You should be good as long as you know PostgreSQL.

[–] Kalcifer@sh.itjust.works 3 points 1 week ago

Aha, well, it depends on what you mean by "know".

[–] just_another_person@lemmy.world 2 points 1 week ago (5 children)

It could be costly in a few places, so choose your host wisely:

  • data ingress/egress
  • storage (block and DB)
  • load balancing (if you choose to go that route)

I know that R2 has no charge for ingress/egress.

The block storage and DB costs are technically unbounded, and will never decrease by default.
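
Back-of-the-envelope, with made-up numbers, of why that matters:

```python
# Toy illustration of unbounded storage growth: if federated media and DB
# data add a few GB per month and nothing is pruned, the bill only ever
# grows. Both numbers below are hypothetical placeholders.
PRICE_PER_GB_MONTH = 0.10   # hypothetical block-storage price, USD
GROWTH_GB_PER_MONTH = 5.0   # hypothetical cache/DB growth

for month in range(1, 13):
    stored = GROWTH_GB_PER_MONTH * month
    print(f"month {month:2d}: {stored:5.1f} GB -> ${stored * PRICE_PER_GB_MONTH:.2f}/mo")
```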
