We have data on what it costs to run a sizeable Lemmy instance, and it's not a lot. How does PieFed compare? Anyone starting an instance who envisions it growing large has to contend with this question. Currently PieFed seems to have a bit under 1,000 users across fewer than 10 servers.
There are now sizeable communities on Lemmy instances that are reinforced by network effects, so there needs to be a significant reason for them to migrate. To that point, the collective project is building communities away from corporate power, not software; the software is just a tool to facilitate that, and Lemmy has worked well so far in this regard. If someone can show that PieFed works better without costing significantly more, it'll probably get adopted for new communities. If the difference is drastic, we may even see migrations from Lemmy.
I second this. Lemmy is written in Rust, whereas PieFed is written in Python. When it comes to running a high-performance web server, Lemmy has the advantage.
While that's theoretically true, the main bottleneck with Lemmy seems to be database performance, and since both projects depend on PostgreSQL for that, I somewhat doubt that PieFed being written in Python will have a noticeable effect in practice.
Postgres is so quick if you know how to use it...
You don't even need to know how to use it very well, in my experience.
Really depends on many factors. If you have everything in RAM, almost nothing matters.
If your dataset outgrows RAM, various things start to matter depending on your workload: random reads need good indices (as do writes to columns with unique constraints), OLAP queries benefit from a higher work_mem, tables beyond ~100M rows need good partitioning, and OLTP may even need custom solutions if you have to keep a long history but don't touch it in every transaction.
But even with billions of rows, Postgres can handle it with relative ease if you know what you're doing, usually even on hardware you would consider absolutely inadequate. (Last year I migrated our company DB from MySQL to Postgres, and even with more data and more complex workflows we downsized our RAM by more than half.)
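To make those knobs concrete, here's a minimal sketch of an index build, a session-level work_mem bump, and declarative range partitioning, in Python with psycopg2. The `comment` / `comment_archive` tables, their columns, and the connection string are made up for illustration; they're not taken from Lemmy's or PieFed's actual schema.

```python
# Sketch of the Postgres tuning knobs mentioned above, via psycopg2.
# Table names, columns, and the DSN are hypothetical examples.
import psycopg2

conn = psycopg2.connect("dbname=example user=example")  # assumed DSN
conn.autocommit = True  # CREATE INDEX CONCURRENTLY can't run in a transaction
cur = conn.cursor()

# Hypothetical table standing in for a busy OLTP table.
cur.execute("""
    CREATE TABLE IF NOT EXISTS comment (
        id        bigserial   PRIMARY KEY,
        post_id   bigint      NOT NULL,
        published timestamptz NOT NULL DEFAULT now(),
        body      text
    )
""")

# Random reads: a B-tree index on the lookup column turns sequential scans
# into index scans; CONCURRENTLY avoids blocking writes during the build.
cur.execute("""
    CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_comment_post_id
    ON comment (post_id)
""")

# OLAP-style queries: raise work_mem for this session only, so large sorts
# and hash joins stay in memory instead of spilling to temp files on disk.
cur.execute("SET work_mem = '256MB'")

# >100M rows: declarative range partitioning keeps each index small and lets
# old partitions be detached or dropped cheaply instead of DELETEd row by row.
cur.execute("""
    CREATE TABLE IF NOT EXISTS comment_archive (
        LIKE comment INCLUDING DEFAULTS
    ) PARTITION BY RANGE (published)
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS comment_archive_2025
    PARTITION OF comment_archive
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01')
""")

cur.close()
conn.close()
```

All of these are plain Postgres features, so they're available to Lemmy and PieFed alike regardless of the application language.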