sj_zero

joined 2 years ago
[–] sj_zero@lotide.fbxl.net 1 points 3 weeks ago

Well, let me tell you a story.

Recently I needed to use BitTorrent to download a very large file from an independent project. Usually I can just use my web browser, but this one was in the hundreds of gigabytes; there was just no way.

So I installed the original official BitTorrent client, because I'm really out of the game; I haven't torrented anything outside of my browser in years now.

I had to pay close attention to avoid installing multiple pieces of unwanted software. I had to uncheck a bunch of stuff and carefully navigate the installer. Even after that, the client was junk: it constantly showed multiple video ads, and besides that it just didn't have the horsepower to download my torrent for me.

I remembered using Transmission on Linux, so I decided to try that instead; it turns out it has a Windows version.

Downloaded it, ran the executable, pressed Next three times, opened the torrent file, and pointed it at my existing partial download, hoping it would figure out which parts of the file it still needed. In fact it did, and the download finished quickly.

If I had failed to uncheck any of the boxes, I guess you could call me stupid for not unchecking them, but to me it seems a lot simpler to use the FOSS product that never had any checkboxes to uncheck in the first place.

Meanwhile (and honestly I didn't use Plex very much, because it just didn't seem like a very good product), I also seem to remember I kept ending up on the plex.net website instead of my own server. I think it was something along the lines of: if you go in to change certain settings, it changes domains on you? Either way, it was just not very well set up compared to Jellyfin, which had everything I was using right there and never even remotely tried to send me somewhere else.

[–] sj_zero@lotide.fbxl.net 1 points 3 weeks ago

Zero with jellyfin.

[–] sj_zero@lotide.fbxl.net 2 points 3 weeks ago (5 children)

By default for me it seems to really want me to get off of my server altogether and get onto their servers, and it seems to really want to get me off of my media and onto their half-baked streaming service.

Really complex compared to just having my media show up.

[–] sj_zero@lotide.fbxl.net 5 points 3 weeks ago (1 children)

I'm using proxmox now with lots of lxc containers. Prior to that, I used bare metal.

VMs were never really an option for me because the overhead is too high for the low power machines I use -- my entire empire of dirt doesn't have any fans, it's all fanless PCs. More reliable, less noise, less energy, but less power to throw at things.

Stuff like docker I didn't like because it never really felt like I was in control of my own system. I was downloading a thing someone else made and it really wasn't intended for tinkering or anything. You aren't supposed to build from source in docker as far as I can tell.

The nice thing about proxmox's lxc implementation is I can hop in and change things or fix things as I desire. It's all very intuitive, and I can still separate things out and run them where I want to, and not have to worry about keeping 15 different services running on the same version of whatever common services are required.

[–] sj_zero@lotide.fbxl.net 5 points 3 weeks ago (9 children)

Honestly, I lowkey hated Plex when I was using it. We never really used it because it wasn't very good at the one thing it was supposed to be for.

It was trying so hard to get me to use their media, when what I wanted was to watch my media. By contrast, jellyfin just shows me my media.

If you have a few bucks, the chromecast with android TV is what I'd recommend. The jellyfin app for android TV looks and works great -- as good as any paid streaming service imo. I got my wife using it daily, and she's not a tech person at all.

[–] sj_zero@lotide.fbxl.net 6 points 1 month ago

Since Calendar is an app but fundamental email service isn't, one thing I found is that apps can interact in ways that are completely unintuitive.

For example, I activated the ncdownloader app and it caused Mail to stop showing emails, or I activated Nextcloud Music and it stopped Nextcloud News from updating.

You should check your logs, because when there's a problem it will usually show up in there. The logs I'm referring to are in your administrator panel, though what they say can be completely unintuitive as to what exactly is going on. The other thing you can do is pay attention to which apps you've installed, and if any are a little bit unusual, try disabling them and seeing if Calendar and Mail work after that.

[–] sj_zero@lotide.fbxl.net 13 points 1 month ago (1 children)

I personally used 7digital to rebuild my music collection. They sell good licensed mp3s.

I have absolutely nothing negative to say about them. The prices were decent, the files are boring DRM-free MP3s, and they had a really good selection of music.

Honestly it looks almost exactly the same as when I used it for the first time like 15 years ago.

[–] sj_zero@lotide.fbxl.net 99 points 1 month ago (8 children)

IMO, AI is a really good demo for a lot of people, but once you start using it, the gains you can get from it end up being somewhat minimal without doing some serious work.

Reminds me of 10 other technologies where, if you didn't get in, the world was supposedly going to end, but they ended up more niche than you'd expect.

[–] sj_zero@lotide.fbxl.net 10 points 1 month ago (2 children)

I didn't like synapse or dendrite at all, but conduit has been great.

[–] sj_zero@lotide.fbxl.net 4 points 1 month ago

I moved to Proxmox a while back and it was a big upgrade for my setup.

I do not use VMs for most of my services. Instead, I run LXC containers. They are lighter and perfect for individual services. To set one up, you need to download a template for an operating system. You can do this right from the Proxmox web interface. Go to the storage that supports LXC templates and click the Download Templates button in the top right corner. Pick something like Debian or Ubuntu. Once the template is downloaded, you can create a new container using it.

The difference between VMs and LXC containers is important. A VM emulates an entire computer, including its own virtual hardware and kernel. This gives you full isolation and lets you run completely different operating systems such as Windows or BSD, but it comes with a heavier resource load. An LXC container just isolates a Linux environment while running on the host system’s kernel. This makes containers much faster and more efficient, but they can only run Linux. Each container can also have its own IP address and act like a separate machine on your network.
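For anyone who prefers the command line, the same template-download and container-creation steps can be done on the Proxmox host with `pveam` and `pct`. This is just a sketch: the VMID, hostname, storage names, and the exact template filename are examples, so check `pveam available` for the current template names on your system.

```shell
# Refresh the template catalog and see what OS templates are available
pveam update
pveam available --section system

# Download a template to the "local" storage (filename is an example)
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# Create an unprivileged container from that template
# (VMID 101 and all resource sizes are just examples)
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname myservice --memory 512 --cores 1 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --rootfs local-lvm:8 --unprivileged 1

# Start it now, and have it start automatically at boot
pct set 101 --onboot 1
pct start 101
```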

I tend to keep all my services in LXC containers, and I run one VM which I use as a jump box I can hop into if need be. It's a pain getting X11 working in a container, so the VM makes more sense there.

Before you start creating containers, you will probably need to create a storage pool. I named mine AIDS because I am an edgelord, but you can use a sensible name like pool0 or data.

Make sure you check the Start at boot option for any container or VM you want to come online automatically after a reboot or power outage. If you forget this step, your services will stay offline until you manually start them.

Expanding your storage with an external SSD works well for smaller setups. Longer term, you may want to use a NAS with fast network access. That lets you store your drive images centrally and, if you ever run multiple Proxmox servers, configure hot standby so one server can take over if another fails.

I do not use hot standby myself. My approach is to keep files stored locally, then back them up to my NAS. The NAS in turn performs routine backups to an external drive. This gives me three copies of all my important files, which is a solid backup strategy.

[–] sj_zero@lotide.fbxl.net 4 points 1 month ago

I set up everything I use "bare metal", or at least in an LXC container I directly build and maintain, but most people don't. That makes a lot of sense, to be honest. A lot of prepackaged software uses databases, and nobody should have to care exactly what they're up to.

[–] sj_zero@lotide.fbxl.net 1 points 1 month ago

I was about to do the same, had the USB stick prepared and everything, then I tried proxmox. Just lucked out.

 

Link aggregators have a problem on the fediverse. The approach is server-centric, which has positives, but it also has major negatives.

The server-centric approach is where a community belongs to a certain server and everything in the world revolves around that server.

The problem is that it's a centralized formula that concentrates power in the hands of whichever servers attract the most users, potentially breaks up what might be a broader community, and makes for a central point of failure.

Right now, if user1@a.com and user2@b.com talk on community1@c.com, then a lot of things can happen to break that communication. If c.com defederates from b.com, the communication will not happen. If c.com breaks, the communication will not happen. If c.com shuts down, the communication will not happen. If c.com's instance gets taken over by management that doesn't want person1 and person2 to talk, the communication will not happen.

Another problem is that user1@a.com and user2@b.com might never meet, because they might be on community1@a.com and community1@c.com. This means that a community that could reach critical mass to be a common meeting place would not because it's split into a bunch of smaller communities.

Mastodon has servers going up and down all the time, and part of the reason it's able to continue functioning as a decentralized network is that, as long as you're following people on a wide variety of servers, one server going down will stop some users from talking, but not all of them, so the system can continue to operate as a whole. By contrast, I'm posting this to one server, and it may be seen by people on a wide variety of servers, but if the one server I'm posting this to goes down, the community is destroyed.

There are a few ways to solve the problem...

One method could work as something like a specific "federated network community". There would be a local community, and the local community would federate (via local mods, I presume) with communities on other instances, creating a specific metacommunity of communities on many instances that could federate with other ActivityPub-enabled communities. If any of the federated communities go down, the local community remains. If any servers posed problems, they could cease being followed, and in the worst case a community could defederate totally from a server (at a community level rather than a server level). In that case, community1@a.com and community1@b.com could be automatically linked up once both connect to community1@c.com (I'm thinking automatic linking could be a feature mods could turn on and off for highly curated communities), and if c.com shuts down or defederates from one of the two, user1@a.com and user2@b.com would continue to be able to talk through their federated network.

Another method would be something more like hashtags for root stories, but I don't know how server-to-server links would be accomplished under a platform like lemmy, kbin, or lotide, or how hashtags migrate on Mastodon-type software. In that case, it might be something like PeerTube, where a network is established by admins (or users, I don't know) connecting to other servers manually.

Finally, I think you could implement the metacommunity without changing the entire fediverse by having the software auto-aggregate metacommunities. You could create a metacommunity community1 on a.com that would then automatically aggregate all posts from communities called community1 on all known servers. The potential downside is noise: you could end up with 100 posts of the same story, and I haven't thought much about how to handle duplicates so you could participate without seeing 100 similar posts. As for new posts, each metacommunity would be a local community; new individual posts would be posted locally and federated to users on other metacommunities. If metacommunities of this sort became the norm, the duplicates problem might be solved organically, because individuals using metacommunities would see the posts on other metacommunities and wouldn't bother reposting the same story, much like how people see a story in a community today and don't repost it.
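The auto-aggregation idea could be sketched out like this. This is purely illustrative: the server list, community layout, and the choice of deduplicating stories by URL are all my own assumptions, not how any existing platform works.

```python
# Hypothetical sketch: building a "metacommunity" by aggregating
# same-named communities across known servers, folding duplicate
# reposts of the same story (keyed by URL) into one entry.

def aggregate_metacommunity(name, servers):
    """Collect posts from every server's community called `name`,
    keeping only the first post seen for each story URL and
    recording which servers carried it."""
    seen = {}
    for server in servers:
        for post in server.get("communities", {}).get(name, []):
            url = post["url"]
            if url not in seen:  # first copy wins; later reposts are folded in
                seen[url] = {**post, "sources": []}
            seen[url]["sources"].append(server["host"])
    return list(seen.values())

# Made-up example data: two servers both carry "community1",
# and both have a copy of the same story.
servers = [
    {"host": "a.com", "communities": {"community1": [
        {"url": "https://example.org/story1", "title": "Story 1"}]}},
    {"host": "b.com", "communities": {"community1": [
        {"url": "https://example.org/story1", "title": "Story 1"},
        {"url": "https://example.org/story2", "title": "Story 2"}]}},
]
posts = aggregate_metacommunity("community1", servers)
# Two distinct stories remain; story1 lists both a.com and b.com as sources.
```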

One big problem is scaling; doing something like this would definitely be non-trivial in terms of load per community. Right now, if one person signs up to one community, they get a lot of posts from one server. Under a metacommunity idea like this, if one person signs up to one community, they get a lot of posts from many, many servers. lemmy.world has 5967 total instances connected to it, and 2155 instances running lemmy, lotide, kbin, mbin, or friendica that could contain similar types of community. That's a lot of communities to follow for the equivalent of one single community, especially if some of the communities in the metacommunity have a lot of traffic. You'd have to look at every known server to see, first, whether it exists and, second, whether it has a community appropriate for the metacommunity, and the metacommunity would have to routinely scan for dead hosts to remove and for live hosts that may have since created an appropriate community.

I'm sure there are other solutions, but I'm just thinking of how things work within my current understanding.

Of course, for some people, the problem is one they don't want solved, because it isn't a problem in their view (and that's a legit view, even if it's not one I'm really amenable to). Some people prefer smaller communities, or want tighter control over their communities. For servers or communities that don't want to be brought into a metacommunity, it seems like some sort of flag to opt out (or opt in, as the case may be) should be designed in -- I'm thinking something in the community description like a text flag NOMC or YESMC that server software would be designed to respect.

With respect to moderation, it seems to me that you could have a variety of strategies. You could have a sort of default accept-all moderation, where if one instance moderates a post, other instances take the same action; whitelist moderation, where other instances follow only if an instance or set of moderators on a whitelist takes an action; a sort of republican moderation, where other instances follow only once a certain number of instances have taken an action; and probably an option for individual metacommunities to only accept moderation from the local community the original post came from. I suspect you'd want a choice in the matter per metacommunity instance on a server.
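Those moderation strategies could be sketched as a single policy check. Again, this is a made-up illustration of the idea, not any platform's real API; the policy names and parameters are my own.

```python
# Illustrative sketch of the moderation strategies described above:
# given a removal already applied by some remote instances, should the
# local metacommunity mirror it?

def should_mirror_removal(policy, removing_instances, whitelist=(), quorum=2):
    """Return True if the local instance should apply a removal that
    the instances in `removing_instances` have already applied."""
    if policy == "accept_all":    # any single remote removal is mirrored
        return len(removing_instances) >= 1
    if policy == "whitelist":     # only removals by trusted instances count
        return any(i in whitelist for i in removing_instances)
    if policy == "quorum":        # "republican": require several instances to agree
        return len(removing_instances) >= quorum
    if policy == "origin_only":   # defer entirely to the post's home community
        return False
    raise ValueError(f"unknown policy: {policy}")
```

The point of keeping it a per-metacommunity setting is that a highly curated community could run `whitelist` while a free-for-all could run `quorum`, on the same server.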

 

Anyone who knows me knows that I've been using Nextcloud forever, and I fully endorse it: anyone doing any level of self-hosting should have their own. It's just a self-hosted Swiss army knife, and I personally find it even easier to use than something like SharePoint.

I had a recurring issue where my logs would show "MySQL server has gone away". It generally wasn't doing anything, but occasionally it would cause large file uploads to fail, or other random failures that would stop quickly after.

The only thing I did was go in and double wait_timeout in my /etc/mysql/mariadb.conf.d/50-server.cnf.
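For reference, the relevant setting lives in the `[mysqld]` section and looks something like this. The value shown is just an example, not necessarily what your setup needs:

```ini
[mysqld]
# Seconds MariaDB waits on an idle connection before closing it.
# Raising this can help with "MySQL server has gone away" errors
# during long-running operations like large file uploads.
wait_timeout = 600
```

Restart the MariaDB service after changing it so the new value takes effect.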

After that, my larger file uploads went through properly.

It might not be the best solution but it did work so I figured I'd share.

 

So both lemmy and lotide were having big problems where they'd get totally overwhelmed, especially once I started federating with huge instances. At first I thought it was because my servers aren't very powerful, but eventually I got the idea that maybe it's because it can't keep up with federation data from the big instances.

So I decided to limit the connections per IP address. Long-term testing isn't done yet, but so far both my lemmy and lotide instances aren't getting crushed when they're exposed to the outside world, so I think it's helping.

In /etc/nginx/nginx.conf, under the http section, I added the line "limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;"

Then, in my sites-available folder for the services, I added "limit_conn conn_limit_per_ip 4;" or something similar. Both lemmy and lotide have different sections for ActivityPub and API, so it appears I can limit the connections just to those parts of the site.
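Put together, the two pieces look roughly like this. The location path and backend port are examples only; your actual proxy blocks for lemmy or lotide will differ:

```nginx
# /etc/nginx/nginx.conf -- inside the http {} block:
# one 10 MB shared-memory zone keyed on client IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# sites-available/<your-site> -- inside the ActivityPub/API location:
location /inbox {
    limit_conn conn_limit_per_ip 4;    # max 4 concurrent connections per IP
    proxy_pass http://127.0.0.1:8536;  # example backend port
}
```

Run `nginx -t` to check the config and then reload nginx to apply it.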

It's only been a few days, but whereas before both instances would die randomly pretty quickly once exposed to the outside world, now it appears that they're both stable. Meanwhile, I'm still getting federated posts and comments.
