sj_zero

joined 2 years ago
[–] sj_zero@lotide.fbxl.net 1 points 2 days ago

That's why my firewall is a fanless signage PC. It never really heats up, and I don't need to worry about the unreliability added by fans.

[–] sj_zero@lotide.fbxl.net 9 points 3 days ago (1 children)

To be fair, and this is coming from someone who is fully sold on LibreOffice and hosts Collabora, the two word processors can open each other's documents, but they cannot produce identical output for the same files.

For 99.99% of things, switching between the two is going to be just fine, but every once in a while that 0.01% will really bite you, especially if it is something important such as equations, which I have seen first-hand don't migrate properly to LibreOffice.

[–] sj_zero@lotide.fbxl.net 1 points 4 days ago

Depends exactly what you're doing on that old PC.

If you just need to connect for administration and the like, VNC is decent. It's my default.

If you want to watch videos or the like, I'd definitely suggest Sunshine and Moonlight. It's a remote desktop stack that's meant for game streaming, so it's really good at video and audio.

[–] sj_zero@lotide.fbxl.net 4 points 5 days ago

Oh, and for anyone who has never used it, Apache Guacamole is a really neat tool for centralizing remote access. Effectively, you set it up as a website with a username and password, and it will connect through to SSH, Telnet, VNC, and RDP sessions from the browser, so if you need to hop into something while you're outside the home, it's going to be effective. That's something I wish I had known about earlier; it would have made a lot of rough days a lot easier.

[–] sj_zero@lotide.fbxl.net 2 points 5 days ago (2 children)

On the topic of DNS, I still use GoDaddy. People ask why; it's because GoDaddy seemed like a good idea in 2003 when I got my first domain, and in 2006 when I got my current one. At this point it's just inertia. I tend to buy several years in advance because I don't like annual payments (I know that makes me a weirdo), which means I'm locked in for several years, and it's not enough of a problem to do anything about.

Anyone who uses GoDaddy knows that they turned off their dynamic DNS option quite some time ago. My system is pretty stable so I don't usually need to change it, but if I have a power failure at home or I need to reboot my router, I obviously need to change my DNS at those moments.

When I'm away from home, I end up having to use TeamViewer to hop into a jump box VM I have set up for that purpose. The two obvious problems with that are, first, that TeamViewer is a proprietary product, and second, that they see me hopping into a jump box regularly and assume I'm a commercial customer. There is apparently a way to tell them you're just a hobbyist, but I haven't gotten around to doing that.

What I did do is set up a script that compares my current IP to my DNS IP, and if they are different it sends me an email containing the old IP and the new IP. This way, I don't need to hop into my network to find out what the new IP address is. I also save the last IP address successfully sent by email to /tmp/, so that if my IP changes while I'm somewhere I can't hop onto the GoDaddy website to fix it, I don't get 100,000 emails with my new IP address.
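The decision logic above can be sketched roughly like this. This is not the original script; the function name and the three outcomes are placeholders, and in the real thing the three inputs would come from a "what's my IP" service, a DNS lookup, and a state file in /tmp/:

```python
# Sketch of the IP-change notifier logic described above (illustrative only).
def decide(current_ip, dns_ip, last_sent_ip):
    """Return what a run of the script should do."""
    if current_ip == dns_ip:
        return "in-sync"           # DNS already points at us; nothing to do
    if current_ip == last_sent_ip:
        return "already-notified"  # avoid re-sending the same email forever
    return "notify"                # email old + new IP, then record it in /tmp/
```

Run from cron, only the "notify" branch sends mail, which is what keeps the inbox from flooding when the GoDaddy record can't be updated right away.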

I killed my house power a couple weeks ago, and the whole system worked exactly as intended. I was pretty happy to see that.

[–] sj_zero@lotide.fbxl.net -1 points 1 week ago

Erm... Yeah, that's matrix with encryption enabled on the room.

[–] sj_zero@lotide.fbxl.net 2 points 1 week ago (1 children)

A lot of things people say about matrix don't apply to conduit. I've run it on an Intel Atom D2550 and it ran fine.

[–] sj_zero@lotide.fbxl.net 1 points 2 weeks ago

I'd be worried about having some of the voat stuff on a hard drive I own.

I'm surprised GitHub hasn't automatically nixed the archive.

[–] sj_zero@lotide.fbxl.net 6 points 3 weeks ago

Always has been.

Even if you like who's in charge right now, they could change how they act or they could be replaced.

They could shut us down or do a lot of things, but it's harder to break 10,000 servers than one.

[–] sj_zero@lotide.fbxl.net 4 points 1 month ago

I've been running my own; it's mostly automated now. I started a yacy instance as well, so not only am I aggregating bigger websites, I'm including the sites I crawled myself and the other sites available through yacy's huge p2p search functionality. In this little way, I'm trying to make sure my search isn't totally dominated by corporate search.

Tbh, yacy is 1000x harder to keep running than searxng.

[–] sj_zero@lotide.fbxl.net 10 points 1 month ago (1 children)

I'm amenable to luanti with voxelibre.

Though I don't like the rebranding. Luanti sounds like a product you'd hear about in a drug commercial: "Side effects of Luanti may include chronic flatulence, erectile dysfunction, and death."

[–] sj_zero@lotide.fbxl.net 0 points 1 month ago (1 children)

That'll be one hundred and fifty dollars, please.

 

I always love it when some massive piece of web infrastructure goes down, but most websites I use are self-hosted, so the only real effect is that I see lots of news stories that Cloudflare is down.

 

Link aggregators have a problem on the fediverse. The approach is server-centric, which has positives, but it also has major negatives.

The server-centric approach is where a community belongs to a certain server and everything in the world revolves around that server.

The problem is that it's a centralized formula: it concentrates power in the hands of whichever servers attract the most users, potentially breaks up what might be a broader community, and creates a central point of failure.

Right now, if user1@a.com and user2@b.com talk on community1@c.com, then a lot of things can happen to break that communication. If c.com defederates from b.com, the communication will not happen. If c.com breaks, the communication will not happen. If c.com shuts down, the communication will not happen. If c.com's instance gets taken over by management that doesn't want user1 and user2 to talk, the communication will not happen.

Another problem is that user1@a.com and user2@b.com might never meet, because they might be on community1@a.com and community1@c.com respectively. This means a community that could reach critical mass as a common meeting place never does, because it's split into a bunch of smaller communities.

Mastodon has servers going up and down all the time, and part of the reason it's able to continue functioning as a decentralized network is that, as long as you're following people on a wide variety of servers, one server going down will stop some users from talking but not all of them, so the system can continue to operate as a whole. By contrast, I'm posting this to one server, and it may be seen by people on a wide variety of servers, but if the one server I'm posting to goes down, the community is destroyed.

There are a few ways to solve the problem...

One method could work as something like a specific "federated network community". There would be a local community, and the local community would federate (via local mods, I presume) with communities on other instances, creating a specific metacommunity of communities on many instances that could federate with other ActivityPub-enabled communities. If any of the federated communities go down, the local community remains. If any servers posed problems, they could cease being followed, and in the worst case a community could defederate totally from a server (at a community level rather than a server level). In that case, community1@a.com and community1@b.com could be automatically linked up once both connect to community1@c.com (automatic linking could be a feature mods turn on and off for highly curated communities), and if c.com shuts down or defederates from one of the two, user1@a.com and user2@b.com would continue to be able to talk through their federated network.

Another method would be something more like hashtags for root stories, but I don't know how server-to-server links would be accomplished under a platform like lemmy, kbin, or lotide, or how hashtags propagate between mastodon-type servers. It might end up something like peertube, where a network is established by admins (or users, I don't know) connecting to other servers manually.

Finally, I think you could implement the metacommunity without changing the entire fediverse by having the software auto-aggregate metacommunities. You could create a metacommunity community1 on a.com that would then automatically aggregate all posts from communities called community1 on all known servers. The potential downside is noise: you could end up with 100 posts of the same story, and I haven't thought much about how you could handle duplicates so you could participate without 100 similar posts. As for new posts, each metacommunity would be a local community; new individual posts would be posted locally and federated out to users on other metacommunities. If metacommunities of this sort became the norm, the duplicates problem might be solved organically, because individuals using metacommunities would see the posts on other metacommunities and wouldn't bother reposting the same story, much like how people see a story and don't repost it in individual communities.
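The aggregation-plus-deduplication idea could look something like this. Everything here is hypothetical — none of these names come from lemmy, lotide, or any real API — it just shows merging same-named communities across servers while keeping the first post seen for each submitted URL:

```python
# Hypothetical sketch of an auto-aggregated "metacommunity" (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    server: str      # instance the post came from, e.g. "a.com"
    community: str   # community name on that instance
    url: str         # link the post submits; used here to spot duplicates
    title: str

def aggregate_metacommunity(name, posts):
    """Merge every community called `name` across servers, keeping the
    first post seen for each submitted URL (a crude duplicate filter)."""
    seen_urls = set()
    merged = []
    for post in posts:
        if post.community != name:
            continue
        if post.url in seen_urls:
            continue  # same story already posted on another instance
        seen_urls.add(post.url)
        merged.append(post)
    return merged
```

A real implementation would need something smarter than URL equality (self-text posts have no URL, and the same story gets posted under different links), but it shows the shape of the problem.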

One big problem is scaling; something like this would definitely be non-trivial in terms of load per community. Right now, if one person signs up to one community, they get a lot of posts from one server. Under a metacommunity idea like this, if one person signs up to one community, they get a lot of posts from many, many servers. lemmy.world has 5967 total instances connected to it, and 2155 instances running lemmy, lotide, kbin, mbin, or friendica that could contain similar types of community. That's a lot of communities to follow for the equivalent of one single community, especially if some of them have a lot of traffic. You'd have to look at every known server, first to see if it exists and second to see if it has a community appropriate for the metacommunity, and the metacommunity would have to routinely scan for dead hosts to remove and for live hosts where an appropriate community may have been created.

I'm sure there are other solutions, but I'm just thinking of how things work within my current understanding.

Of course, for some people this isn't a problem they want solved, because it isn't a problem in their view (and that's a legitimate view, even if it's not one I'm really amenable to). Some people prefer smaller communities, or want tighter control over their communities. For servers or communities that don't want to be brought into a metacommunity, some sort of opt-out (or opt-in, as the case may be) flag should be designed in; I'm thinking something in the community description like a text flag NOMC or YESMC that server software would be designed to respect.

With respect to moderation, it seems to me you could have a variety of strategies: a default accept-all moderation, where if one instance moderates a post, other instances take the same action; whitelist moderation, where the action propagates only if it comes from an instance or set of moderators on a whitelist; a sort of republican moderation, where other instances follow suit once a certain number of instances have taken the action; and probably an option for individual metacommunities to only accept moderation from the local community the original post came from. I suspect you'd want a choice in the matter per metacommunity instance on a server.
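Those four strategies reduce to a small decision function. Again hypothetical — the strategy names and parameters are mine, not from any fediverse software:

```python
# Illustrative sketch of the moderation strategies described above.
def should_apply(strategy, acting_instances, whitelist=(), quorum=3, origin=None):
    """Decide whether this instance should mirror a moderation action.

    acting_instances: instances that have already taken the action.
    """
    if strategy == "accept-all":
        return len(acting_instances) > 0          # any instance's action propagates
    if strategy == "whitelist":
        return any(i in whitelist for i in acting_instances)
    if strategy == "republican":
        return len(acting_instances) >= quorum    # wait for a minimum number of instances
    if strategy == "origin-only":
        return origin in acting_instances         # only the post's home community counts
    raise ValueError(f"unknown strategy: {strategy}")
```

Per-metacommunity choice would then just be a matter of which strategy (and whitelist/quorum) each local instance configures.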

 

Anyone who knows me knows that I've been using Nextcloud forever, and I fully endorse the idea that anyone doing any level of self-hosting should have their own. It's a self-hosted Swiss army knife, and I personally find it even easier to use than something like SharePoint.

I had a recurring issue where my logs would show "MySQL server has gone away". It generally wasn't doing anything, but occasionally it would cause large file uploads to fail, or other random failures that would stop shortly after.

The only thing I did was go in and double wait_timeout in my /etc/mysql/mariadb.conf.d/50-server.cnf.
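For reference, the fragment would look something like this. The value is illustrative: the post only says the timeout was doubled, and the stock MariaDB default is 28800 seconds (8 hours), so doubling that would give:

```ini
# /etc/mysql/mariadb.conf.d/50-server.cnf -- illustrative fragment
[mysqld]
wait_timeout = 57600  # example: double the 28800-second default
```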

After that, my larger file uploads went through properly.

It might not be the best solution but it did work so I figured I'd share.

 

So both lemmy and lotide were having big problems where they'd get totally overwhelmed, especially once I started federating with huge instances. At first I thought it was because my servers aren't very powerful, but eventually I got the idea that maybe they couldn't keep up with federation data from the big instances.

So I decided to limit the connections per IP address. Long-term testing isn't done yet, but so far both my lemmy and lotide instances aren't getting crushed when they're exposed to the outside world, so I think it's helping.

In /etc/nginx/nginx.conf, under the http section, I added the line "limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;"

Then, in my sites-available folder for the services, I added "limit_conn conn_limit_per_ip 4;" or something similar. Both lemmy and lotide have different sections for ActivityPub and API, so it appears I can limit the connections just to those parts of the site.
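Put together, the two fragments look something like this. Only the limit_conn_zone and limit_conn lines come from the post; the server/location details are placeholders for wherever your ActivityPub and API sections live:

```nginx
# /etc/nginx/nginx.conf -- inside the http { } block:
# 10 MB shared zone tracking connection counts per client IP
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

# sites-available/<service> -- placeholder location block:
location /api {
    limit_conn conn_limit_per_ip 4;   # max 4 simultaneous connections per IP
    proxy_pass http://127.0.0.1:8536; # example backend port
}
```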

It's only been a few days, but whereas before both instances would die randomly pretty quickly once exposed to the outside world, now it appears that they're both stable. Meanwhile, I'm still getting federated posts and comments.
