Shdwdrgn

joined 1 year ago
[–] Shdwdrgn@mander.xyz 1 points 11 months ago

Was it EVER faster, though? My experience with Windows has always been that they release new versions based on upcoming hardware specs, and unless you spend top dollar on the very latest hardware for their next release, you're going to see things moving slower on the new desktop. That's one of the things I've enjoyed about Linux: you can pretty much always upgrade the OS on an older machine without worrying about a performance hit, and sometimes you even get a boost.

[–] Shdwdrgn@mander.xyz 11 points 11 months ago* (last edited 11 months ago) (1 children)

Wait, there's an option to host Overleaf locally? Is there any cost associated with this, or any restrictions on the number of users?

[Edit] Found some more info on this: there's a free community version you can self-host, plus a paid enterprise version that adds features like SSO and support from the company. I'll definitely have to look more into both of these options. Thanks, OP, for making me aware of this!

[–] Shdwdrgn@mander.xyz 1 points 11 months ago

It definitely comes down to the specific product line. My last 6TB drives were my first jump into SAS drives, but that series was terrible and I had a bunch of failures. I really should check Google more often before jumping on what looks like a good deal.

[–] Shdwdrgn@mander.xyz 5 points 11 months ago (2 children)

The key consideration here is how much your time is worth if you ever have to rebuild your collection. I have a ~92TB (8x16TB raidz2) array with about 33TB of downloaded data that has never been backed up as it migrated from my original cluster of 250GB drives through to today. I think part of the trick is to have a spare drive on hand and ready to go, so it can be swapped in as soon as a problem shows up, plus email alerts when a drive goes down so you're aware right away.
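The alerting doesn't need to be fancy, either. Something along these lines run from cron would do it -- this is just a rough sketch of the idea, with placeholder addresses and the assumption of ZFS plus a local mail relay, not my actual setup:

```python
#!/usr/bin/env python3
# Rough sketch: email an alert if any ZFS pool reports a problem.
# Assumes ZFS tools are installed and a local MTA is listening on port 25;
# the addresses below are placeholders.
import subprocess
import smtplib
from email.message import EmailMessage

ALERT_TO = "admin@example.com"        # placeholder recipient
ALERT_FROM = "zfs-monitor@example.com"  # placeholder sender

def pool_health() -> str:
    # `zpool status -x` prints "all pools are healthy" when nothing is wrong,
    # otherwise it lists only the pools that have problems.
    result = subprocess.run(["zpool", "status", "-x"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

def main() -> None:
    report = pool_health()
    if report == "all pools are healthy":
        return  # nothing to report
    msg = EmailMessage()
    msg["Subject"] = "ZFS pool problem detected"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(report)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
```

Run it every few minutes from cron and you'll hear about a degraded pool long before you'd otherwise notice.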

To add a little more perspective on my setup (and nightmare fuel for some people), I have always built my clusters from used drives, generally off eBay, though the current batch comes from Amazon's refurbished shop. These drives all sit externally, cabled to SAS cards. The good news is that this year I finally built a 3D-printed rack to organize the drives, matched to some cheap backplane cards, so there's less chance of power issues. And power is key here: my own experience has shown that if you use a cheap desktop power supply for external drives, you WILL lose data (rough math: a 3.5" drive can pull 20-30W at spin-up, so a couple dozen drives can mean several hundred watts of transient load). I now run a redundant power supply from a server that puts out a lot more than I need, and I haven't lost anything since those original 250GB drives, nor have I had any concerns while rebuilding a failed drive or two. At one point during my last upgrade I had 27 HDDs spun up at once, so I have a lot of confidence in this setup with the now-reduced drive count.

[–] Shdwdrgn@mander.xyz 3 points 11 months ago

Hey, I wanted to say thanks for all the info; I've saved this aside. Something came up that's requiring all my attention, so I only just got around to reading your message, but it looks like my foray into Docker will have to wait a bit longer.

[–] Shdwdrgn@mander.xyz 1 points 11 months ago (2 children)

One thing I'm not following in all the discussions about how self-contained Docker is... nearly all of my images make use of NFS shares and common databases. For example, I have three separate SMTP servers which need to put incoming mail into the proper home folders, but also need database connections to track detected spam and other things. So how would all these processes talk to each other if they're each locked inside their own container?

The other thing I keep coming back to, again using my SMTP servers as an example... it is highly unlikely that anyone else has exactly the same setup that I do, let alone that they've taken the time to build a Docker image for it. So would I essentially have to rebuild the entire system from scratch, then learn how to write a Docker script to launch it, just to get the service back online again?

[–] Shdwdrgn@mander.xyz 2 points 11 months ago (4 children)

Well, congrats, you are the first person who has actually convinced me that it might be worth looking at, even for my small setup. Nobody else has even been able to provide a convincing argument that Docker might improve on my VM setup, and I've been asking about it for a few years now.

[–] Shdwdrgn@mander.xyz 1 points 11 months ago (7 children)

Yeah, I can see the advantage if you're running a huge number of instances. In my case it's all pretty small scale. At work we only have a single server that runs a website and database, so my home setup puts that to shame, and even so I have a limited number of services I'm working with.

[–] Shdwdrgn@mander.xyz 0 points 11 months ago (9 children)

I'm not sure I understand this idea that VMs have a high overhead. I just checked one of my servers: there are nine VMs running everything from chat channels to email to web servers, and the server is 99.1% idle. And this is on a PowerEdge R620 with low-power CPUs; it's not like I'm running something crazy fast or even all that new. Hell, until the beginning of this year I was running all this stuff on PowerEdge 860s, which are nearly 20 years old now.

If I needed to set up the VM again, well, I would just copy the backup as a starting point, or copy one of the mirror servers. Copying a VM doesn't take much; even my bigger storage systems only use an 8GB image, and that takes, what, 30 seconds? For building a new service image, I have a nearly stock install with the basics like LDAP accounts and network shares already set up. Otherwise, once I get a service configured I just let Debian manage the security updates and do a full upgrade as needed. I've never had a reason to try replacing an individual library for anything, and each of my VMs runs a single service (HTTP, SMTP, DNS, etc.), so even if I did try that there wouldn't be any chance of it interfering with anything else.

Honestly, from what you're saying here, it just sounds like Docker is made for people who previously ran everything directly on the main server installation and frequently had upgrades of one service breaking another. I suppose Docker works for those people, but the problems you say it solves are problems I have never run into over the last two decades.

[–] Shdwdrgn@mander.xyz 2 points 11 months ago (11 children)

This is kinda where I'm at as well. I have always run each of my home services in its own VM. There's no fuss to set up a new one, and if I want to move it to a different server I just copy the *.img file over and launch it. Sure, I run a lot of internet services across my various machines, but it all just works, so I don't see the point in converting all the custom configurations over to Docker. It might make sense if I were trying to run all my services directly on the bare metal, but who does that?

[–] Shdwdrgn@mander.xyz 1 points 11 months ago (1 children)

I guess it just annoys me that they built a product on incredibly shady practices and have somehow managed to wedge themselves into the business world under the guise of being "legitimate". Trusting anything on their site, to me, feels as risky as trusting anything you see on Yelp -- sure, a real person might have posted the review, or maybe the business paid its blackmail tax to avoid getting de-listed, but how many better options are never shown because all their positive reviews got deleted?

[–] Shdwdrgn@mander.xyz 5 points 11 months ago (5 children)

Hell, why do this many people use LinkedIn at all? The whole platform was built off scraping Windows users' address books without permission, sending unsolicited emails to all of those contacts in that user's name, and pretending they were such a great platform that of course your friends were inviting you to join. And I'm pretty sure they still use this practice today, because I continue to get emails from people who have no idea why their name is attached to the spam I receive.
