[–] mlaga97@lemmy.mlaga97.space 3 points 5 months ago

I ran RAID-Z2 across 4x14TB and a (4+8)TB LVM LV for close to a year before finally swapping the (4+8)TB LV for a 5th 14TB drive via zpool replace without issue. I did, however, make sure to use RAID-Z2 rather than Z1 to account for said shenanigans out of an abundance of caution, and I would highly recommend doing the same. That is to say, the extra 2x2TB would be good additional parity, but I would only consider it as additional parity, not the only parity.
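For anyone wanting to do the same swap, the replace itself is a single command; a minimal sketch with hypothetical pool and device names (yours will differ):

```bash
# Swap the (4+8)TB LVM logical volume out for the new 14TB disk.
# The pool stays online and resilvers onto the new device.
zpool replace tank /dev/mapper/vg0-mixed /dev/disk/by-id/ata-NEW_14TB_DRIVE

# Watch resilver progress until it completes.
zpool status -v tank
```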

Based on fairly unscientific testing from before and after, it did not appear to meaningfully affect performance.

[–] mlaga97@lemmy.mlaga97.space 7 points 5 months ago

125W (less than $15/month) or so for:

  • Ryzen 9 3900X
  • 64GB RAM
  • 2x4TB NVMe (ZFS Mirror)
  • 5x14TB HDD (ZFS RAID-Z2)
  • 2.5GbE Network Card
  • 5-port 2.5GbE Network Switch
  • 5-port 1GbE PoE Network Switch w/ one Reolink Camera attached

I generally leave powerManagement.cpuFreqGovernor = "powersave" in my Nix config as well, which saves about 40W ($4/mo or so) for my typical load as best I can tell, and I disable it if I'm doing bulk data processing on a time crunch.
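For reference, that's a one-line NixOS option; a minimal sketch of the relevant bit of configuration.nix (the module boilerplate around it is just illustrative):

```nix
{ config, pkgs, ... }:

{
  # Scale CPU frequency down under light load; worth roughly 40W on this box.
  # Comment out (or set to "performance") before a bulk-processing run,
  # then rebuild to apply.
  powerManagement.cpuFreqGovernor = "powersave";
}
```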

[–] mlaga97@lemmy.mlaga97.space 25 points 5 months ago (2 children)

Realistically, the target audience is organizations: nowadays most business laptops are carried between docking stations, with the occasional meeting or air travel in between, and 13" is an excellent size for those needs.

When hooked to a docking station, the screen size and keyboard are entirely irrelevant, and modern laptop performance is... honestly crazy good.

When in a meeting, it's probably being used either to take notes fullscreen or to show a presentation, so pretty neutral.

Finally, when traveling, you really can feel the difference between a 13" and a 15" when you're running on too short a layover between flights.

[–] mlaga97@lemmy.mlaga97.space 4 points 7 months ago

My partner and I use a git repository on our self-hosted gitea instance for household management.

Issue tracker and kanban boards for task management, wiki for documentation, and some infrastructure components version-controlled in the repo itself. You could almost certainly get away with just the issue tracker.

Home Assistant (also self-hosted) makes it easy to automatically create issues based on schedules and sensor data, like creating a git issue when weather conditions tomorrow may necessitate checking this afternoon that nothing gets left out in the rain.
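For anyone curious what that glue can look like: Home Assistant just needs to hit Gitea's standard issue-creation endpoint (POST /api/v1/repos/{owner}/{repo}/issues). A minimal sketch with hypothetical host, repo, and token names:

```bash
# Hypothetical host/repo/token; the endpoint itself is Gitea's stock issue API.
curl -s -X POST "https://gitea.example.com/api/v1/repos/household/chores/issues" \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title": "Rain expected tomorrow", "body": "Check this afternoon that nothing gets left out."}'
```

In Home Assistant, that can be wired up as a rest_command (or shell_command) triggered by a weather-forecast automation.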

Matrix (also self-hosted) lets Gitea and Home Assistant bully us into remembering to do things we might have forgotten (e.g., send a second notification if the washer finished 15 minutes ago but the dryer never started).

It’s been fantastic being able to create git issues for honey-dos as well as having the automations for creating issues for recurring tasks. “Hey we need to take X to the vet for Y sometime next week” “Oh yeah, can you go ahead and put in a ticket?” And vice versa.

[–] mlaga97@lemmy.mlaga97.space 1 points 7 months ago* (last edited 7 months ago)

> what does industry do when they need to automate provisioning of thousands of devices for POS, retail, barcode scanning, delivery drivers, etc.

MDM doesn't help with the kind of stuff OP is trying to automate, but it does usually cover most business use cases, and if you need more than that, you generally either have a contract for the manufacturer to do it for you or just put what you need into the org-specific superapp you already have to have.

[–] mlaga97@lemmy.mlaga97.space 28 points 7 months ago (2 children)

Oh nice, a nicely-formatted list of reasons I don't switch phones more than once every 5 years: I loathe setting them up to behave exactly the way I want.

[–] mlaga97@lemmy.mlaga97.space 3 points 8 months ago

I've read many discussions over the years about why manufacturers would list such a pessimistic number on their datasheets, and I haven't really come any closer to understanding it. You can trivially prove how pessimistic it is by repeatedly running badblocks on a dozen large (20TB+) enterprise drives: nearly all of them will dutifully accept hundreds of TBs written and read back with no issues, when the quoted URE rate suggests that should produce a dozen UREs on average.

I conjecture, without any specific evidence, that it might be an accurate value for some inherent physical property of the platters themselves, one that manufacturers can and do measure and that hasn't improved considerably, but which has long been abstracted away by increased redundancy and error correction at the sector level that yield much more reliable effective performance. The raw quantity may simply still be quoted for some internal historical or comparative reason rather than being replaced by the effective value that matters more directly to users.

[–] mlaga97@lemmy.mlaga97.space 2 points 8 months ago (2 children)

If the actual error rate were anywhere near that high, modern enterprise hard drives wouldn't be usable as a storage medium at all.

A 65% filled array of 10x20TB drives would average at least 1 bit failure on every single scrub (which is a full read of all data present in the array), but that doesn't actually happen with any real degree of regularity.
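To spell out the arithmetic (assuming the commonly quoted enterprise spec of 1 URE per 10^15 bits read):

```
10 drives × 20 TB × 65% full        = 130 TB read per scrub
130 TB × 8 bits/byte                ≈ 1.04 × 10^15 bits
1.04 × 10^15 bits × 10^-15 URE/bit  ≈ 1.04 expected UREs per scrub
```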

[–] mlaga97@lemmy.mlaga97.space 11 points 8 months ago (4 children)

I think it's worth pointing out that this article is 11 years old, so that 1TB rule-of-thumb probably needs to be adjusted for modern disks.

If you have 2 full backups of the array (18TB drives being more than sufficient), especially if one of those is offsite, then I'd say you're really not at a high enough risk of losing data during a rebuild to justify proactively rebuilding the array until you have 2 or more disks to add.

[–] mlaga97@lemmy.mlaga97.space 2 points 8 months ago

Still a few Ubuntu Server stragglers here and there, but it works quite well as long as you keep your base config fairly lean and push the complexity into the containers.

Documentation tends to be either good or nonexistent depending on what you're doing; anything beyond standard configuration can usually be pieced together from the ArchWiki and the systemd docs.

All in all, powerful and repeatable (and a lot less tedious than Ansible, etc), but perhaps not super beginner-friendly once you start getting into the weeds. Ubuntu Server is just better documented and supported if you need something super quick and easy.

[–] mlaga97@lemmy.mlaga97.space 3 points 8 months ago* (last edited 8 months ago)

> NextCloud main use is file synchronization

Is it? Interesting. I don't think I've ever even considered using it for that purpose.

I mostly use it as a web-accessible interface for a variety of unified productivity and organization software (file upload/download, office suite, notes, calendar, etc.), with the ability to do stuff like create password-protected shared folders of pictures/documents for friends and family who don't have accounts, so they can upload/download/organize/edit files with me and each other from a browser without having to install additional software on client devices.

[–] mlaga97@lemmy.mlaga97.space 9 points 8 months ago* (last edited 8 months ago) (2 children)

I'm not sure I get why it's mentioned so often, since it seems to serve a very different purpose than NextCloud does; they're not even in similar niches.

Nothing against it, of course, it just doesn't feel like an 'alternative' to NC.
