Anyone running ZFS? (lemmy.fwgx.uk)
submitted 1 month ago* (last edited 1 month ago) by blackstrat@lemmy.fwgx.uk to c/selfhosted@lemmy.world
 

At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling six 2TB disks. My VMs run on NVMe drives, and the NAS VM handles data storage, with the RAIDed volume passed straight through to it in Proxmox as one large ext4 partition. It holds mostly photos, personal docs and a few films, and only I really use it. My desktop and laptop mount it over NFS, and restic backups run weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I'll need to flash the RAID card to IT mode so it acts as a plain HBA, or get a separate one. I'm guessing it's best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?
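(For reference, the disk-passthrough option in that last question is typically done by mapping each drive into the VM by ID. A minimal sketch, assuming Proxmox's `qm` CLI, a placeholder VM ID of 100, and placeholder drive serials:)

```python
import subprocess

# Sketch of the "pass individual disks to the VM" option using Proxmox's
# qm CLI. The VM ID (100) and by-id paths are placeholders. Addressing
# disks via /dev/disk/by-id means a reshuffled /dev/sdX order after a
# reboot can't silently point the VM at the wrong drive.
VMID = "100"
DISKS = [
    "/dev/disk/by-id/ata-EXAMPLE_SERIAL_1",  # placeholder
    "/dev/disk/by-id/ata-EXAMPLE_SERIAL_2",  # placeholder
]

for index, disk in enumerate(DISKS, start=1):
    # Attach each disk as a SCSI device on the VM (scsi1, scsi2, ...).
    subprocess.run(["qm", "set", VMID, f"-scsi{index}", disk], check=True)
```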

[–] avidamoeba@lemmy.ca 4 points 1 month ago* (last edited 1 month ago) (1 children)

> And you probably know that sync writes will shred NAND while async writes are not that bad.

This doesn't make sense. SSD controllers have been able to keep write amplification in check under any workload since the SandForce 2 generation.

Also, most of the argument around speed doesn't make sense either, beyond DC-grade SSDs being expected to be faster under sustained random load. But we know how fast consumer SSDs are. We know their sequential and random performance, including sustained performance under constant load. There are plenty of benchmarks out there for most popular models, and on average they'll perform about as well as those benchmarks show. If that's enough for the person's use case, it's enough. And they'll handle as many TB of writes as advertised, and total writes can be monitored through SMART (see the sketch below).
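A minimal sketch of that SMART check, assuming `smartctl` (smartmontools) is installed and a SATA drive that reports the common `Total_LBAs_Written` attribute; both the attribute name and the logical block size vary by vendor:

```python
import subprocess

# Sketch: estimate terabytes written from SMART data. Assumes smartctl
# and a drive exposing Total_LBAs_Written; adjust the device path,
# attribute name, and LBA size for your hardware.
def tb_written(device="/dev/sda", lba_size=512):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            lbas = int(line.split()[-1])  # raw value is the last column
            return lbas * lba_size / 1e12
    return None  # attribute not reported by this drive

tb = tb_written()
print(f"~{tb:.2f} TB written" if tb is not None else "attribute not found")
```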

And why would ZFS be any different from any other similar FS/storage system with regard to random writes? I'm not aware of ZFS generating more IO than needed. If that were the case, it would show up as lower performance compared to similar systems, when in fact ZFS is often faster. I think SSD performance characteristics are independent of ZFS.
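That comparison is easy to test directly: run an identical random-write job against a file on each filesystem and compare the reported IOPS. A rough sketch, assuming `fio` is installed and placeholder mount points:

```python
import subprocess

# Sketch: identical 4k random-write job on each filesystem. Mount points
# are placeholders. This uses fio's default buffered IO, since O_DIRECT
# support on ZFS varies by version.
def randwrite(path):
    subprocess.run([
        "fio", "--name=randwrite", f"--filename={path}/fio.test",
        "--rw=randwrite", "--bs=4k", "--size=1G",
        "--runtime=60", "--time_based",
    ], check=True)

for mount in ("/mnt/ext4-test", "/tank/zfs-test"):  # placeholders
    randwrite(mount)
```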

Also, OP is talking about HDDs, so I'm not even sure where the ZFS-on-SSDs discussion is coming from.

[–] minnix@lemux.minnix.dev -1 points 1 month ago (1 children)

There is no way to get acceptable IOPS out of HDDs within Proxmox; your IO delay will be insane. You could at best stripe a ton of HDDs, but even then a single enterprise-grade SSD will smoke it as far as performance goes. Post screenshots of your current Proxmox HDD/SSD disk setup with your ZFS pool, services, and IO delay, and then we can talk. The difference that enterprise hardware gives you is night and day.
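(For anyone following along: the "IO delay" figure Proxmox graphs is CPU iowait. A minimal sketch of measuring it yourself, assuming the standard Linux `/proc/stat` layout:)

```python
import time

# Sketch: sample the aggregate "cpu" line in /proc/stat twice and report
# iowait as a percentage of all CPU time over the interval.
def cpu_times():
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def iowait_percent(interval=5.0):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    delta = [b - a for a, b in zip(before, after)]
    return 100.0 * delta[4] / sum(delta)  # field 5 of the cpu line is iowait

print(f"iowait: {iowait_percent():.1f}%")
```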

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago (1 children)

Are you saying SSDs are faster than HDDs?

[–] minnix@lemux.minnix.dev 0 points 1 month ago

I was asking them to post their setup so I can evaluate their experience with regard to Proxmox and disk usage.