this post was submitted on 03 Oct 2024
47 points (96.1% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Anyone running ZFS? (lemmy.fwgx.uk)
submitted 1 month ago* (last edited 1 month ago) by blackstrat@lemmy.fwgx.uk to c/selfhosted@lemmy.world
 

At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling six 2TB disks. My VMs run on NVMe drives, while the NAS VM handles data storage with the RAIDed volume passed straight through to it in Proxmox, formatted as one large ext4 partition. It's mostly photos, personal docs and a few films, and only I really use it. My desktop and laptop mount it over NFS, and I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I'll need to flash the RAID card to IT mode, or get a proper HBA. I'm guessing it's best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?
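If I go the first route (pool created on the Proxmox host, storage then exposed to the NAS VM), I'm assuming the pool creation itself would look roughly like this; the pool name and device paths below are just placeholders, and this is only a sketch of that option, not a recommendation:

```python
#!/usr/bin/env python3
"""Rough sketch: create a 6-disk RAIDZ2 pool on the Proxmox host.

The pool name ("tank") and device paths are hypothetical placeholders.
Using /dev/disk/by-id/ paths means the pool survives device renumbering.
"""
import subprocess

# Replace with the real by-id paths of the six 2TB drives.
DISKS = [f"/dev/disk/by-id/ata-EXAMPLE-DISK-{n}" for n in range(1, 7)]

def create_pool(pool: str = "tank") -> None:
    # ashift=12 matches 4K-sector drives; lz4 compression is cheap on ZFS.
    subprocess.run(
        ["zpool", "create",
         "-o", "ashift=12",
         "-O", "compression=lz4",
         pool, "raidz2", *DISKS],
        check=True,
    )

if __name__ == "__main__":
    create_pool()
```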

you are viewing a single comment's thread
[–] minnix@lemux.minnix.dev -1 points 1 month ago (4 children)

ZFS is great, but to take advantage of its strengths you need the right drives. Consumer drives get eaten alive, as @scrubbles@poptalk.scrubbles.tech mentioned, and your IO delay will be unbearable. I use Intel enterprise SSDs and have no issues.

[–] RaccoonBall@lemm.ee 8 points 1 month ago* (last edited 1 month ago) (1 children)

Complete nonsense. Enterprise drives are better for reliability if you plan on a ton of writes, but ZFS absolutely does not require them in any way.

Next you'll say it needs ECC RAM

[–] minnix@lemux.minnix.dev -1 points 1 month ago (1 children)
[–] avidamoeba@lemmy.ca 4 points 1 month ago* (last edited 1 month ago) (1 children)

> And you probably know that sync writes will shred NAND while async writes are not that bad.

This doesn't make sense. SSD controllers have been able to handle any write amplification under any load since SandForce 2.

Also, most of the argument around speed doesn't make sense, other than DC-grade SSDs being expected to be faster under sustained random loads. But we know how fast consumer SSDs are. We know their sequential and random performance, including sustained performance under constant load. There are plenty of benchmarks out there for the most popular models, and drives will be about as fast as those benchmarks show on average. If that's enough for the person's use case, it's enough. They'll also handle as many TB of writes as advertised, and the amount written can be monitored through SMART.
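As a rough illustration of the SMART point (assuming smartmontools 7.0+ for JSON output and a SATA SSD that exposes the common Total_LBAs_Written attribute, ID 241; vendors vary, so treat this as a sketch rather than something universal):

```python
#!/usr/bin/env python3
"""Sketch: estimate total TB written to a SATA SSD from SMART data.

Assumes smartmontools >= 7.0 (JSON output) and a drive that reports the
common Total_LBAs_Written attribute (ID 241); attribute names and LBA
size vary by vendor, so this is illustrative, not universal.
"""
import json
import subprocess
import sys

def tb_written(device: str, lba_size: int = 512) -> float:
    # Ask smartctl for the SMART attribute table as JSON.
    result = subprocess.run(
        ["smartctl", "-j", "-A", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(result.stdout)
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr.get("id") == 241:  # Total_LBAs_Written on many consumer SSDs
            return attr["raw"]["value"] * lba_size / 1e12
    raise RuntimeError(f"{device}: no Total_LBAs_Written attribute found")

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    print(f"{dev}: ~{tb_written(dev):.2f} TB written")
```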

And why would ZFS be any different from any other similar FS/storage system with regard to random writes? I'm not aware of ZFS generating more IO than needed. If that were the case, it would show up as lower performance compared to similar systems, when in fact ZFS is often faster. I think SSD performance characteristics are largely independent of ZFS.

Also OP is talking about HDDs, so not even sure where the ZFS on SSDs discussion is coming from.

[–] minnix@lemux.minnix.dev -1 points 1 month ago (1 children)

There is no way to get acceptable IOPS out of HDDs within Proxmox. Your IO delay will be insane. You could at best stripe a ton of HDDs, but even then a single enterprise-grade SSD will smoke it as far as performance goes. Post screenshots of your current Proxmox HDD/SSD disk setup with your ZFS pool, services, and IO delay and then we can talk. The difference that enterprise gives you is night and day.

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago (1 children)

Are you saying SSDs are faster than HDDs?

[–] minnix@lemux.minnix.dev 0 points 1 month ago

I was asking them to post their setup so I can evaluate their experience with regards to Proxmox and disk usage.

[–] avidamoeba@lemmy.ca 5 points 1 month ago* (last edited 1 month ago) (1 children)

Not sure where you're getting that. I've been running ZFS for 5 years now on bottom-of-the-barrel consumer drives - shucked drives and old drives. I've used 7 shucked drives total; one died during a physical move and the remaining 6 are still in use in my primary server. Oh, and the speed is superb. The current RAIDZ2, composed of the 6 shucked drives and 2 IronWolfs, does 1.3GB/s sequential reads and 4K write IOPS in the thousands. And all of this is happening over USB, in 2x 4-bay USB DAS enclosures.
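Numbers like those are easy to sanity-check with fio; a rough sketch, where the target directory and job sizes are placeholders and should point at a scratch dataset rather than live data:

```python
#!/usr/bin/env python3
"""Sketch: measure sequential-read and 4K random-write numbers with fio.

The target directory and job sizes are placeholders; point this at a
scratch dataset, not at live data.
"""
import subprocess

TARGET = "/tank/bench"  # hypothetical scratch dataset mountpoint

def run_fio(name: str, extra: list[str]) -> None:
    subprocess.run(
        ["fio", f"--name={name}", f"--directory={TARGET}",
         "--size=4G", "--group_reporting"] + extra,
        check=True,
    )

# Sequential read throughput with 1M blocks.
run_fio("seqread", ["--rw=read", "--bs=1M"])
# 4K random-write IOPS with a modest queue depth.
run_fio("randwrite4k", ["--rw=randwrite", "--bs=4k",
                        "--iodepth=16", "--ioengine=libaio"])
```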

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago (1 children)

Could this be because it's a RAIDZ2/3? Those will be writing parity as well as data, plus the usual ZFS checksums. I'm running RAID5 at the moment on my HBA card, and my limit for file transfers is definitely the 1Gbit network (only around 110-120MB/s in practice), not the disks. And it's only me that uses this thing; it sits totally idle 90+% of the time.

[–] minnix@lemux.minnix.dev -1 points 1 month ago (1 children)

For ZFS, what you want is power-loss protection (PLP) and a high DWPD/TBW rating, which is what enterprise SSDs provide. Everything you've mentioned so far points to you not needing ZFS, so there's nothing to worry about.

[–] blackstrat@lemmy.fwgx.uk 2 points 1 month ago (1 children)

I won't be running ZFS on any solid-state media; I'm using spinning rust disks meant for NAS use.

My motivation for moving to ZFS is bitrot protection, and also this video:

https://www.youtube.com/watch?v=l55GfAwa8RI
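On the bitrot side, my understanding is that ZFS only detects and repairs corruption when blocks are read or scrubbed, so the usual habit is a periodic scrub, e.g. monthly via cron or a systemd timer. A minimal sketch, with a placeholder pool name:

```python
#!/usr/bin/env python3
"""Minimal sketch: start a scrub and print pool status.

ZFS detects/repairs bitrot only when blocks are read or scrubbed, so a
periodic scrub is the usual habit. The pool name "tank" is a placeholder,
and 'zpool scrub' returns immediately while the scrub runs in the background.
"""
import subprocess

def scrub_and_report(pool: str = "tank") -> None:
    # Kick off the scrub (asynchronous; progress shows up in 'zpool status').
    subprocess.run(["zpool", "scrub", pool], check=True)
    # Show per-vdev READ/WRITE/CKSUM error counts and scrub progress.
    status = subprocess.run(
        ["zpool", "status", "-v", pool],
        capture_output=True, text=True, check=True,
    ).stdout
    print(status)

if __name__ == "__main__":
    scrub_and_report()
```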

[–] minnix@lemux.minnix.dev -2 points 1 month ago (1 children)

Looking back at your original post, why are you using Proxmox to begin with for NAS storage??

[–] blackstrat@lemmy.fwgx.uk 2 points 1 month ago

The server runs Proxmox and one of the VMs runs as a fileserver. Other VMs and containers do other things.

[–] scrubbles@poptalk.scrubbles.tech 1 points 1 month ago (2 children)

No idea why you're getting downvoted; it's absolutely correct, and it's called out in the official Proxmox docs and forums. Proxmox writes its logs and journals directly to the ZFS array constantly, to the point of drive-destroying amounts of writes.

[–] blackstrat@lemmy.fwgx.uk 3 points 1 month ago (2 children)

I'm not intending to run Proxmox on it. I have that running on an SSD, or maybe it's an NVMe, I forget. This will just be for data storage, mainly photos, which one VM will manage and share out over NFS to other machines.

Ah, I'll clarify that I set mine up next to the system drive in Proxmox, through the Proxmox ZFS helper. There was probably something in there that configured things in a weird way.

[–] minnix@lemux.minnix.dev -2 points 1 month ago

Yes, I'm specifically referring to the ZFS pool containing your VMs/LXCs. Use enterprise SSDs for that; you can get them on eBay. Just search the Proxmox forums for enterprise vs consumer SSD to see the problem with consumer hardware for ZFS. For Proxmox itself you want something like an NVMe with DRAM, deliberately underprovisioned so the drive controller has an unused space buffer to use for wear leveling.

[–] ShortN0te@lemmy.ml 3 points 1 month ago

What exactly are you referring to? The ZIL? ARC? L2ARC? And which docs? I haven't found that called out in the official docs.