Anyone running ZFS? (lemmy.fwgx.uk)
submitted 1 month ago* (last edited 1 month ago) by blackstrat@lemmy.fwgx.uk to c/selfhosted@lemmy.world
 

At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling six 2 TB disks. My VMs run on NVMe drives, with the NAS VM handling data storage via the RAIDed volume passed directly through to it in Proxmox. I am running it as one large ext4 partition. Mostly photos, personal docs and a few films. Only I really use it. My desktop and laptop mount it over NFS. I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I'll need to flash the HBA to IT mode, or get another one. I'm guessing it's best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?
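For context, the two options I'm weighing look roughly like this on the command line (pool name, disk IDs, VM ID and PCI address below are just placeholders):

    # Option A: build the pool on the Proxmox host and give the NAS VM a virtual disk on it
    zpool create -o ashift=12 tank raidz2 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
    # Option B: pass the whole HBA through and build the pool inside the NAS VM instead
    qm set 100 --hostpci0 0000:01:00.0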

[–] paperd@lemmy.zip 25 points 1 month ago (8 children)

If you want multiple VMs to use the storage on the ZFS pool, it's better to create it in Proxmox than to pass raw disks through to the VM.

ZFS is awesome, I wouldn't use anything else now.

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago

What I have now is one VM that has the array volume passed through and the VM exports certain folders for various purposes to other VMs. So for example, my application server VM has read access to the music folder so I can run Emby. Similar thing for photos and shares out to my other PCs etc. This way I can centrally manage permissions, users etc from that one file server VM. I don't fancy managing all that in Proxmox itself. So maybe I just create the zpool in Proxmox, pass that through to the file server VM and keep the management centralised there.
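Roughly the kind of read-only export I mean on the file server VM (path and subnet are placeholders):

    # /etc/exports on the file server VM
    /srv/media/music 192.168.1.0/24(ro,sync,no_subtree_check)
    # reload exports after editing
    exportfs -ra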

[–] scrubbles@poptalk.scrubbles.tech 6 points 1 month ago (3 children)

I did, on Proxmox. One thing I didn't know about ZFS: it does a lot of small random writes, I believe from logging and journaling. I killed 6 SSDs in 6 months. It's a great system, but consumer SSDs can't handle it.

[–] blackstrat@lemmy.fwgx.uk 10 points 1 month ago

Did you have atime on?
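If so, turning it off (or using relatime) avoids a metadata write for every read; a minimal example, pool name made up:

    zfs set atime=off tank    # or: zfs set relatime=on tank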

[–] ShortN0te@lemmy.ml 10 points 1 month ago

I've been using a consumer SSD for caching on ZFS for over 2 years now and haven't had any issues with it. I have a 54 TB pool with tons of reads and writes and no problems.

SMART reports 14% used.

[–] avidamoeba@lemmy.ca 4 points 1 month ago* (last edited 1 month ago)

That doesn't sound right. Also, random writes don't kill SSDs; total writes do, and you can see how much has been written to an SSD in its SMART values. I've used SSDs as swap for years without any of them failing. Heavily used swap, for running VMs and software builds. Their total-bytes-written counters increased steadily but never reached the limit, and the drives haven't died despite the sustained random-write load. One was an Intel MacBook onboard SSD. Another was a random Toshiba OEM NVMe. Another was a Samsung OEM NVMe.
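For what it's worth, checking those counters is just smartctl; the attribute names vary by vendor, and the device paths here are placeholders:

    smartctl -A /dev/sda      # SATA SSDs: look for Total_LBAs_Written / Wear_Leveling_Count
    smartctl -a /dev/nvme0    # NVMe: look for "Data Units Written" and "Percentage Used"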

[–] NeoNachtwaechter@lemmy.world 5 points 1 month ago* (last edited 1 month ago) (1 children)

better to pass the individual disks through to the VM and manage the zpool from there?

That's what I do.

I like it better this way because there are fewer dependencies.

Proxmox boots from its own SSD, and the VM that provides the NAS lives there too.

The zpool (consisting of 5 good old hard disks) can easily be plugged in somewhere else if needed, and it carries the data of the NAS, but nothing else. I can rebuild the Proxmox base or reinstall that VM; they don't affect each other.
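Moving the pool is basically just an export and an import (pool name is a placeholder):

    zpool export tank    # before pulling the disks
    zpool import tank    # on the new machine; plain 'zpool import' lists whatever pools it finds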

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago

Good point. Having a small VM that just needs the HBA passed through sounds like the best idea so far. More portable and fewer dependencies.

[–] BlueEther@no.lastname.nz 4 points 1 month ago (1 children)

I run Proxmox and a TrueNAS VM.

  • TrueNAS is on a virtual disk on an NVMe drive with all the other VMs/LXCs
  • I pass the HBA through to TrueNAS with PCI passthrough: a 6-disk RAIDZ2 (rough commands below). This is 'vault' and has all my backups of home dirs, photos, etc.
  • I pass through two HDDs as raw disks for bulk storage (of Linux ISOs): a 2-disk mirrored ZFS pool

Seems to work well
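For reference, the Proxmox side of that setup is roughly the following (VM ID, PCI address and disk ID are placeholders):

    qm set 101 --hostpci0 0000:02:00.0                     # whole HBA to the TrueNAS VM
    qm set 101 --scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK    # raw disk passthrough for the bulk mirror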

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago (1 children)

I'm starting to think this is the way to do it, because it largely removes the dependency on Proxmox.

[–] minnix@lemux.minnix.dev 1 points 1 month ago (1 children)

Yes, you don't need Proxmox for what you're doing.

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago

I was thinking Proxmox would add a layer between the raw disks and the VM that might interfere with ZFS, in a similar way to how a non-IT-mode HBA does. From what I understand now, the passthrough should be fine.

[–] Mio@feddit.nu 2 points 1 month ago (1 children)

I am looking more into Btrfs for backup because I run Linux, not BSD. ZFS requires more RAM, and I only have one disk. I want to benefit from snapshots, compression and deduplication.
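For a single disk used that way, it's something like this (device, mount point and subvolume names are made up):

    mkfs.btrfs /dev/sdb
    mount -o compress=zstd /dev/sdb /mnt/backup
    btrfs subvolume create /mnt/backup/data
    btrfs subvolume snapshot -r /mnt/backup/data /mnt/backup/data-$(date +%F)
    # deduplication is offline, e.g. with a tool like duperemove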

[–] blackstrat@lemmy.fwgx.uk 0 points 1 month ago (1 children)

I used btrfs once. Never again!

[–] Mio@feddit.nu 1 points 1 month ago (1 children)
[–] blackstrat@lemmy.fwgx.uk 4 points 1 month ago (2 children)

It stole all my data. It's a bit of a clusterfuck of a file system, especially for one so old. This article gives a good overview: https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/ It managed to get into a state where it wouldn't even let me mount it read-only. I even resorted to running commands whose documentation just said "only run this if you know what you're doing" but gave no guidance for actually understanding them - basically commands meant for the developers and no one else. It didn't work anyway. Every other system that was using the same disks but with ext4 filesystems came back; I was able to fsck them and carry on. I think they're all still running without issue 6 years later.

For such an old file system, it has a lot of braindead design choices and a huge amount of unreliability.

[–] Mio@feddit.nu 1 points 1 month ago* (last edited 1 month ago) (1 children)

Data loss is never fun. File systems in general need a long time to iron out all the bugs. I hope it is in a better state today. I remember when ext4 was new and it crashed on a laptop. Ubuntu adopted it too early, or maybe I wasn't on LTS.

But as always, make sure to have a proper backup in a different physical location.

[–] zingo@sh.itjust.works 1 points 1 month ago (1 children)

Found a Swede in this joint! Cheers.

[–] Mio@feddit.nu 1 points 1 month ago* (last edited 1 month ago) (1 children)

You will find many more at feddit.nu

[–] zingo@sh.itjust.works 1 points 2 weeks ago

Yes I'm sure.

Not really searching for 'em though. :)

[–] snugglebutt@lemmy.blahaj.zone 1 points 1 month ago

'short for "B-Tree File System"'. Maybe I should stop reading it as butterfucks.

[–] avidamoeba@lemmy.ca 2 points 1 month ago* (last edited 1 month ago) (1 children)

Yes we run ZFS. I wouldn't use anything else. It's truly incredible. The only comparable choice is LVMRAID + Btrfs and it still isn't really comparable in ease of use.

[–] Chewy7324@discuss.tchncs.de 2 points 1 month ago (1 children)

Why LVM + BTRFS instead of only using btrfs? Unless you need RAID 5/6, which doesn't work well on btrfs.

[–] avidamoeba@lemmy.ca 2 points 1 month ago

Unless you need RAID 5/6, which doesn’t work well on btrfs

Yes. Because they're already using some sort of parity RAID, I assume they'd want parity RAID in ZFS/Btrfs too, and as you said, that's not an option for Btrfs. So LVMRAID + Btrfs is the alternative. LVMRAID because it's simpler to use than mdraid + LVM, and the implementation is still mdraid under the covers.
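In practice that combo is something like the following (VG name, size and stripe count are made up):

    # parity RAID handled by LVM (mdraid under the hood), Btrfs on top
    lvcreate --type raid5 -i 2 -L 2T -n data vg0
    mkfs.btrfs /dev/vg0/data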

[–] walden@sub.wetshaving.social 2 points 1 month ago* (last edited 1 month ago)

I use ZFS with Proxmox. I have it as a bind mount to Turnkey Fileserver (a default LXC template).

I access everything through NFS (via Turnkey Fileserver). Even other VMs just get the NFS share added to their fstab file. File transfers happen extremely fast VM to VM, even though it's "network" storage.

This gives me the benefits of ZFS, and NFS handles the "what-ifs", like what happens if two VMs access the same file at the same time. I don't know exactly what NFS does in that case, but I haven't run into any problems in the past 5+ years.

Another thing that comes to mind: you should make Turnkey Fileserver a privileged container, so that file ownership is handled through the default user (1000 if I remember correctly). Unprivileged containers use wonky UIDs, which requires some magic config you can find in the docs. It works either way, but I chose the privileged route. Others will have different opinions.
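The bind mount and fstab parts are just something like this (CT ID, hostname and paths are placeholders):

    # on the Proxmox host: bind-mount a ZFS dataset into the Turnkey Fileserver container
    pct set 105 -mp0 /tank/share,mp=/srv/share
    # in the other VMs' /etc/fstab: mount the NFS export
    fileserver.lan:/srv/share  /mnt/share  nfs  defaults  0  0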

[–] ikidd@lemmy.world 2 points 1 month ago

Most NAS VMs want you to pass them the raw devices so they can manage ZFS themselves. Every other VM runs on ZFS storage that Proxmox uses and manages, and Proxmox handles the datasets for backups, snapshots, etc.

It is definitely the way to go. The ability to snapshot a VM or CT before updates alone is worth it.
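Taking that snapshot is a one-liner, e.g. (VM/CT IDs and snapshot name are placeholders):

    qm snapshot 100 pre-update     # snapshot a VM
    pct snapshot 105 pre-update    # snapshot a container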

[–] corsicanguppy@lemmy.ca 2 points 1 month ago

I'm running ZFS at two jobs and my homelab.

Terabytes and terabytes. Usually presented to the hypervisor as a LUN and managed on the VM itself.

I don't run Proxmox, though. Some LDoms, some ESX, soon oVirt.

[–] TheHolm@aussie.zone 1 points 1 month ago (1 children)

Both work. Just don't forget to assign fake serial numbers if you are passing individual disks. IMHO passing disks will be more performant, or maybe just pass the HBA controller through if the other disks are on a different controller.
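Assigning the serial happens right in the passthrough command, e.g. (VM ID, disk ID and serial are placeholders):

    qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK,serial=FAKESERIAL1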

[–] blackstrat@lemmy.fwgx.uk 1 points 1 month ago (1 children)
[–] TheHolm@aussie.zone 1 points 1 month ago

To stop guessing which HDD to replace when one fails. The VM can't see the actual HDDs, as SMART data isn't forwarded.

[–] possiblylinux127@lemmy.zip 1 points 1 month ago

I use ZFS, but you need to be very aware of its problems.

Learn zpool
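At a minimum, a few commands worth knowing (pool name is a placeholder):

    zpool status -v    # pool health and which disk is faulted
    zpool scrub tank   # start a scrub; worth scheduling regularly
    zpool list         # capacity and fragmentation overview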
