this post was submitted on 14 Dec 2023
25 points (93.1% liked)

Selfhosted

EDIT: SOLUTION:

Nevermind, I am an idiot. As @ClickyMcTicker pointed out, it's the client side that is causing the trouble. His comment got me thinking, so I checked my testing procedure again. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about 10 times. Everything is fine now. Maybe this can help some other dumbass like me in the future. Thanks everyone!
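If anyone wants to rule out the same mistake, a quick read test against the source drive on the client shows whether it can even feed the network. A rough sketch, assuming Linux and fio on the client (the file path is just a placeholder):

# Sequential read of a large source file on the client machine.
# If this comes in below your network speed, the source disk is the bottleneck, not the NAS.
fio --name=src_read --ioengine=libaio --direct=1 --rw=read --bs=1M --runtime=30 --time_based --filename=/path/to/a/large/source/file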

Hello there.

I'm trying to set up a NAS on Proxmox. For storage, I'm using a single 2TB Samsung Evo 870 (backups will be done anyway, no need for RAID). To do this, I set up a Debian 12 container, installed Cockpit and the tools needed to share via SMB. I set everything up and transferred some files: about 150 MB/s with huge fluctuations. Not great, not terrible. Iperf reaches around 2.25 Gbit/s, so something is off. Let's do some testing. I started with the filesystem. This whole setup is for testing anyway.

  1. Storage via creating a directory with EXT4, then adding a mount point to the container. This is what gave me the speeds mentioned above. Okay, not good. --> 150 MB/s, speed fluctuates
  2. Let's do ZFS, which I want to use anyway. I created a ZFS pool with ashift=12, atime=off, compression=lz4, xattr=sa and a 1MB record size (see the sketch after this list). I did "some" research and this is what I came up with, please correct me. Mount to container, and go. --> 170 MB/s, stable speed
  3. Tried OpenMediaVault and used EXT4 with ZFS as the base for the VM drive. --> around 200 MB/s
  4. LVM-Thin using the Proxmox GUI, then mount to container. --> 270 MB/s, which is pretty much what I'm reaching with Iperf.
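In case it helps anyone reproduce step 2: those ZFS settings boil down to something like the following (pool name and device path are placeholders, not my exact commands):

# ashift is set at pool creation; the -O flags set filesystem properties on the root dataset.
zpool create -o ashift=12 -O atime=off -O compression=lz4 -O xattr=sa -O recordsize=1M tank /dev/disk/by-id/<your-ssd>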

So where is my mistake when using ZFS? Disable compression? A different record size? Any help would be appreciated.

14 comments
[–] scrubbles@poptalk.scrubbles.tech 11 points 11 months ago (1 children)

I had to learn the hard way that SMB is single threaded, meaning that you're probably butting up against one core of your system. I bet if you look at who is sending the data you'll see one core is pinned at 100%
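If you want to check that while a transfer is running, something like this works (assuming the sysstat tools are installed on the host; smbd is the usual Samba process name):

# Per-core utilization, updated every second -- look for a single core stuck near 100%.
mpstat -P ALL 1
# Per-thread CPU usage of the Samba processes.
pidstat -t -C smbd 1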

[–] Pete90@feddit.de 3 points 11 months ago

I don't think it's the CPU as I am able to reach max speed, just not using ZFS...

[–] MangoPenguin@lemmy.blahaj.zone 5 points 11 months ago (1 children)

Have you benchmarked the disk locally, directly on the Proxmox host? Need to figure out if this is an IO limitation, a CPU limitation, or something else.

[–] Pete90@feddit.de 2 points 11 months ago* (last edited 11 months ago) (1 children)

Good point. I used fio with different block sizes:

fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/sda

4K = IOPS=41.7k, BW=163MiB/s (171MB/s)
8K = IOPS=31.1k, BW=243MiB/s (254MB/s)
IOPS=13.2k, BW=411MiB/s (431MB/s)
512K = IOPS=809, BW=405MiB/s (424MB/s)
1M = IOPS=454, BW=455MiB/s (477MB/s)

I'm gonna be honest though, I have no idea what to make of these values. Seemingly, the drive is capable of maxing out my network. The CPU shouldn't be the problem, it's an i7-10700.
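One thing worth adding: that fio run is a sequential read straight off the raw device, while the SMB transfers are writes. A write test against a file on the actual ZFS/LVM mount might be closer to what the share sees. A sketch, with a placeholder path (don't point --filename at the raw device for a write test):

# Sequential 1M writes to a scratch file on the mounted filesystem.
# Note: --direct may be silently ignored on ZFS datasets.
fio --name=seq_write --ioengine=libaio --direct=1 --rw=write --bs=1M --size=4G --runtime=60 --time_based --filename=/mnt/yourpool/fio.testfile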

[–] MangoPenguin@lemmy.blahaj.zone 1 points 11 months ago (1 children)

Basically you're getting 477MB/s for a sequential read, which is spot on for a SATA SSD.

What size are the files you were transferring when you only got 150Mbps? Also did you mean Mb/s or MB/s? There's an 8x difference between the two.

[–] Pete90@feddit.de 1 points 11 months ago (2 children)

I meant megabytes (I hope that's correct, I always mix them up). I transferred large video files, both when the file system was ZFS and when it was LVM, yet got different transfer speeds. The files were between 500 MB and 1.5 GB in size.

[–] NeoNachtwaechter@lemmy.world 1 points 11 months ago* (last edited 11 months ago) (1 children)

ZFS compression is costing some CPU power for sure. How many cores/threads does your CPU have?

And if it is mostly video files: they are already compressed heavily, so you don't gain anything with another layer of compression.

[–] Pete90@feddit.de 1 points 11 months ago

It's videos, pictures, music and other data as well. I'll try playing around with compression today, see if disabling it helps at all. The CPU has 8C/16T and the container 2C/4T.
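Before turning it off, it's worth checking whether compression is actually achieving anything on the dataset. Assuming a dataset name like tank/share (placeholder):

# A compressratio close to 1.00x means lz4 isn't gaining much on this data.
zfs get compressratio tank/share
# Disable compression for a test run; only data written afterwards is affected.
zfs set compression=off tank/share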

[–] ClickyMcTicker@hachyderm.io 0 points 11 months ago (2 children)

@Pete90 @MangoPenguin Bytes (B) are used for storage, bits (b) are used for network. 1B=8b.
2.5Gbps equals 312.5MBps.
With that in mind, there are a lot of moving parts to diagnose, assuming you want to reach that speed for a transfer. Can the storage of both machines reach that speed? I believe I saw the NAS’s disk tested and clocked at 470ish MBps, but can the client side keep up? I saw the iPerf test, but what was the exact command used? Did you multithread it?
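For reference, a multithreaded iperf3 run looks roughly like this (server address is a placeholder):

# Four parallel streams; add -R to test the reverse direction (server sends).
iperf3 -c xxx.xxx.xxx.xxx -P 4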

[–] Pete90@feddit.de 1 points 11 months ago* (last edited 11 months ago)

Nevermind, I am an idiot. Your comment got me thinking, so I checked my testing procedure again. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about 10 times. Everything is fine now. THANKS!

[–] Pete90@feddit.de 1 points 11 months ago

Both machines are easily capable of reaching around 2.2Gbps. I can't reach the full 2.5Gbps even with Iperf. I tried some tuning but that didn't help, so it's fine for now. I used iperf3 -c xxx.xxx.xxx.xxx, nothing else.

The slowdown MUST be related to ZFS, since LVM as a storage base can reach the "full" 2.2Gbps when used as an SMB share.

[–] NeoNachtwaechter@lemmy.world 1 points 11 months ago (1 children)

For storage, I'm using a single 2TB Samsung Evo 870 (backups will be done anyway, no need for RAID). To do this, I set up a Debian 12 container, installed Cockpit and the tools needed to share via SMB.

I wonder who owns the storage hard disk: the Proxmox host or the Debian VM?

Maybe that's worth another try in your test setup: pass this physical device to the Debian VM, in order to eliminate possible losses from the virtualization.

https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
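The linked wiki page boils down to something like this (VM ID and disk ID are placeholders; find yours with ls -l /dev/disk/by-id/):

# Attach the whole physical disk to VM 100 as a SCSI device.
qm set 100 -scsi1 /dev/disk/by-id/ata-Samsung_SSD_870_EVO_2TB_<serial>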

[–] Pete90@feddit.de 1 points 11 months ago

The disk is owned by the PVE host and then given to the container (not a VM) as a mount point. I could use PCIe passthrough, sure, but using a container seems to be the more efficient way.
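For anyone following along, that container bind mount is roughly this (container ID and paths are placeholders):

# Bind-mount a host directory into container 100 as mount point 0.
pct set 100 -mp0 /mnt/nas,mp=/mnt/nas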

[–] Decronym@lemmy.decronym.xyz 1 points 11 months ago* (last edited 11 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

LVM: (Linux) Logical Volume Manager for filesystem mapping
NAS: Network-Attached Storage
PCIe: Peripheral Component Interconnect Express
RAID: Redundant Array of Independent Disks for mass storage
SATA: Serial AT Attachment interface for mass storage
SSD: Solid State Drive mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity

7 acronyms in this thread; the most compressed thread commented on today has 13 acronyms.

[Thread #355 for this sub, first seen 14th Dec 2023, 21:45]