this post was submitted on 19 Mar 2024
12 points (92.9% liked)

homelab


The majority of my homelab consists of two servers: a Proxmox hypervisor and a TrueNAS file server. The bulk of my LAN traffic is between these two servers. At the moment, both servers are on my "main" VLAN. I have separate VLANs for guests and IoT devices, but everything else lives on VLAN2.

I have been considering creating another VLAN for storage, but I'm debating whether there is any benefit to it. My NAS still needs to be accessible to non-VLAN-aware devices (my desktop PC, for instance), so from a security standpoint, there's not much benefit; it wouldn't be isolated. Both servers have a 10Gb DAC back to the switch, so bandwidth isn't really a factor; even if it were, my switch is still only going to switch packets between the two servers; it's not like it's flooding the rest of my network.

Having a VLAN for storage seems like it's the "best practice," but since both servers still need to be accessible outside the VLAN, the only benefit I can see is limiting broadcast traffic, and as far as I know (correct me if I'm wrong), SMB/NFS/iSCSI are all unicast.
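
To double-check that, here's a minimal sketch (assuming Linux, psutil, and that the storage traffic rides the protocols' well-known ports) that lists the established storage sessions; they all show up as point-to-point connections between two addresses, not broadcast traffic:

```python
# Minimal sketch (assumes Linux and psutil, and that the storage traffic
# uses the well-known ports): list established TCP sessions to the SMB,
# NFS, and iSCSI ports. Each one is a point-to-point unicast connection
# between two hosts.
import psutil

STORAGE_PORTS = {445: "SMB", 2049: "NFS", 3260: "iSCSI"}

for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        proto = STORAGE_PORTS.get(conn.raddr.port)
        if proto:
            print(f"{proto}: {conn.laddr.ip}:{conn.laddr.port} -> "
                  f"{conn.raddr.ip}:{conn.raddr.port}")
```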

all 8 comments
[–] AlternateRoute@lemmy.ca 5 points 8 months ago (1 children)
A few cases where a separate storage VLAN or segment is worth it:
  • If you need dedicated interfaces / subnets for improved bandwidth
  • If you want a network segment with jumbo frames (you often don't want jumbo frames on the general network interfaces; see the sketch after this list)
  • If the network protocol creates substantial network noise: iSCSI and other block-level protocols tend to be very noisy compared to file-level ones like SMB or NFS
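
On the jumbo frames point, a minimal sketch (Linux only; the interface names and expected MTUs below are placeholders, not your actual NICs) to check that the storage-facing interface really has jumbo frames enabled while the general-purpose one stays at 1500:

```python
# Minimal sketch (Linux only; interface names/MTUs are placeholder
# assumptions): read each NIC's MTU from sysfs and compare it against
# what the storage design expects.
from pathlib import Path

EXPECTED_MTU = {
    "enp5s0": 9000,  # hypothetical storage-VLAN interface (jumbo frames)
    "enp4s0": 1500,  # hypothetical general-purpose interface
}

for iface, want in EXPECTED_MTU.items():
    mtu_file = Path(f"/sys/class/net/{iface}/mtu")
    if not mtu_file.exists():
        print(f"{iface}: interface not found")
        continue
    mtu = int(mtu_file.read_text())
    verdict = "OK" if mtu == want else f"expected {want}"
    print(f"{iface}: mtu={mtu} ({verdict})")
```
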
[–] corroded@lemmy.world 1 points 8 months ago (1 children)

I do use iSCSI between my Proxmox server and my TrueNAS server. What do you mean by "noise," exactly? My understanding is that because iSCSI isn't broadcasting, my switch is only going to transfer packets between the two servers, which would prevent any "noise" from making it out to other devices on other ports.

[–] AlternateRoute@lemmy.ca 2 points 8 months ago

iSCSI is block-level storage, whereas NFS/SMB are file-level. When you browse a folder over SMB/NFS, the client asks the remote service for the file metadata listing and then caches the whole thing until it thinks it needs a refresh. With iSCSI, the client has to read a set of blocks to pull the metadata off the remote filesystem itself. iSCSI can be considerably more chatty between the two hosts because it operates at a lower level.
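
One rough way to see the chattiness yourself: sample the NIC's packet counters around the same workload on each kind of mount. A minimal sketch assuming psutil; the interface name and mountpoint are placeholders:

```python
# Minimal sketch (assumes psutil; interface and mountpoint are placeholder
# assumptions): count packets on the storage-facing NIC around a simple
# workload, e.g. listing a directory on the mount being tested.
import os
import psutil

IFACE = "enp5s0"          # hypothetical storage-facing interface
MOUNTPOINT = "/mnt/tank"  # hypothetical mount to exercise

before = psutil.net_io_counters(pernic=True)[IFACE]
os.listdir(MOUNTPOINT)    # the workload being measured
after = psutil.net_io_counters(pernic=True)[IFACE]

print(f"packets sent: {after.packets_sent - before.packets_sent}")
print(f"packets recv: {after.packets_recv - before.packets_recv}")
```

Run it once against an SMB/NFS mount and once against a filesystem sitting on an iSCSI LUN and compare the counts.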

[–] transientpunk@sh.itjust.works 2 points 8 months ago (1 children)

Check out Lawrence Systems on YouTube. He just released a video that talks about this very subject.

[–] corroded@lemmy.world 1 points 8 months ago

Thanks for the suggestion. I've watched a few of his videos in the past, but I don't think I ever subscribed.

[–] TCB13@lemmy.world 1 points 8 months ago

> Having a VLAN for storage seems like it’s the “best practice,” but since both servers still need to be accessible outside the VLAN, the only benefit I can see is limiting broadcast traffic, and as far as I know (correct me if I’m wrong), SMB/NFS/iSCSI are all unicast.

Having a VLAN for storage in your case is totally pointless. The traffic is still going to the same switch, with the added overhead of having to deal with VLAN tags and whatnot.
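
To put a number on the tag overhead: it's the 4-byte 802.1Q header on every frame. A minimal sketch with scapy (assuming it's installed; the address and VLAN ID are arbitrary):

```python
# Minimal sketch (assumes scapy is installed; address and VLAN ID are
# arbitrary): build the same packet untagged and with an 802.1Q tag,
# then compare the frame sizes; the difference is the 4-byte tag.
from scapy.all import Dot1Q, Ether, IP, Raw

payload = IP(dst="10.0.20.10") / Raw(b"x" * 1000)
untagged = Ether() / payload
tagged = Ether() / Dot1Q(vlan=20) / payload

print(f"untagged frame: {len(untagged)} bytes")
print(f"tagged frame:   {len(tagged)} bytes")
```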

[–] MystikIncarnate@lemmy.ca 1 points 8 months ago

I do it because I don't want to run short of IP space.

I've worked on networks that are reaching the limit of how many systems they can hold, and I don't want that to happen to me, so I intentionally oversize basically every subnet and usually over-segregate the traffic. I use a lot of subnets.
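
A minimal sketch of what that oversizing looks like, using the stdlib ipaddress module (the address block and role names here are made up, not anyone's actual plan):

```python
# Minimal sketch (the address block and role names are made-up examples):
# carve an RFC 1918 /16 into deliberately oversized /22 subnets, one per
# traffic role, so no segment ever runs short of addresses.
import ipaddress

block = ipaddress.ip_network("10.10.0.0/16")
roles = ["main", "storage", "iot", "guest", "management"]

for role, subnet in zip(roles, block.subnets(new_prefix=22)):
    usable = subnet.num_addresses - 2  # minus network and broadcast
    print(f"{role:<12} {subnet}  ({usable} usable hosts)")
```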

They're not all VLANs; some are on independent switches. What I did for storage in one case was give a single NIC to the management network for administration, with the rest connected to a storage subnet over fully dedicated links. I was using the same switch, so they were VLANned, but it easily could have been done on another switch. The connections from the storage to the compute systems were all done with dedicated links on dedicated NICs, so 100% of the bandwidth was available for the storage connections.

I'm very sensitive to bottlenecks in my layer 2 networks and I don't want to share bandwidth between a production interface and a storage interface. NICs are cheap. My patience is not.