- If you need dedicated interfaces / subnets for improved bandwidth
- If you want a network segment with jumbo frames (you often don't want jumbo frames on your general-purpose interfaces; see the MTU check sketched after this list)
- If the network protocol creates substantial network noise: iSCSI and other block-level protocols tend to be much chattier than file-level ones like SMB or NFS
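A minimal sketch of that jumbo-frame check, assuming a Linux host where the MTU is exposed via sysfs; the interface name `storage0` is a placeholder, not something from the thread:

```python
# Minimal sketch: confirm jumbo frames are enabled on a dedicated storage NIC.
# Assumes Linux (MTU exposed via sysfs) and a hypothetical interface name "storage0".
from pathlib import Path

STORAGE_IFACE = "storage0"   # placeholder: use your actual storage interface name
JUMBO_MTU = 9000             # common jumbo-frame MTU

def read_mtu(iface: str) -> int:
    """Read the configured MTU for a network interface from sysfs."""
    return int(Path(f"/sys/class/net/{iface}/mtu").read_text().strip())

if __name__ == "__main__":
    mtu = read_mtu(STORAGE_IFACE)
    if mtu >= JUMBO_MTU:
        print(f"{STORAGE_IFACE}: MTU {mtu}, jumbo frames enabled")
    else:
        print(f"{STORAGE_IFACE}: MTU {mtu}, still at standard framing")
```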
I do use iSCSI between my Proxmox server and my TrueNAS server. What do you mean by "noise," exactly? My understanding is that because iSCSI isn't broadcasting, my switch is only going to transfer packets between the two servers, which would prevent any "noise" from making it out to other devices on other ports.
iSCSI is block-level storage, where NFS/SMB are file-level. When you browse a folder over SMB/NFS, the client asks the remote service for the file metadata list and then caches the whole thing until it thinks it needs a refresh. With iSCSI, the client has to read a set of blocks to pull the metadata off the remote file system itself. Because it operates at a lower level, iSCSI can be considerably more chatty between the two hosts.
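If you want to put a rough number on that chattiness yourself, something like the sketch below counts packets between the two hosts for each protocol over the same window. This is an assumption-heavy illustration: it uses Scapy (not mentioned in the thread), needs root, and the host addresses are placeholders for the Proxmox and TrueNAS boxes.

```python
# Rough sketch: compare how many packets iSCSI vs SMB exchange over the same window.
# Assumptions: Scapy is installed, the script runs as root, and the IPs below are
# placeholders for the Proxmox and TrueNAS hosts discussed above.
from scapy.all import sniff

PROXMOX = "192.168.1.10"   # placeholder address
TRUENAS = "192.168.1.20"   # placeholder address
WINDOW = 60                # seconds to capture per protocol

def count_packets(port: int) -> int:
    """Count packets between the two hosts on a given TCP port for WINDOW seconds."""
    bpf = f"host {PROXMOX} and host {TRUENAS} and tcp port {port}"
    return len(sniff(filter=bpf, timeout=WINDOW, store=True))

if __name__ == "__main__":
    iscsi = count_packets(3260)   # iSCSI target port
    smb = count_packets(445)      # SMB port
    print(f"iSCSI packets in {WINDOW}s: {iscsi}")
    print(f"SMB packets in {WINDOW}s:   {smb}")
```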
Check out Lawrence Systems on YouTube. He just released a video that talks about this very subject.
Thanks for the suggestion. I've watched a few of his videos in the past, but I don't think I ever subscribed.
Having a VLAN for storage seems like it’s the “best practice,” but since both servers still need to be accessible outside the VLAN, the only benefit I can see is limiting broadcast traffic, and as far as I know (correct me if I’m wrong), SMB/NFS/iSCSI are all unicast.
Having a VLAN for storage in your case is totally pointless. The traffic still goes through the same switch, with the added overhead of dealing with VLAN tags and whatnot.
I do it because I don't want to run short of IP space.
I've worked on networks that are reaching the limit of how many systems they can hold, and I don't want that to happen to me, so I intentionally oversize basically every subnet and usually over-segregate the traffic. I use a lot of subnets.
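A quick way to sanity-check how much headroom a given prefix length buys is Python's standard `ipaddress` module; the example prefixes below are purely illustrative, not the subnets described here:

```python
# Quick illustration: usable host counts for a few prefix lengths, to show how much
# headroom oversizing a subnet buys. The example networks are illustrative only.
import ipaddress

for prefix in ("10.0.0.0/24", "10.0.0.0/22", "10.0.0.0/20"):
    net = ipaddress.ip_network(prefix)
    # Subtract the network and broadcast addresses to get usable hosts.
    usable = net.num_addresses - 2
    print(f"{prefix}: {usable} usable hosts")
```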
They're not all VLANs; some are on independent switches. What I did for storage in one case was give a single NIC to the management network for administration, with the rest connected to a storage subnet over fully dedicated links. I was using the same switch, so they were VLANned, but it easily could have been done on a separate switch. The connections from the storage to the compute systems were all made over dedicated links on dedicated NICs, so 100% of the bandwidth was available for the storage connections.
I'm very sensitive to bottlenecks in my layer 2 networks and I don't want to share bandwidth between a production interface and a storage interface. NICs are cheap. My patience is not.