this post was submitted on 26 Mar 2024

homelab


I found a listing on eBay for a "Mellanox CX354A ConnectX-3 FDR Infiniband 40GbE QSFP+" card for quite cheap. Going by the listing title it supports both InfiniBand and 40GbE; is that right? I'd like to try out InfiniBand, but I'd be buying it for the 40GbE. Are there good drivers for this card on modern Linux distros? Also, do I just buy some QSFP+ cables to direct-attach them?

[–] litchralee@sh.itjust.works 7 points 8 months ago* (last edited 8 months ago) (1 children)

I only have experience with Mellanox CX-5 100Gb cards at work, but my understanding is that mainline Linux has good support for the entire CX lineup. That said, newer kernel versions -- starting at maybe 5.4? -- carry all sorts of bug fixes, so hopefully your preferred distro builds its kernel with those driver modules included, or at least loadable.
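
If you want to check what your distro ships before the card arrives: the ConnectX-3 family is driven by the mlx4 modules (mlx4_core, plus mlx4_en for Ethernet and mlx4_ib for IB/RDMA). A quick, untested sketch to confirm the running kernel has them:

```python
#!/usr/bin/env python3
# Untested sketch: check that the mlx4 driver modules a ConnectX-3 needs
# (mlx4_core = PCI driver, mlx4_en = Ethernet, mlx4_ib = InfiniBand/RDMA)
# are available for the currently running kernel.
import subprocess

for module in ("mlx4_core", "mlx4_en", "mlx4_ib"):
    # modinfo exits non-zero when the module isn't shipped for this kernel
    result = subprocess.run(
        ["modinfo", "--field", "filename", module],
        capture_output=True, text=True,
    )
    path = result.stdout.strip() if result.returncode == 0 else "NOT FOUND"
    print(f"{module}: {path}")
```

Most mainstream distros build these as loadable modules, so I'd expect all three to show up.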

As for Infiniband (IB), I think you'd need transceivers with specific support for IB. That Ethernet and IB share the (Q)SFP(+) modular connector does not guarantee compatibility, although a quick web search shows a number of transceivers and DACs that explicitly list support for both.

That said, are you interested in IB fabrics themselves, or in what they can enable? One use case native to IB is RDMA, but it has since been brought to -- so-called "Converged" -- Ethernet in the form of RoCE, in support of high-performance storage technologies like SPDK that enable things like NVMe storage over the network.

If all you're looking for are the semantics of IB, and you're only ever going to have two nodes that are direct-attached, then the Linux fabric abstractions can be used the same way you'd use IB. The debate of Converged Ethernet (CE) vs IB is more about whether/how CE switches can uphold the same guarantees that an IB fabric would. Direct attachment avoids these concerns outright.
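
As a concrete sanity check once the two cards are cabled, the stock rdma-core/librdmacm tools will exercise RDMA over whatever link you end up with. A rough, untested sketch of driving rping from a script -- the address is made up, use whatever IPs you give the two ports:

```python
#!/usr/bin/env python3
# Untested sketch: RDMA ping-pong between two direct-attached hosts using
# rping (from librdmacm-utils). Run with --server on one box, plain on the other.
# The address below is hypothetical.
import subprocess
import sys

SERVER_ADDR = "192.168.40.1"  # hypothetical address of the "server" side

if "--server" in sys.argv:
    # Listen for RDMA connections and echo the ping-pong payloads back
    subprocess.run(["rping", "-s", "-a", SERVER_ADDR, "-v", "-C", "10"], check=True)
else:
    # Connect to the server and run 10 verbose ping-pong iterations
    subprocess.run(["rping", "-c", "-a", SERVER_ADDR, "-v", "-C", "10"], check=True)
```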

So I think perhaps you can get normal 40 Gb Ethernet DACs to go with these, and still have the ability to play with fabric abstractions atop Ethernet (or IP if you use RoCE v2, but that's not available on the CX-3).
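
If I remember right, the mlx4 driver also lets you pick the protocol per port at runtime through sysfs, so you can flip a port between IB and Ethernet without reflashing anything. Untested sketch; the PCI address is made up (find yours with lspci), and it needs root:

```python
#!/usr/bin/env python3
# Untested sketch: switch a ConnectX-3 port to Ethernet (or IB) mode via the
# per-port sysfs attribute exposed by the mlx4 driver. The link will bounce
# while the port re-initialises.
from pathlib import Path

PCI_ADDR = "0000:01:00.0"  # hypothetical -- substitute your card's PCI address
PORT = 1                   # 1 or 2, depending on which port of the card
MODE = "eth"               # "ib", "eth", or "auto"

port_attr = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/mlx4_port{PORT}")
print("current:", port_attr.read_text().strip())
port_attr.write_text(MODE)
print("now:", port_attr.read_text().strip())
```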

Just bear in mind that IB and fabrics in general get complicated very quickly, because they're meant to support cluster or converged computing, which tries to make compute and storage resources uniformly accessible. So while you can use fabrics to transport a whole NVMe namespace from a NAS to a client machine at near line rate, or set up some incredible RPC bindings between two machines, there can be a steep learning curve to get there.
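
To make the NVMe example concrete: the client side of NVMe over RDMA (NVMe-oF) is mostly just nvme-cli once a target is exported somewhere (the target needs its own setup, e.g. with nvmetcli). Untested sketch with a made-up target address and NQN:

```python
#!/usr/bin/env python3
# Untested sketch: attach an NVMe-oF subsystem exported over RDMA so its
# namespaces show up as local /dev/nvmeXnY block devices.
# The target address, port and NQN below are hypothetical.
import subprocess

TARGET_ADDR = "192.168.40.1"                      # hypothetical NAS address
TARGET_NQN = "nqn.2024-03.example.nas:nvme-pool"  # hypothetical subsystem NQN

# Load the RDMA transport for the NVMe initiator
subprocess.run(["modprobe", "nvme-rdma"], check=True)

# Ask the target what subsystems it advertises (4420 is the conventional port)
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", TARGET_ADDR, "-s", "4420"], check=True)

# Connect to one subsystem; `nvme list` afterwards should show the new device
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                "-a", TARGET_ADDR, "-s", "4420"], check=True)
```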

[–] thomasdouwes@sopuli.xyz 4 points 8 months ago

Thanks for the detailed reply. I found a QSFP+ DAC that says it supports IB and Ethernet.
I don't have enough computers to set up a fabric; only the two I would be direct-attaching have PCIe slots.

I've never used InfiniBand before, so my reason for wanting to try it is just to learn what it is and how it works. That said, some of those use cases look very interesting, especially transporting NVMe namespaces; I didn't know that was possible.