Planning new hardware for a 10Gbit/s network + making use of SR-IOV

May 16, 2013
hi,

At the moment, all of our IBM x3550 M3 / x3750 M4 / Sun ... Proxmox (3.2) nodes run with 2 x 1Gbit/s for iSCSI and cluster communication (multipath) and 2 x 1Gbit/s for external communication (LACP bonding).

We are planning a new 10Gbit/s network with the following hardware:

  • For every node: Intel X520-DA2 dual-port SFP+
  • Cisco WS-C4500X-F-32SFP+
  • WS-X45-SUP7L-E backplane
  • SFP-10G-SR SFP+ modules

So, every server is connected with 2 x 10Gbit/s. For better load balancing I would like to make use of the SR-IOV feature of the network cards. I plan to create a bond interface and a few "functions" (Intel's term for the virtual network interfaces) for some VLANs, such as cluster communication and external VM traffic.
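For reference, creating virtual functions (VFs) on the ixgbe-driven X520 might look roughly like this. This is only a sketch: the interface names (eth2/eth3), VF counts and VLAN IDs are assumptions, and older ixgbe versions use the `max_vfs` module parameter instead of the sysfs interface shown here.

```shell
# Hypothetical sketch, names are placeholders.
# Create 4 VFs per 10G port (requires a kernel with the sriov_numvfs sysfs knob;
# on older kernels, load ixgbe with e.g. "modprobe ixgbe max_vfs=4" instead):
echo 4 > /sys/class/net/eth2/device/sriov_numvfs
echo 4 > /sys/class/net/eth3/device/sriov_numvfs

# Pin a VLAN to a VF so the NIC tags/untags traffic in hardware,
# e.g. VLAN 100 for cluster communication on VF 0 of each port:
ip link set eth2 vf 0 vlan 100
ip link set eth3 vf 0 vlan 100
```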

So, the only part I don't know: are the X520-DA2 cards a good choice?

Any suggestions?
 
Using VFs for PCIe passthrough, you will lose live migration...
I don't see any reason to use VFs for host communication. You can do it all in software.
But I'd like to know how bonding would work on top of SR-IOV interfaces. ;)
And yes, Intel's X520 is the best 10G SFP+ adapter I know.
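The software-only setup the reply suggests could be done with a plain LACP bond plus VLAN sub-interfaces in /etc/network/interfaces. A sketch under assumed names (interface names, VLAN IDs and addresses are placeholders, not taken from the thread):

```shell
# Hypothetical /etc/network/interfaces fragment (Debian/Proxmox style)
auto bond0
iface bond0 inet manual
    slaves eth2 eth3
    bond_mode 802.3ad
    bond_miimon 100
    bond_xmit_hash_policy layer3+4

# Cluster communication on VLAN 100, directly on the bond
auto bond0.100
iface bond0.100 inet static
    address 10.10.100.11
    netmask 255.255.255.0

# External VM traffic on VLAN 200, via a software bridge
auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0.200
    bridge_stp off
    bridge_fd 0
```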
 
Also possibly interesting (from the ixgbe README):
Software bridging does not work with SR-IOV Virtual Functions
-------------------------------------------------------------
SR-IOV Virtual Functions are unable to send or receive traffic between VMs
using emulated connections on a Linux Software bridge and connections that use
SR-IOV VFs.