We have a 3-node HA cluster running Proxmox/Ceph.
I know the current recommendation is to have separate physical network ports for VM traffic, Ceph and Corosync traffic.
We currently do this with a 4-port SFP+ NIC (Intel X710-DA4).
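For context, the per-node layout today looks roughly like this (interface names and addresses are just placeholders, not our exact config):

    # /etc/network/interfaces (simplified)
    auto enp65s0f0
    iface enp65s0f0 inet manual        # X710 port 1 - VM traffic (bridged)

    auto vmbr0
    iface vmbr0 inet static
        bridge-ports enp65s0f0
        bridge-stp off
        bridge-fd 0
        address 192.0.2.10/24

    auto enp65s0f1
    iface enp65s0f1 inet static        # X710 port 2 - Ceph
        address 10.10.10.10/24
        mtu 9000

    auto enp65s0f2
    iface enp65s0f2 inet static        # X710 port 3 - Corosync
        address 10.10.20.10/24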
However, we're looking at moving to 100 Gbps, and the NIC in the new server only has a single 100 Gbps port.
So if each node has a single 100 Gbps connection - what is the best way to split it up and provide bandwidth guarantees, even under contention?
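Just to illustrate the kind of guarantee I mean: something like an egress HTB shaper on the 100 Gbps port, where Ceph and Corosync classes keep a guaranteed rate but can borrow idle bandwidth. This is purely a sketch, not something I've tested, and enp1s0f0 is a made-up interface name:

    # Guaranteed minimums with borrowing of spare bandwidth (illustrative only)
    tc qdisc add dev enp1s0f0 root handle 1: htb default 30
    tc class add dev enp1s0f0 parent 1:  classid 1:1  htb rate 100gbit
    tc class add dev enp1s0f0 parent 1:1 classid 1:10 htb rate 40gbit ceil 100gbit  # Ceph
    tc class add dev enp1s0f0 parent 1:1 classid 1:20 htb rate 1gbit  ceil 10gbit   # Corosync
    tc class add dev enp1s0f0 parent 1:1 classid 1:30 htb rate 59gbit ceil 100gbit  # VM traffic
    # (plus tc filters to classify traffic into the right class, omitted here)

I don't know how well software shaping like this holds up at 100 Gbps, which is partly why SR-IOV sounded attractive.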
I asked on r/networking, and they suggested looking into SR-IOV to split up the NIC into virtual cards.
I saw on the wiki there's a mention of it here:
https://pve.proxmox.com/wiki/PCI(e)_Passthrough#_sr_iov
However, that seems more focused on passing through virtual functions into individual VMs.
Is there a guide, or advice, or has anybody had experience with using separate VFs for VM traffic, Ceph, and Corosync?
Is this the best way to do it, or is there another way?
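For reference, what I'm picturing on the host side is roughly this (assuming the 100 Gbps PF shows up as enp1s0f0, which is just an example name, and that the driver supports per-VF rate limiting):

    # Create 3 VFs on the physical function
    echo 3 > /sys/class/net/enp1s0f0/device/sriov_numvfs

    # Per-VF rate limits/guarantees (values in Mbit/s; min_tx_rate support is driver-dependent)
    ip link set enp1s0f0 vf 0 max_tx_rate 60000                      # VM traffic VF (bridged)
    ip link set enp1s0f0 vf 1 max_tx_rate 40000 min_tx_rate 20000    # Ceph VF
    ip link set enp1s0f0 vf 2 max_tx_rate 2000  min_tx_rate 1000     # Corosync VF

The idea would be to keep the Ceph and Corosync VFs on the host and only bridge the VM-traffic VF, rather than passing VFs into guests as the wiki describes.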
Also - why are there two pages on PCI Passthrough - here and here?