I have two 25G fibre interfaces on each server, and I want to bond them in active-backup mode. They are connected to different physical switches. Separate 10G interfaces carry user access to the VMs and to Proxmox itself, so that traffic is not part of what the 25G interfaces will handle.
I plan to have my Ceph replication using Switch #1 and Ceph public traffic using Switch #2.
Bonding the two interfaces should then allow the node to fail over automatically to the other switch if a switch fails. During normal operation, replication traffic remains physically separated from client data traffic.
The Proxmox web interface does not allow me to create a bond with the fibre interfaces. It seems that Proxmox checks whether the device is an Ethernet adapter before allowing it.
However, from the shell I am able to create such a bond, with a different bond-primary adapter depending on the server, and it appears to work as intended.
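For reference, the shell-created bond described above can be sketched as an /etc/network/interfaces stanza like the one below (Proxmox uses ifupdown2). The interface names ens2f0/ens2f1 and the address are assumptions, not taken from the original post; on the second node, bond-primary would point at the other NIC so that its steady-state traffic lands on the other switch:

```
# Sketch only — NIC names and addressing are assumptions, adjust per server
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves ens2f0 ens2f1
    bond-mode active-backup
    bond-primary ens2f0      # on the other node use ens2f1, so normal-state
                             # traffic uses the intended switch
    bond-miimon 100          # link monitoring interval in ms
```

With bond-primary set per node this way, each node prefers its designated switch while still failing over to the other if that switch goes down.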
I'm not sure whether this is a bug or a missing feature, or whether there is some reason it should not be done. Please advise.