Ceph public bonded trunk interface

lbatty

I've been working on setting up a fully redundant Proxmox cluster with 3 nodes; each node has 4 x 25GbE ports and 4 x 1GbE ports. The Ceph cluster network is on its own 25GbE port in a Linux bond with a second 25GbE port, and the two links go to separate switches. I then have a trunk which carries Ceph public plus my other VLANs for VMs etc.; this is also a Linux bond across the other 2 x 25GbE ports, again to separate switches.

The Ceph private network fails over fine, but on Ceph public, whenever I try to fail over, the Ceph manager and monitor on that node go offline and all its OSDs become unavailable, even though the Ceph public IP address is still pingable. I have tried the interface in many bond modes, but none of them behave the way I would like, where if one link fails the other takes over. Has anyone got this type of setup to work?
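For illustration, a minimal sketch of this kind of layout in /etc/network/interfaces (ifupdown2 syntax), assuming active-backup bonds since they need no switch-side configuration; interface names, the VLAN tag, and addresses below are placeholders, not the actual config:

Code:
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves ens1f0 ens1f1
    bond-miimon 100
    bond-mode active-backup
#Ceph cluster (private) network, one link to each switch

auto bond1
iface bond1 inet manual
    bond-slaves ens1f2 ens1f3
    bond-miimon 100
    bond-mode active-backup
#Trunk bond: Ceph public + VM VLANs, also split across switches

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
#VLAN-aware bridge on the trunk bond

auto vmbr0.20
iface vmbr0.20 inet static
    address 10.10.20.11/24
#Ceph public network on VLAN 20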
I have a Ceph cluster with 2 x 10G (public) + 2 x 10G (Ceph cluster) via Open vSwitch, connected to stacked switches, no problems.
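Roughly like this, for comparison (a sketch only, assuming an LACP bond to the switch stack and the openvswitch-switch package installed; interface names, VLAN tag, and address are placeholders):

Code:
auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds enp1s0f0 enp1s0f1
    ovs_options bond_mode=balance-tcp lacp=active
#OVS LACP bond carrying Ceph public + VM VLANs

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan20

auto vlan20
iface vlan20 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=20
    address 10.10.20.12/24
#Ceph public as an OVS internal port on VLAN 20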
Turns out it was an MTU problem: I had the MTU set to 9000 on the nodes and 9216 on the switches. Once both were reset to 1500, it worked fine.
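That failure pattern fits an MTU mismatch: small ICMP pings still get through, but Ceph's larger messages are silently dropped, so the monitor and OSDs look dead while the IP stays pingable. A quick way to verify end-to-end jumbo support between two nodes (8972 = 9000 minus 28 bytes of IP + ICMP headers; the address is a placeholder):

Code:
# Send a full-size frame with the don't-fragment bit set;
# this fails if any hop between the nodes has a smaller MTU
ping -M do -s 8972 10.10.20.12

# Confirm the MTU actually configured on the bond
ip link show bond1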