ceph public bonded trunk interface

lbatty

New Member
Apr 29, 2024
I've been setting up a fully redundant Proxmox cluster with 3 nodes; each node has 4 x 25 GbE ports and 4 x 1 GbE ports. The Ceph cluster network is on its own 25 GbE port in a Linux bond with a second 25 GbE port, and the two links go to separate switches. I then have a trunk which carries Ceph public plus my other VLANs for VMs etc.; this is also a Linux bond across the other 2 x 25 GbE ports, again to separate switches.

The Ceph cluster (private) network fails over fine, but whenever I try to fail over the Ceph public bond, the Ceph manager and monitor on that node go offline and all of its OSDs become unavailable, even though the Ceph public IP address is still pingable. I have tried many bond modes on the interface, but none of them behave the way I'd like, where if one link fails the other takes over. Has anyone got this type of setup working?
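For reference, this is roughly what the public-side bond and bridge look like in /etc/network/interfaces (NIC names, VLAN ID, and the address below are just examples, not my exact config):

    auto bond1
    iface bond1 inet manual
        bond-slaves enp1s0f2 enp1s0f3
        bond-miimon 100
        bond-mode active-backup
        # active-backup needs no switch-side configuration,
        # which is why it's the usual choice when the two links
        # go to independent (non-MLAG) switches

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # Ceph public address on its VLAN (VLAN 40 and the subnet are examples)
    auto vmbr1.40
    iface vmbr1.40 inet static
        address 192.168.40.11/24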
 
I have a Ceph cluster with 2 x 10G (public) + 2 x 10G (Ceph cluster network) via Open vSwitch, connected to stacked switches; no problems.
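Roughly like this (port names are examples; balance-slb needs no switch-side config, or you can use LACP if the stack presents a single logical switch):

    auto bond0
    iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds eno1 eno2
        ovs_options bond_mode=balance-slb
        # with a proper LACP LAG on the stack, instead use:
        # ovs_options bond_mode=balance-tcp lacp=active

    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0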
 
Turns out it was an MTU problem: I had the MTU set to 9000 on the hosts and 9216 on the switches. Once I reset both to 1500, everything worked fine.
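For anyone hitting the same symptoms: large frames being silently dropped while ordinary pings still succeed is the classic sign of an MTU mismatch. A quick end-to-end test (the bond name and peer address are placeholders for your own):

    # check the MTU currently set on the bond
    ip link show bond1

    # send a max-size jumbo frame with Don't Fragment set:
    # 8972 = 9000 minus 28 bytes of IP + ICMP headers
    ping -M do -s 8972 <peer-ceph-public-ip>

    # if that fails but a standard-size probe works,
    # something in the path isn't passing jumbo frames
    ping -M do -s 1472 <peer-ceph-public-ip>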
 