Hi all,
I'm pretty new to Proxmox and Ceph. We've been running a test cluster on three nodes, with the Ceph network also on Gigabit, and so far we're satisfied with the performance and resiliency. So we're planning to deploy a production cluster soon.
The new cluster is starting with 6 nodes, each one hosting 4x 1 TB SSDs for OSDs. I plan to scale it to possibly 30+ nodes.
I would like to set up each node's network cards like this (a rough ceph.conf sketch of what I mean follows the list):
- VM public bridge: 2x 1 Gbps NICs, connected to distinct switches
- Management (for node reachability/management): 1x 1 Gbps NIC, connected to a dedicated switch
- OOB (iLO) network: dedicated port on the server, connected to a dedicated switch
- Ceph public network: 2x 10 Gbps NICs, each connected to a different switch and with a distinct IP address (e.g. 10.0.0.1/24 and 10.0.1.1/24)
- Ceph cluster network: 1x 1 Gbps NIC, connected to a dedicated switch
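
Roughly what I have in mind for ceph.conf (the 10.0.2.0/24 cluster-network subnet is just a placeholder, and I'm not even sure whether listing two public subnets is allowed at all, hence my first question below):

    [global]
        # Ceph public network on the two 10 Gbps links, one subnet per switch
        public_network = 10.0.0.0/24, 10.0.1.0/24
        # Ceph cluster network (OSD replication + heartbeat) on the 1 Gbps link
        cluster_network = 10.0.2.0/24
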
- Can I actually set up the Ceph public network on two distinct network trunks? Or am I forced to use a meshed setup like this?
- Is 1 Gbps enough for the Ceph cluster network (OSD replication + heartbeat, according to the Proxmox VE Administration Guide)?
- Should I create a separate cluster network for corosync? (A rough sketch of what I mean follows below.)
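
For the corosync question, this is roughly what I'm picturing (node names, addresses, and subnets are placeholders; sketched from the corosync 3 / kronosnet style config that Proxmox generates, with link 0 on a dedicated corosync network and link 1 as a fallback over the management network):

    totem {
        version: 2
        cluster_name: pve-prod
        # link 0: dedicated corosync switch
        interface {
            linknumber: 0
        }
        # link 1: fallback over the management network
        interface {
            linknumber: 1
        }
    }

    nodelist {
        node {
            name: pve01
            nodeid: 1
            quorum_votes: 1
            # ring0 = dedicated corosync link, ring1 = management network
            ring0_addr: 10.10.0.1
            ring1_addr: 10.20.0.1
        }
        # ... remaining nodes follow the same pattern
    }

Does that look like a sane approach, or is it overkill for a cluster this size?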