Cluster Networking Configuration - Subnets

radawson

New Member
Jan 12, 2024
I am putting together a 3-node cluster for my own education (before doing this for real at work!) and I have the following plan for the cluster's subnets. This setup is based on all the tutorials I've been able to find online, but I wanted to make sure I'm doing this right.

I have enough switches to handle these as LACP-enabled Linux bonds. vmbr0 is my bridge for the client VMs.

(Attachment: Cluster Network.png showing the planned subnet layout)
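For reference, here is roughly what I have in mind for one of the bonds plus the VM bridge in /etc/network/interfaces. The NIC names, addresses, and subnet are just placeholders, not my final plan:

Code:
# /etc/network/interfaces (sketch) - 10G pair as an 802.3ad (LACP) bond, bridged for VM traffic
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0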
 
You don't need 10G for corosync. I would use the Gbit NICs for that and use the 10Gbit NICs for backups/migration instead, so big backups/restores/migrations won't slow down Ceph or VM traffic. Or you could use that 10Gbit NIC to split the Ceph public and Ceph (internal) cluster traffic onto separate networks.
 
How would I split out migrations as you describe? I thought those were done across the corosync link...?
 
The manual gives a good recommendation:
https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#_recommendations_for_a_healthy_ceph_cluster said:
If unsure, we recommend using three (physical) separate networks for high-performance setups:
* one very high bandwidth (25+ Gbps) network for Ceph (internal) cluster traffic.
* one high bandwidth (10+ Gbps) network for Ceph (public) traffic between the ceph server and ceph client storage traffic. Depending on your needs this can also be used to host the virtual guest traffic and the VM live-migration traffic.
* one medium bandwidth (1 Gbps) exclusive for the latency sensitive corosync cluster communication.
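On the Ceph side that split simply maps to the public_network and cluster_network settings. A rough sketch, with placeholder subnets:

Code:
# /etc/pve/ceph.conf (excerpt) - separate Ceph public and (internal) cluster networks
[global]
        public_network  = 10.10.10.0/24
        cluster_network = 10.10.20.0/24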
https://pve.proxmox.com/wiki/Manual:_datacenter.cfg said:
migration: [type=]<secure|insecure> [,network=<CIDR>]
For cluster wide migration settings.

network=<CIDR>
CIDR of the (sub) network that is used for migration.
type=<insecure | secure> (default = secure)
Migration traffic is encrypted using an SSH tunnel by default. On secure, completely private networks this can be disabled to increase performance.
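So to pin migration traffic to its own subnet, you set something like this in /etc/pve/datacenter.cfg (the CIDR is just an example; only switch to insecure on a completely private network):

Code:
# /etc/pve/datacenter.cfg - send migration traffic over a dedicated subnet
migration: secure,network=10.10.20.0/24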
For backups, you put your PBS host in a dedicated subnet with your PVE hosts.
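The PBS side is then just a storage entry pointing at the host's address on that backup subnet, roughly like this (datastore name, IP, and fingerprint are placeholders):

Code:
# /etc/pve/storage.cfg - PBS reached over the dedicated backup subnet
pbs: pbs-backup
        datastore main
        server 10.10.30.10
        content backup
        username backup@pbs
        fingerprint <PBS certificate fingerprint>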
 
OK, thanks for your help and the references. Would it be worth my time to figure out InfiniBand for Ceph? I have access to a Mellanox SX6036 switch that should be capable of at least 56 Gb/s per port. I would need adapters, of course... I've heard InfiniBand can be painful, though.
 
