Optimal network for Proxmox and Ceph

starnetwork

Hi,
we plan to order hardware with 3 network adapters in the enclosure:
2x 10GbE switches
1x 100Gb switch
this is the maximum we have, so we can't add an additional switch. Now:
we are using the 2x 10GbE for the redundant Proxmox network and Internet connection via link aggregation,
and we plan to use the 100Gb switch for the Ceph network.
the question is: how can we add a "backup" network for Ceph, based on the existing 2x 10GbE that are already in use as an IEEE 802.3ad dynamic link aggregation bond?
is it possible to use a VLAN on this 2x 10GbE bond as a second adapter and bond it with the 100Gb switch?
or is there another solution for the best performance and availability with this hardware?
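To illustrate the VLAN idea, something like this is what we have in mind in /etc/network/interfaces (NIC names, VLAN ID and addresses are just examples):

# existing LACP bond over the two 10GbE ports
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

# possible VLAN on top of the bond as a Ceph fallback network
auto bond0.60
iface bond0.60 inet static
    address 10.10.60.11/24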

Regards,
 
2x 10GbE switches
1x 100Gb switch
And 2x 1GbE for Corosync, as it needs stable and low latency.

we are using the 2x 10GbE for the redundant Proxmox network and Internet connection via link aggregation
So your client, management, backup and migration traffic is pushed through 2x 10GbE.

the question is: how can we add a "backup" network for Ceph, based on the existing 2x 10GbE that are already in use as an IEEE 802.3ad dynamic link aggregation bond?
is it possible to use a VLAN on this 2x 10GbE bond as a second adapter and bond it with the 100Gb switch?
With the above, if you are using more than 20 Gbps on your Ceph cluster, you will just kill your front end. If not, then why use 100GbE in the first place?

Besides my comments above, if your cluster is 3-5 nodes and you don't intend it to grow further, then you may think about a full mesh and get rid of the switch. Fewer SPOFs and better latency.
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
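As a rough idea of what the routed variant from that wiki page looks like on one node of a 3-node mesh (NIC names and addresses are only examples, see the wiki for the full setup):

# node 1: direct links to node 2 and node 3, no switch in between
auto enp2s0f0
iface enp2s0f0 inet static
    address 10.15.15.50/24
    up ip route add 10.15.15.51/32 dev enp2s0f0
    down ip route del 10.15.15.51/32

auto enp2s0f1
iface enp2s0f1 inet static
    address 10.15.15.50/24
    up ip route add 10.15.15.52/32 dev enp2s0f1
    down ip route del 10.15.15.52/32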

If you need the switch because of the cluster size, then it is better to go with two 40GbE switches to get redundancy, if a second 100GbE switch is not an option. With more nodes the Ceph traffic will be distributed more, and 100GbE might not be needed after all.

For more information (if you haven't seen it already), see our Ceph benchmark paper (PDF) and the user comparisons.
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/
 
Hi Alwin,
thanks for your answer!
yes, the idea is to put Corosync, Internet, management, backup and migration traffic on the 2x 10GbE
about the 100Gb: my alternative is 2x 25GbE, is that better than 1x 100Gb?
both 10GbE switches are low latency

Regards,
 
yes, the idea is to put Corosync, Internet, management, backup and migration traffic on the 2x 10GbE
As I wrote, Corosync needs low and stable latency. Any other traffic on the same physical interface will interfere with it, and with HA it can even kill your cluster.
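To make the separation concrete, Corosync would get its own physical ports with nothing else on them, roughly like this (NIC names and subnets are only examples):

# dedicated corosync ring0 link, own 1GbE port and switch
auto eno3
iface eno3 inet static
    address 10.0.10.11/24

# dedicated corosync ring1 link, second 1GbE port and switch
auto eno4
iface eno4 inet static
    address 10.0.11.11/24

corosync.conf would then reference these addresses as ring0_addr and ring1_addr for each node.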

about the 100Gb: my alternative is 2x 25GbE, is that better than 1x 100Gb?
What?

both 10GbE switches are low latency
Whatever low means. But the switches are not the only part of the corosync network.

Aside from the above, you can try whichever setup you like, but I stand by my statement about network separation and latency.
 
Hi Alwin,
thanks for your detailed answers!
the specs we are currently debating are: https://www.supermicro.com/products/SuperBlade/enclosure
Enclosure SBE-820C with 2x 10GbE + 1x 100G EDR IB
vs
Enclosure SBE-820J with 2x 25GbE + 2x 10GbE

both will run 20 nodes with Proxmox and Ceph, and will also use external storage via iSCSI / NFS

in both cases we want to separate each of the following networks with a VLAN (a rough sketch of what we have in mind follows below the list):
Internet network
Internal network between nodes
Proxmox ring0 Cluster
Proxmox ring1 Cluster
Proxmox Ceph network
Proxmox External Storage network
Proxmox Backups (and migrations?)
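To make it concrete, per node we were thinking of something roughly like this (VLAN IDs, NIC names and addresses are only placeholders):

# LACP bond over the two 25GbE (or 10GbE) ports
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-mode 802.3ad
    bond-miimon 100

# one VLAN interface per network, e.g. Ceph on VLAN 40 and external storage on VLAN 50
auto bond0.40
iface bond0.40 inet static
    address 10.40.0.11/24

auto bond0.50
iface bond0.50 inet static
    address 10.50.0.11/24

# bridge for VM / Internet traffic on its own VLAN
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    bridge-ports bond0.10
    bridge-stp off
    bridge-fd 0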

any feedback on the enclosure and network would be very helpful, thanks!

Regards,
 
It doesn't matter which one you take; it is not feasible to use them. No network separation is possible. With 20 nodes there will be a lot of traffic, and it will interfere with one another, VLAN or not.

Ceph speaks Ethernet, so on the EDR IB link you would have to run IP over InfiniBand, and the usable bandwidth is roughly halved.
 
Thanks again Alwin,
any suggestions for this setup?
would adding an additional external switch, so that the networks are fully separated, be better?
any other suggestions for the best network settings for Proxmox?

Regards,
 
any suggestions for this setup?
Don't buy it.

would adding an additional external switch, so that the networks are fully separated, be better?
No. The blades do not have enough ports, even if you could connect them directly to external switches.

any other suggestions for the best network settings for Proxmox?
If you separate storage, client (+ migration/backup) and corosync (+ management), then you need at least 6 NIC ports on each server, and at least two redundant switches with enough ports for the 6 NICs of every node (active-backup setup). With the demand on storage bandwidth and IO/s, especially for Ceph, each switch needs to be able to handle all the traffic.
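Just as an illustration of the active-backup idea (all NIC names and addresses are placeholders), each of the three networks would get its own pair of ports, one leg per switch:

# storage pair: one port to switch A, one to switch B, no LACP across switches needed
auto bond1
iface bond1 inet static
    address 10.20.0.11/24
    bond-slaves enp65s0f0 enp66s0f0
    bond-mode active-backup
    bond-primary enp65s0f0
    bond-miimon 100

The client and corosync networks would get their own bonds of the same form on separate ports.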
 
