Network beginner questions

st6f9n

Active Member
Feb 15, 2019
Hello,

I'm a total beginner with Proxmox/Ceph. I have the following hardware available
(no further changes are possible):

6 x Supermicro server: 2x 10 Gbit NICs, 4x 1 Gbit NICs, 2x 256 GB NVMe, 2x 2 TB NVMe, 256 GB RAM
2 x Switch: Netgear ProSAFE XS728T

Here are my ideas:

The XS728T is not a stacking switch, so I can't use MLAG. I only have the choice
between a LAG on one switch or active-backup bonding across two switches,
because no other bonding mode is possible (why?). I plan to have 3 separate networks
(3 VLANs): Corosync Cluster Network, Ceph Storage Network, Proxmox Client Network

Proxmox Client Network:
* External Switch #3 (outside of our responsibility): LAG with 2x1GB NICs

Corosync Cluster Network:
* Switch #1: 1x 1GB NIC (passive RRP mode)
* Switch #2: 1x 1GB NIC (passive RRP mode)

Ceph Storage Network:

Solution No.1:
* Switch #1: LAG with 2x10GB NICs (Switch #2 is also configured for this)
* Switch #2: 1x1GB NIC (active-backup bond with the LAG on Switch #1,
as a short-term fallback)
* If Switch #1 fails, I will manually move the 2x10GB LAG over to Switch #2

Solution No.2:
* Switch #1: 1x10GB NIC (active-backup bonding)
* Switch #2: 1x10GB NIC (active-backup bonding)

I think Solution No.1 will be better for latency, Solution No.2 better for redundancy.
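
For Solution No.2, I imagine the active-backup bond would look roughly like this in /etc/network/interfaces (interface names and addresses are just placeholders, not a real config):

    auto bond0
    iface bond0 inet static
        address 10.10.4.11/24
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode active-backup
        bond-miimon 100
        bond-primary enp65s0f0
        # enp65s0f0 -> Switch #1, enp65s0f1 -> Switch #2;
        # failover is handled by the kernel bonding driver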


Any suggestions? Thanks!

Best regards

Stefan
 
It's a bit confusing:
* Proxmox public network: for the VMs (?)
* Ceph public network ("highly recommended")
* Ceph cluster network (optional): Does it make sense, given my hardware configuration, to put this on a separate network (10 Gbit switch)?
* Corosync cluster network: Could it be the same as the "Ceph cluster network"? Is it necessary given my hardware configuration?
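
As far as I understand it, the Ceph public/cluster split is just two subnets in ceph.conf, roughly like this (subnets are placeholders):

    [global]
        public_network  = 10.10.3.0/24   # Ceph clients and monitors ("Ceph public network")
        cluster_network = 10.10.4.0/24   # OSD replication traffic ("Ceph cluster network")

If both subnets end up on the same 10 Gbit links anyway, I'm not sure the split buys much, hence my question.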
 
Proxmox Client Network:
* External Switch #3 (outside of our responsibility): LAG with 2x1GB NICs

Corosync Cluster Network:
* Switch #1: 1x 1GB NIC (passive RRP mode)
* Switch #2: 1x 1GB NIC (passive RRP mode)

Good idea. Just configuring two links (rings in the config file), one on each NIC/switch combination, is fine. https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_redundancy shows a sample config file for this.
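
A minimal corosync.conf with two rings in passive RRP mode would look roughly like this (cluster name, subnets and addresses are placeholders; the linked guide has the authoritative example):

    totem {
        version: 2
        cluster_name: pve-cluster
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 10.10.1.0
        }
        interface {
            ringnumber: 1
            bindnetaddr: 10.10.2.0
        }
    }

    nodelist {
        node {
            name: node1
            nodeid: 1
            ring0_addr: 10.10.1.1
            ring1_addr: 10.10.2.1
        }
        # further nodes follow the same pattern with their own ring0/ring1 addresses
    }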

Ceph Storage Network:

Solution No.1:
* Switch #1: LAG with 2x10GB NICs (Switch #2 is also configured for this)
* Switch #2: 1x1GB NIC (active-backup bond with the LAG on Switch #1,
as a short-term fallback)
* If Switch #1 fails, I will manually move the 2x10GB LAG over to Switch #2

Solution No.2:
* Switch #1: 1x10GB NIC (active-backup bonding)
* Switch #2: 1x10GB NIC (active-backup bonding)

I think Solution No.1 will be better for latency, Solution No.2 better for redundancy.

I would not go with Solution No.1, because if the Ceph network falls back to 1 Gbit, your Ceph cluster will most likely become unusable pretty quickly.
I'm also not sure about the claim of better latency in Solution No.1. More bandwidth, yes, but latency doesn't improve by bonding two 10 Gbit links; it would improve with 100 Gbit.
 
