Hello,
I'm a total beginner with Proxmox/Ceph. I have the following hardware available
(no further changes are possible):
6 x Supermicro Server: 2x 10GbE NICs, 4x 1GbE NICs, 2x 256GB NVMe SSDs, 2x 2TB NVMe SSDs, 256GB RAM
2 x Switch: Netgear ProSAFE XS728T
Here are my ideas:
The XS728T is not a stacking switch, so I can't use MLAG. As far as I can see, I only have
the choice between a LAG on one switch or active-backup bonding across two switches,
because no other bonding mode is possible (why?). I plan to have 3 separate networks
(3 VLANs): Corosync Cluster Network, Ceph Storage Network, Proxmox Client Network.
Proxmox Client Network:
* External Switch #3 (outside of our responsibility): LAG with 2x 1GbE NICs
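For that LAG I currently have something like this in mind for /etc/network/interfaces (only a rough sketch; the NIC names eno3/eno4 and the addresses are placeholders):

    # 2x 1GbE LACP LAG towards External Switch #3
    auto bond0
    iface bond0 inet manual
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

    # bridge for the VMs, on top of the LAG
    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.11/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0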
Corosync Cluster Network:
* Switch #1: 1x 1GbE NIC (passive RRP mode)
* Switch #2: 1x 1GbE NIC (passive RRP mode)
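For the two rings I imagine corosync.conf roughly like this (only a sketch, based on the classic corosync 2.x RRP syntax; cluster name and subnets are placeholders):

    totem {
        version: 2
        cluster_name: pve-cluster
        rrp_mode: passive
        interface {
            # ring 0: 1GbE NIC on Switch #1
            ringnumber: 0
            bindnetaddr: 10.10.1.0
        }
        interface {
            # ring 1: 1GbE NIC on Switch #2
            ringnumber: 1
            bindnetaddr: 10.10.2.0
        }
    }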
Ceph Storage Network:
Solution No.1:
* Switch #1: LAG with 2x 10GbE NICs (Switch #2 is configured with the same LAG)
* Switch #2: 1x 1GbE NIC (in an active-backup bond with the LAG from Switch #1,
as a short-term fallback)
* When Switch #1 fails, I would manually move the 2x 10GbE LAG over to Switch #2
Solution No.2:
* Switch #1: 1x 10GbE NIC (active-backup bonding)
* Switch #2: 1x 10GbE NIC (active-backup bonding)
I think Solution No.1 would be better for throughput and Solution No.2 better for redundancy.
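For Solution No.2 the bond itself seems simple; something like this in /etc/network/interfaces (again only a sketch; the NIC names enp65s0f0/enp65s0f1 and the address are placeholders for the two 10GbE NICs):

    # Ceph storage network: active-backup across both switches
    auto bond1
    iface bond1 inet static
        address 10.10.3.11/24
        bond-slaves enp65s0f0 enp65s0f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp65s0f0

As far as I understand, active-backup needs no special configuration on the switches, which is why it should work across the two XS728T.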
Any suggestions? Thanks!
Best regards
Stefan