Is it a good network setup: 9 NICs (gigabit)?

atec666

Is it a good setup? 3 questions ...

First: we have 9 gigabit NICs per node (3 nodes, PVE cluster); is that good for production? (The OSDs are 1 TB 7200 rpm HDDs, 18 OSDs in total: are 256 PGs enough, or should we stay at 128?)

- 1 NIC: corosync cluster, migration, web GUI (PVE nodes): 10.10.20.101-.103/24

- 1 NIC: Ceph cluster network: 10.10.99.0/24
- 4 NICs (LACP): Ceph OSDs, storage replication, public network: 10.10.90.0/24
- 1 NIC: users: admin, infrastructure: 10.10.100.0/24 (VLAN-tagged network)
- 1 NIC: users: VMs, CTs (IPv6 only): 201a:eac:5301:: (VLAN-tagged network)

- 1 NIC: backup network: 10.10.200.0/24

Dedicated physical switches for:
- ceph,
- backup
- users
- cluster corosync, migration, webgui

Second: which is preferred for bridge/bond/device: Open vSwitch or the native Linux stack (we need LACP and VLANs)?

Last: does the main Ceph data traffic (i.e. where I should put my 4 gigabit NICs for LACP) go through the "public network" or the "cluster network"?
 
A few comments from my side:

Don't put Corosync on the same network as any other service that has the potential to clog it up. Migration traffic will definitely use up the full bandwidth, which will cause Corosync problems.

Corosync can be configured to use more than one network (link) and will switch between them automatically if the higher-priority link becomes unusable.
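
A minimal sketch of what that can look like in /etc/pve/corosync.conf with corosync 3 / kronosnet (node names, the second-link addresses and the priority values below are just placeholders for illustration):

  totem {
    cluster_name: pve-cluster
    config_version: 4
    version: 2
    ip_version: ipv4
    secauth: on
    interface {
      linknumber: 0
      knet_link_priority: 20    # higher value = preferred link, AFAIR
    }
    interface {
      linknumber: 1
      knet_link_priority: 10    # fallback link, e.g. the backup network
    }
  }

  nodelist {
    node {
      name: pve1
      nodeid: 1
      quorum_votes: 1
      ring0_addr: 10.10.20.101
      ring1_addr: 10.10.200.101
    }
    # ... pve2 and pve3 follow the same pattern
  }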

If you want to use Ceph, use at least 10GBit or faster networks. I don't think you will enjoy Ceph with only 1GBit NICs.
Regarding pg_num size: the official documentation is very exhaustive [0]. TL;DR: having too many PGs is not really a bad thing, but having too few is definitely problematic.
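
A rough back-of-the-envelope with the usual rule of thumb from that page, assuming one replicated pool with size 3 spanning all 18 OSDs:

  pg_num ≈ (18 OSDs × 100) / 3 replicas = 600 → nearest power of two: 512
  PGs per OSD: 128 × 3 / 18 ≈ 21, 256 × 3 / 18 ≈ 43, 512 × 3 / 18 ≈ 85

So 256 would not be "too many" for this setup; if anything it is on the low side.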

If you can achieve what you need with the native Linux bridge, use it. Fewer dependencies and fewer issues you can run into. LACP and VLANs should work with it, AFAIR.
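
A minimal /etc/network/interfaces sketch of an LACP bond with a VLAN-aware Linux bridge on top (the interface names, bond members and address are placeholders; adapt them to your hardware and to whichever network actually needs the bridge):

  auto bond0
  iface bond0 inet manual
          bond-slaves eno1 eno2 eno3 eno4
          bond-miimon 100
          bond-mode 802.3ad               # LACP; the switch ports must be configured as a LAG too
          bond-xmit-hash-policy layer3+4

  auto vmbr0
  iface vmbr0 inet static
          address 10.10.90.101/24
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094              # VLANs the bridge will pass through to tagged guests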


[0] https://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/
 
Hello Aaron,

thank you for replying.
OK for the native Linux bridge, and we'll go with 128 PGs (the default, in fact).

You read it wrong: Ceph sits on top of an LACP bond of 4 gigabit NICs, so it is not just 1 Gbit (4 Gbit gives me 4 × 125 MB/s ≈ 500 MB/s; for HDDs I think that's enough!), and I don't have the money for a 10 Gb switch or 10 Gb NICs. (Sorry, we are a small LUG in a small town in France ;-))

So:

- 1 NIC: corosync cluster, migration: 10.10.100.0/24
- 1 NIC: Ceph cluster network: 10.10.99.0/24
- 4 NICs (LACP, 4 Gbit/s aggregate bandwidth ;-)): Ceph OSDs, storage replication, public network: 10.10.90.0/24
- 1 NIC: users: admin, infrastructure, web GUI (PVE nodes): 10.10.20.0/24 (VLAN-tagged network)
- 1 NIC: users: VMs, CTs (IPv6 only): 201a:eac:5301:: (VLAN-tagged network)

- 1 NIC: backup network: 10.10.200.0/24

And:

What is the difference between the Ceph "cluster network" and the Ceph "public network"? (I run Ceph and PVE, with the VMs and CTs, on the same hardware × 3.)
And again: does the main Ceph data traffic (i.e. where I should put my 4 gigabit NICs for LACP) go through the "public network" or the "cluster network"?
 
