Is this a good setup? 3 questions ...
First, we have 9 Gigabit NICs per node (3 nodes, PVE cluster). Is this good for production? (The OSDs are 1 TB 7200 rpm HDDs, 18 OSDs in total: are 256 PGs enough, or should we stay at 128? See the rough calculation after the lists below.)
- 1 × cluster corosync, migration, web GUI (PVE nodes): 10.10.20.101-103/24
- 1 × Ceph cluster network: 10.10.99.0/24
- 4 × Ceph OSD (LACP), storage replication (public network): 10.10.90.0/24
- 1 × users: admin, infrastructure: 10.10.100.0/24 (VLAN-tagged network)
- 1 × users: VMs, CTs (IPv6 only): 201a:eac:5301:: (VLAN-tagged network)
- 1 × backup network: 10.10.200.0/24
Dedicated physical switches for:
- Ceph
- backup
- users
- cluster corosync, migration, web GUI
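On the PG question, a rough sketch of the usual rule of thumb (assuming replicated pools with size 3 and most data in a single RBD pool; recent Ceph releases can also let the PG autoscaler manage this):

    total PGs ≈ (number of OSDs × 100) / replica size
              = (18 × 100) / 3
              = 600, rounded to a power of two → 512

That budget is shared across all pools, so with one main pool 256 is on the conservative side rather than too high; with several pools it gets split between them.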
Next, which is preferred for the bridge/bond/device setup: Open vSwitch or native Linux networking? (We need LACP and VLANs.)
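Native Linux bonds and VLAN-aware bridges already cover LACP and VLAN tagging without OVS. As a minimal sketch (the interface names eno1-eno5, the address and the VLAN range are placeholders, not taken from your setup):

auto bond0
iface bond0 inet static
        # LACP (802.3ad) bond over the four Gigabit ports for Ceph;
        # layer3+4 hashing spreads the many OSD connections across the links
        address 10.10.90.101/24
        bond-slaves eno1 eno2 eno3 eno4
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet manual
        # VLAN-aware Linux bridge for the tagged admin/guest networks;
        # the VLAN tag is set per virtual NIC in the PVE GUI
        bridge-ports eno5
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

OVS can do the same; it is mainly worth the extra moving parts if you need OVS-specific features, while for plain LACP + VLANs the Linux stack is simpler to keep running.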
Finally: does the main Ceph data traffic (where I should attach my 4 Gigabit NICs in LACP) go through the "public network" or the "cluster network"?
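For context on that last point: in ceph.conf the split is declared with public_network and cluster_network. Client/VM I/O to the OSDs and all MON traffic run on the public network, while OSD-to-OSD replication, recovery and backfill run on the cluster network. A sketch that simply reuses the subnets from your list (assuming 10.10.90.0/24 stays public and 10.10.99.0/24 stays the cluster network):

[global]
        # client <-> MON/OSD traffic (VM disk I/O) uses this network
        public_network = 10.10.90.0/24
        # OSD <-> OSD replication, recovery and backfill use this one
        cluster_network = 10.10.99.0/24

With size-3 replication each client write crosses the public network once and the cluster network twice, so both sides see real load.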