Hi,
The Ceph docs suggest a single network is sufficient:
https://docs.ceph.com/en/squid/rados/configuration/network-config-ref/
Whilst PVE suggests separating the two, which makes sense if in doubt. I'm assuming that's suggested for situations where hardware/NIC capability varies, e.g. where the public and cluster networks can't go on equally capable interfaces:
https://pve.proxmox.com/pve-docs/chapter-pveceph.html
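For reference, the public/cluster split in Ceph is just two options under [global] in ceph.conf (which PVE keeps at /etc/pve/ceph.conf); if cluster_network is left out, replication and heartbeat traffic simply rides the public network. Subnets below are placeholders, not my real addressing:

    [global]
        # single-network setup: only public_network is required
        public_network = 10.10.10.0/24
        # optional split: setting this moves OSD replication/heartbeat
        # traffic onto a second network
        #cluster_network = 10.10.20.0/24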
I just wanted to double check this makes sense for my setup:
3x Dell servers with 4x 1.92TB SSDs for Ceph, a 4-port 25GbE NIC and a 2-port 1GbE NIC:
- separate, dedicated corosync ring0 on a 1GbE copper OOB switch
- 2x 25GbE ports dedicated to Ceph* - a private, dedicated network on a 100GbE switch with breakouts
- ports 3 and 4 on the 4-port NICs will be used for the other networks required for VM access (not Ceph)
- corosync ring1 on either port 3 or 4, TBC
*these two ports will be LACP bonded and carry a single Ceph network - rough sketch below
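Roughly what I have in mind for that bond in /etc/network/interfaces - interface names, addressing and MTU are placeholders, and it assumes the switch side is configured for 802.3ad as well:

    auto bond0
    iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp65s0f0np0 enp65s0f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        mtu 9000
    #Ceph public (and cluster) traffic on the 2x25GbE LACP bond

The layer3+4 hash policy is so the many OSD/MON TCP flows can actually spread across both links rather than pinning to one.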
Just trying to establish if this is the right move: going with a single Ceph network on the 2x 25GbE ports. Because the ports are identical in capability, I was thinking there's no need to separate Ceph public/cluster. Since corosync is separated from Ceph, and I believe LACP has minimal latency impact, I was leaning towards this approach for simplicity.
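For completeness, the corosync side would just be the standard two knet links in /etc/pve/corosync.conf, something like this (node names and addresses are placeholders):

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 192.168.50.11   # link 0: 1GbE OOB corosync switch
        ring1_addr: 10.20.30.11     # link 1: on port 3 or 4, fallback only
      }
      # pve2 / pve3 follow the same pattern
    }

    totem {
      cluster_name: pve-cluster
      config_version: 3
      version: 2
      ip_version: ipv4-6
      link_mode: passive
      secauth: on
      interface {
        linknumber: 0
      }
      interface {
        linknumber: 1
      }
    }

I believe knet_link_priority can be set per interface if I want to force which link is preferred, but the defaults should be fine here.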
thanks