We currently run a 2-node PVE 3.4 cluster with DRBD8 on Hetzner root servers and have been very happy with it for several years (across several versions).
With the upcoming end of life of Proxmox 3.4 and the problems related to DRBD9, we are looking into switching to a 3-node Ceph HA cluster (OSDs and VMs on the same hosts, as described in the Proxmox wiki: https://pve.proxmox.com/wiki/Ceph_Server ).
My main concern is keeping the hardware budget from multiplying.
Has anyone tried using a single 10 Gbit network for all internal cluster communication, with one NIC per node connected to a dedicated 10 Gbit switch? The nodes have a separate 1 Gbit NIC for the uplink.
On the switch, this could be separated into three (or more) tagged VLANs for the Ceph private network, corosync, and other internal cluster communication, with QoS / rate limiters applied per VLAN.
Proxmox support told us that physical separation of Ceph and corosync is highly recommended, not only because of bandwidth but also because of latency, but that VLANs with QoS / limiters on the switch might alleviate the issue.
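For illustration, this is roughly what I have in mind for the node side in /etc/network/interfaces (just a minimal sketch; the interface name, VLAN IDs and addresses are placeholders, and it assumes the Debian vlan package / 8021q module is available):

# 10 Gbit NIC carrying all tagged internal traffic (name and IDs are placeholders)
auto eth1
iface eth1 inet manual

# VLAN 100: Ceph private network
auto eth1.100
iface eth1.100 inet static
    address 10.10.100.1
    netmask 255.255.255.0

# VLAN 101: corosync cluster communication
auto eth1.101
iface eth1.101 inet static
    address 10.10.101.1
    netmask 255.255.255.0

# VLAN 102: other internal traffic (migration, backups, ...)
auto eth1.102
iface eth1.102 inet static
    address 10.10.102.1
    netmask 255.255.255.0

The QoS / rate limiting itself would then be done on the switch per VLAN, not on the nodes.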
Does anyone have experience with a similar setup?