Hi,
We have a three-node cluster, each node running Proxmox & Ceph.
Each node has four 10Gbit ports: two bonded for Ceph and two bonded for public/private networking.
There are also two 1Gbit ports used purely for Corosync.
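Roughly what the bonding looks like in /etc/network/interfaces on each node (interface names, addresses and the bond mode here are placeholders, not our exact values):

    # public/private bond (VM traffic, MON/MGR/MDS)
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

    # dedicated Ceph OSD bond
    auto bond1
    iface bond1 inet static
        address 10.10.10.11/24
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4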
Link aggregation for Ceph works great; Ceph is nice and happy and shows a green (HEALTH_OK) status.
However, we had an issue with the public/private LAG on node 1 that dropped transfer speeds from Gbit/s down to Mbit/s. While the link was degraded we saw excessive iowait inside our VMs; when backups ran, the load on the affected VM climbed so high that it became unresponsive.
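For what it's worth, this is the sort of thing we checked on node 1 while the LAG was degraded (bond0 being our public/private bond):

    # per-slave state of the bond: look for "MII Status: down" or LACP mismatches
    cat /proc/net/bonding/bond0

    # negotiated speed/duplex on each member NIC
    ethtool eno1
    ethtool eno2

    # error/drop counters on a member
    ip -s link show eno1

    # disk wait inside a slow VM (iostat is from the sysstat package)
    iostat -x 2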
Our monitors, managers & metadata servers all communicate over the public/private network, while our OSDs replicate over the dedicated Ceph network.
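In ceph.conf the split looks roughly like this (subnets are placeholders); as far as I understand, client I/O from the VMs (RBD) also rides the public network, which is why I suspect the degraded LAG is related:

    [global]
        # MON/MGR/MDS and client (RBD) traffic - the bond that degraded
        public_network = 192.168.10.0/24
        # OSD replication/heartbeat traffic - the dedicated Ceph bond
        cluster_network = 10.10.10.0/24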
Is this configuration OK? Why would I be getting the iowait issues?
Thanks,
Chris.