Hi,
I have a small homelab PVE cluster of three nodes (all of which also run Ceph). There are separate networks for Corosync and Ceph.
Recently, I upgraded the network from 1 GbE to 10 GbE. Tests with iperf show that the links now reach close to 10 Gbit/s (around 9.2 Gbit/s).
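The tests were simple point-to-point runs, roughly like this (showing iperf3 syntax; substitute whichever iperf version you use, and 10.x.x.x is just a placeholder for the other node's 10 GbE address):

```
# on node A: start the server
iperf3 -s

# on node B: run the client against node A's 10 GbE address
iperf3 -c 10.x.x.x
```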
However, when I migrate a VM from one node to another, the GUI reports an average migration speed of, e.g., 121.5 MiB/s. That works out to roughly 1 Gbit/s, which would be exceptionally fast for a 1 GbE network (considering overhead) but seems pretty slow for 10 GbE. I have no record of the migration speed on the old 1 GbE network, but I believe it was around 90 MiB/s.
Considering that this is, I think, a memory-to-memory transfer (not disk-to-disk), why is it not (much) faster?
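My back-of-the-envelope conversion of the reported figure:

```
121.5 MiB/s × 1,048,576 B/MiB × 8 bit/B ≈ 1.02 × 10^9 bit/s ≈ 1 Gbit/s
```

So the migration seems pinned at almost exactly the old 1 GbE line rate, even though iperf proves the links can do ~9× that.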
Is this normal, and can I do anything to improve the speed?
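In case it is relevant: I understand there is a `migration` option in /etc/pve/datacenter.cfg that controls which network migration traffic uses and whether it goes through an encrypted (SSH) tunnel. Would something along these lines be the right direction (just a sketch; 10.0.0.0/24 is a placeholder for my 10 GbE subnet, and I realize `insecure` trades encryption for speed)?

```
# /etc/pve/datacenter.cfg (sketch, placeholder subnet)
migration: insecure,network=10.0.0.0/24
```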
Thanks!