I recently upgraded my cluster from PVE8 to PVE9 and noticed that offline and online disk migration (qcow2) between storage backends (NFS) has become SLOW. After a few experiments, I observed that the migration speed is constant and, surprisingly, matches the Read and Write limits set on the disk being migrated. On PVE8, on my 10Gb network, I had decent migration speeds (close to saturating the link), but now I was getting 100 MB/s, which exactly matched the limits configured for that disk. After removing the limits, the speed returned to what it should be. If I set the limits to 30 MB/s, I get a migration speed of 30 MB/s. I haven't checked whether the IOPS limits or the burst limits for either limit type have the same effect.
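For context, the limits I mean are the per-disk bandwidth caps on the drive line in the VM config (the GUI's "Read limit (MB/s)" / "Write limit (MB/s)", i.e. mbps_rd / mbps_wr). The VMID, storage name, disk key and size below are made up, just to illustrate what such a disk line looks like and how the caps can be cleared from the CLI:

# /etc/pve/qemu-server/101.conf -- illustrative disk line, not my actual config
scsi0: nfs-store:101/vm-101-disk-0.qcow2,mbps_rd=100,mbps_wr=100,size=64G

# check the current drive options
qm config 101

# re-setting the drive string without the mbps_* options should drop the caps
# (roughly equivalent to clearing the Bandwidth fields in the GUI)
qm set 101 --scsi0 nfs-store:101/vm-101-disk-0.qcow2,size=64G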
Since I have a lot of VMs in the cluster (it's a work cluster), IOPS and bandwidth are heavily contended, so most VMs have some kind of limits configured.
I can say with complete certainty that neither PVE7 nor PVE8 showed this behavior. A very unexpected new "feature". What did I miss in the changelog?