What is the limiting factor when performing storage migration on a running VM?
When performing storage migration between NFS backends, we see an average transfer speed of 750 Mbit/s for a VM that is switched off. On a running VM we see an average of 300 Mbit/s with the same configuration. The best I have ever achieved on a live storage migration is 450 Mbit/s, but I can't repeat that, and it wasn't sustained anyway. I have tried this on PVE v3.0, 3.1, and now 3.3 with very little variation in the results. Other storage network activity is negligible for the whole duration of the migration (~20 Mbit/s), so it's not interference from another device on the network.
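For reference, the live migration is started roughly like this (a sketch assuming the standard qm move_disk CLI in PVE 3.x; the VMID, disk name, and target storage name are placeholders, and the "Move disk" action in the GUI does the same thing):

    # Move disk virtio0 of VM 100 to the target NFS storage while the VM keeps running.
    # VMID 100, virtio0, and "nfs-target" are placeholders for this example.
    # --delete 1 removes the source disk image once the copy has completed.
    qm move_disk 100 virtio0 nfs-target --delete 1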
Since I have verified that this is not a hardware limitation, I would like to know what it is about a running VM that creates the bottleneck, so that I can tune the speed of live storage migration.