VM Migration Speed 100 MiB/s

insightful

Sep 7, 2023
Hi,
We have assigned two network interfaces, 1G and 10G, to the Proxmox host, but the migration speed is still very slow.
I am using Proxmox Virtual Environment 8.0.4.
Please refer to the attached screenshots and help me figure out where I am going wrong.

Thanks.
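By default, Proxmox sends migration traffic over the cluster/management network unless a dedicated migration network is configured, so a quick check is whether the 10G link is actually being used. A rough sketch, with eno1 standing in for whatever the 10G NIC is called:

Code:
ip -br addr                   # which subnet lives on which NIC
ethtool eno1 | grep -i speed  # negotiated link speed; eno1 is a placeholder name
cat /etc/pve/datacenter.cfg   # is a dedicated migration network set cluster-wide?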

[Screenshots attached]
Hey guys,

I've got the same problem as @insightful within my 10G cluster network.
I performed a migration via the CLI:

Code:
qm migrate 105 pve2 -migration_network 172.17.6.0/24 -online -migration_type insecure -force -with-local-disks
...
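For @insightful: the same migration network and type can also be set cluster-wide in /etc/pve/datacenter.cfg, so that migrations started from the GUI take the fast link as well. A minimal sketch mirroring the options I used above (adjust the subnet to your own):

Code:
migration: insecure,network=172.17.6.0/24

The relevant part of the migration log: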
Code:
...
drive-scsi0: transferred 64.0 GiB of 64.0 GiB (100.00%) in 7m 50s
drive-scsi0: transferred 64.0 GiB of 64.0 GiB (100.00%) in 7m 51s, ready
all 'mirror' jobs are ready
2023-11-05 13:47:12 starting online/live migration on tcp:172.17.6.12:60000
2023-11-05 13:47:12 set migration capabilities
2023-11-05 13:47:13 migration downtime limit: 100 ms
2023-11-05 13:47:13 migration cachesize: 512.0 MiB
2023-11-05 13:47:13 set migration parameters
2023-11-05 13:47:13 start migrate command to tcp:172.17.6.12:60000
2023-11-05 13:47:14 migration active, transferred 898.7 MiB of 4.0 GiB VM-state, 992.9 MiB/s
2023-11-05 13:47:15 migration active, transferred 1.9 GiB of 4.0 GiB VM-state, 5.7 GiB/s
2023-11-05 13:47:18 average migration speed: 823.3 MiB/s - downtime 3137 ms
2023-11-05 13:47:18 migration status: completed
all 'mirror' jobs are ready
drive-efidisk0: Completing block job_id...
drive-efidisk0: Completed successfully.
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
drive-efidisk0: mirror-job finished
drive-scsi0: mirror-job finished
2023-11-05 13:47:19 stopping NBD storage migration server on target.
2023-11-05 14:04:10 migration finished successfully (duration 00:24:57)

wow: duration 00:24:57

BUT (actually, I know the cause): here is a migration from pve2 to pve3 with the same CLI command:

Code:
qm migrate 105 pve3 -migration_network 172.17.6.0/24 -online -migration_type insecure -force -with-local-disks

Code:
...
drive-scsi0: transferred 64.0 GiB of 64.0 GiB (100.00%) in 1m 32s, ready
all 'mirror' jobs are ready
2023-11-05 14:08:07 efidisk0: start migration to nbd:172.17.6.13:60001:exportname=drive-efidisk0
drive mirror is starting for drive-efidisk0
drive-efidisk0: transferred 528.0 KiB of 528.0 KiB (100.00%) in 0s
drive-efidisk0: transferred 528.0 KiB of 528.0 KiB (100.00%) in 1s, ready
all 'mirror' jobs are ready
2023-11-05 14:08:08 starting online/live migration on tcp:172.17.6.13:60000
2023-11-05 14:08:08 set migration capabilities
2023-11-05 14:08:08 migration downtime limit: 100 ms
2023-11-05 14:08:08 migration cachesize: 512.0 MiB
2023-11-05 14:08:08 set migration parameters
2023-11-05 14:08:08 start migrate command to tcp:172.17.6.13:60000
2023-11-05 14:08:09 migration active, transferred 1.0 GiB of 4.0 GiB VM-state, 1.1 GiB/s
2023-11-05 14:08:10 migration active, transferred 2.1 GiB of 4.0 GiB VM-state, 1.2 GiB/s
2023-11-05 14:08:11 average migration speed: 1.3 GiB/s - downtime 197 ms
2023-11-05 14:08:11 migration status: completed
all 'mirror' jobs are ready
drive-efidisk0: Completing block job_id...
drive-efidisk0: Completed successfully.
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
drive-efidisk0: mirror-job finished
drive-scsi0: mirror-job finished
2023-11-05 14:08:12 stopping NBD storage migration server on target.
2023-11-05 14:08:16 migration finished successfully (duration 00:01:46)

The first host, pve1, uses HDDs and SSDs, while pve2 and pve3 use NVMe drives... I hadn't thought about that at first.
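In other words, the source disks on pve1, not the 10G network, limited the first run (the VM-state transfer itself averaged 823.3 MiB/s and 1.3 GiB/s respectively). If anyone wants to confirm where the bottleneck sits before blaming the disks, a rough sketch; the target address and path are just examples based on my layout above:

Code:
iperf3 -c 172.17.6.13    # raw network throughput (run "iperf3 -s" on the target node first)
pveperf /var/lib/vz      # rough I/O figures; point it at whatever storage backs the VM disks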
Just a field report.
Cheers
ako