Hello Proxmox,
If I start a bulk migration from pve02 to pve01, 7 VMs migrate without any problem, but one failed with this error:
Code:
task started by HA resource agent
TASK ERROR: failed to get ip for node 'pve01' in network '10.100.100.231/24'
10.100.100.231 is the Ceph IP of the pve01 node. I started the bulk migration with multiple parallel jobs. After it failed, I tried to migrate only the failed VM again, and it was successful:
Code:
task started by HA resource agent
2021-11-22 15:26:51 use dedicated network address for sending migration traffic (10.100.100.231)
2021-11-22 15:26:51 starting migration of VM 108 to node 'pve01' (10.100.100.231)
2021-11-22 15:26:51 starting VM 108 on remote node 'pve01'
2021-11-22 15:26:52 start remote tunnel
2021-11-22 15:26:53 ssh tunnel ver 1
2021-11-22 15:26:53 starting online/live migration on unix:/run/qemu-server/108.migrate
2021-11-22 15:26:53 set migration capabilities
2021-11-22 15:26:53 migration downtime limit: 100 ms
2021-11-22 15:26:53 migration cachesize: 512.0 MiB
2021-11-22 15:26:53 set migration parameters
2021-11-22 15:26:53 start migrate command to unix:/run/qemu-server/108.migrate
2021-11-22 15:26:54 migration active, transferred 544.9 MiB of 3.9 GiB VM-state, 560.8 MiB/s
2021-11-22 15:26:55 migration active, transferred 928.4 MiB of 3.9 GiB VM-state, 456.8 MiB/s
2021-11-22 15:26:56 migration active, transferred 1.3 GiB of 3.9 GiB VM-state, 411.4 MiB/s
2021-11-22 15:26:57 migration active, transferred 1.7 GiB of 3.9 GiB VM-state, 368.8 MiB/s
2021-11-22 15:26:58 migration active, transferred 2.0 GiB of 3.9 GiB VM-state, 350.0 MiB/s
2021-11-22 15:26:59 migration active, transferred 2.6 GiB of 3.9 GiB VM-state, 339.4 MiB/s
2021-11-22 15:27:00 average migration speed: 573.8 MiB/s - downtime 50 ms
2021-11-22 15:27:00 migration status: completed
2021-11-22 15:27:02 migration finished successfully (duration 00:00:11)
TASK OK
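For context, the dedicated migration network on this cluster is set in /etc/pve/datacenter.cfg, roughly as sketched below (the exact subnet and type value are assumptions based on the Ceph IP 10.100.100.231 mentioned above):
Code:
migration: secure,network=10.100.100.0/24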
Is this a bug, or maybe a problem with a saturated link? If it is caused by the link, can this be avoided somehow?