Migration never ends

Filin Filinov

Member
Aug 13, 2021
Hi. I have a 3-node Proxmox cluster with local storage.
I am testing live migration of a VM running CentOS 7 with PostgreSQL installed, with performance tests running to simulate workload.

When I try to live-migrate it between nodes, the migration never finishes: at around 96% the volume of data to transfer starts growing, and the progress stays stuck at roughly 96-98%. I let the test run for 8 hours and the migration did not complete until the load was removed.

I've tried limiting the VM's disk read and write bandwidth. The first migration attempt, without limits, showed about 300 MB/s read and 100 MB/s write on average. The second attempt, with limits applied, showed about 25 MB/s read and 13 MB/s write on average. The stuck percentage did not change either way.

I also tried disabling data transfer over SSH (migration: type=insecure) - that didn't help either. There is no other load on the cluster nodes.
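For reference, a rough sketch of how the two tweaks were applied (VMID 1002 and the storage name are taken from my config below; adjust for your setup):

Code:
# per-disk bandwidth limits in MB/s on the VM's scsi0
qm set 1002 --scsi0 local-lvm:vm-1002-disk-0,cache=directsync,discard=on,mbps_rd=50,mbps_wr=30,size=200G,ssd=1

# /etc/pve/datacenter.cfg - send migration traffic without the SSH tunnel
migration: type=insecure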
Any ideas?


Package versions
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-5 (running version: 7.1-5/6fe299a0)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-3
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
 
Hi,

How much memory does that VM have?
Could you please post the task log for the migration process and the VM config?
Is there a lot of memory changing during the migration process?
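If you want to watch this while the migration is running, here is a rough sketch using the QEMU human monitor (VMID 1002 is just an example):

Code:
# open the QEMU human monitor of the running VM
qm monitor 1002
# inside the monitor: RAM migration statistics, including the dirty pages rate
info migrate
# status of running block jobs (the drive-mirror used for local-disk migration)
info block-jobs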
 
VM Config:
Code:
agent: 1
balloon: 0
boot: order=scsi0;ide2;net0
cores: 8
ide2: none,media=cdrom
memory: 16384
name: pvetest-pg1
net0: virtio=C6:38:B4:FA:02:5D,bridge=vmbr1,tag=1111
numa: 1
onboot: 1
ostype: l26
scsi0: local-lvm:vm-1002-disk-0,cache=directsync,discard=on,format=raw,mbps_rd=50,mbps_wr=30,size=200G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=11e1cdc9-6027-4863-ad25-aa770b7fefdc
sockets: 1
vmgenid: 779af68d-dd6d-4aef-a978-2f18e922dce8


Task log (truncated)

Code:
2021-12-02 15:42:27 starting migration of VM 1002 to node 'proxmox1' (172.18.xx.xx)
  Command failed with status code 5.
command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
2021-12-02 15:42:28 found local disk 'local-lvm:vm-1002-disk-0' (in current VM config)
2021-12-02 15:42:28 starting VM 1002 on remote node 'proxmox1'
2021-12-02 15:42:30 volume 'local-lvm:vm-1002-disk-0' is 'local_lvm_ssd_raid10_data1:vm-1002-disk-0' on the target
2021-12-02 15:42:30 start remote tunnel
2021-12-02 15:42:31 ssh tunnel ver 1
2021-12-02 15:42:31 starting storage migration
2021-12-02 15:42:31 scsi0: start migration to nbd:172.18.157.2:60001:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 54.0 MiB of 200.0 GiB (0.03%) in 3m 47s
drive-scsi0: transferred 148.0 MiB of 200.0 GiB (0.07%) in 3m 48s
drive-scsi0: transferred 228.0 MiB of 200.0 GiB (0.11%) in 3m 49s
drive-scsi0: transferred 316.0 MiB of 200.0 GiB (0.15%) in 3m 50s
drive-scsi0: transferred 397.0 MiB of 200.0 GiB (0.19%) in 3m 51s
drive-scsi0: transferred 492.0 MiB of 200.0 GiB (0.24%) in 3m 52s
drive-scsi0: transferred 592.0 MiB of 200.0 GiB (0.29%) in 3m 53s
drive-scsi0: transferred 699.0 MiB of 200.0 GiB (0.34%) in 3m 54s
drive-scsi0: transferred 767.0 MiB of 200.0 GiB (0.37%) in 3m 55s
drive-scsi0: transferred 859.0 MiB of 200.0 GiB (0.42%) in 3m 56s
drive-scsi0: transferred 963.0 MiB of 200.0 GiB (0.47%) in 3m 57s
drive-scsi0: transferred 1.0 GiB of 200.0 GiB (0.50%) in 3m 58s
drive-scsi0: transferred 1.1 GiB of 200.0 GiB (0.54%) in 3m 59s


drive-scsi0: transferred 255.3 GiB of 259.3 GiB (98.45%) in 1h 16m 39s
drive-scsi0: transferred 255.4 GiB of 259.4 GiB (98.46%) in 1h 16m 40s
drive-scsi0: transferred 255.4 GiB of 259.4 GiB (98.46%) in 1h 16m 41s
drive-scsi0: transferred 255.4 GiB of 259.4 GiB (98.47%) in 1h 16m 42s
drive-scsi0: transferred 255.4 GiB of 259.4 GiB (98.48%) in 1h 16m 43s
drive-scsi0: transferred 255.5 GiB of 259.4 GiB (98.48%) in 1h 16m 44s
drive-scsi0: transferred 255.5 GiB of 259.4 GiB (98.49%) in 1h 16m 45s
drive-scsi0: transferred 255.5 GiB of 259.4 GiB (98.50%) in 1h 16m 46s
drive-scsi0: transferred 255.5 GiB of 259.4 GiB (98.50%) in 1h 16m 47s
drive-scsi0: transferred 255.6 GiB of 259.5 GiB (98.50%) in 1h 16m 48s
drive-scsi0: transferred 255.6 GiB of 259.5 GiB (98.51%) in 1h 16m 49s
drive-scsi0: transferred 255.6 GiB of 259.5 GiB (98.49%) in 1h 16m 50s
drive-scsi0: transferred 255.6 GiB of 259.6 GiB (98.49%) in 1h 16m 51s
drive-scsi0: transferred 255.6 GiB of 259.6 GiB (98.49%) in 1h 16m 52s
drive-scsi0: transferred 255.7 GiB of 259.6 GiB (98.49%) in 1h 16m 53s
drive-scsi0: transferred 255.7 GiB of 259.6 GiB (98.48%) in 1h 16m 54s
drive-scsi0: transferred 255.7 GiB of 259.7 GiB (98.47%) in 1h 16m 55s
drive-scsi0: transferred 255.7 GiB of 259.8 GiB (98.45%) in 1h 16m 56s
drive-scsi0: transferred 255.7 GiB of 259.8 GiB (98.44%) in 1h 16m 57s
drive-scsi0: transferred 255.8 GiB of 259.8 GiB (98.44%) in 1h 16m 58s
drive-scsi0: transferred 255.8 GiB of 259.8 GiB (98.45%) in 1h 16m 59s
drive-scsi0: transferred 255.8 GiB of 259.9 GiB (98.45%) in 1h 17m
drive-scsi0: transferred 255.8 GiB of 259.9 GiB (98.44%) in 1h 17m 1s
drive-scsi0: transferred 255.9 GiB of 259.9 GiB (98.44%) in 1h 17m 2s
drive-scsi0: transferred 255.9 GiB of 260.0 GiB (98.43%) in 1h 17m 3s
drive-scsi0: transferred 255.9 GiB of 260.0 GiB (98.43%) in 1h 17m 4s
drive-scsi0: transferred 255.9 GiB of 260.0 GiB (98.43%) in 1h 17m 5s
drive-scsi0: transferred 256.0 GiB of 260.0 GiB (98.43%) in 1h 17m 6s




drive-scsi0: transferred 433.8 GiB of 440.1 GiB (98.58%) in 2h 54m 16s
drive-scsi0: transferred 433.9 GiB of 440.1 GiB (98.58%) in 2h 54m 17s
drive-scsi0: transferred 433.9 GiB of 440.1 GiB (98.58%) in 2h 54m 18s
drive-scsi0: transferred 433.9 GiB of 440.2 GiB (98.58%) in 2h 54m 19s
drive-scsi0: transferred 434.0 GiB of 440.2 GiB (98.59%) in 2h 54m 20s
drive-scsi0: transferred 434.0 GiB of 440.2 GiB (98.59%) in 2h 54m 21s
drive-scsi0: transferred 434.0 GiB of 440.2 GiB (98.59%) in 2h 54m 22s
drive-scsi0: transferred 434.0 GiB of 440.4 GiB (98.56%) in 2h 54m 24s
drive-scsi0: transferred 434.0 GiB of 440.5 GiB (98.53%) in 2h 54m 25s
drive-scsi0: transferred 434.1 GiB of 440.6 GiB (98.51%) in 2h 54m 26s
drive-scsi0: transferred 434.1 GiB of 440.6 GiB (98.51%) in 2h 54m 27s
drive-scsi0: transferred 434.1 GiB of 440.6 GiB (98.51%) in 2h 54m 28s
drive-scsi0: transferred 434.1 GiB of 440.9 GiB (98.45%) in 2h 54m 29s
drive-scsi0: transferred 434.1 GiB of 441.0 GiB (98.44%) in 2h 54m 30s
drive-scsi0: transferred 434.1 GiB of 441.1 GiB (98.41%) in 2h 54m 31s
drive-scsi0: transferred 434.1 GiB of 441.2 GiB (98.39%) in 2h 54m 32s
drive-scsi0: transferred 434.2 GiB of 441.3 GiB (98.38%) in 2h 54m 33s
drive-scsi0: transferred 434.2 GiB of 441.3 GiB (98.37%) in 2h 54m 34s
drive-scsi0: transferred 434.2 GiB of 441.5 GiB (98.35%) in 2h 54m 35s
drive-scsi0: transferred 434.2 GiB of 441.6 GiB (98.32%) in 2h 54m 36s
drive-scsi0: transferred 434.2 GiB of 441.7 GiB (98.31%) in 2h 54m 37s
drive-scsi0: transferred 434.2 GiB of 441.8 GiB (98.29%) in 2h 54m 38s
drive-scsi0: transferred 434.2 GiB of 441.9 GiB (98.26%) in 2h 54m 39s
drive-scsi0: transferred 434.3 GiB of 442.0 GiB (98.25%) in 2h 54m 40s
drive-scsi0: transferred 434.3 GiB of 442.0 GiB (98.25%) in 2h 54m 41s
drive-scsi0: transferred 434.3 GiB of 442.2 GiB (98.20%) in 2h 54m 42s
drive-scsi0: transferred 434.3 GiB of 442.3 GiB (98.19%) in 2h 54m 43s
drive-scsi0: transferred 434.3 GiB of 442.4 GiB (98.17%) in 2h 54m 44s
drive-scsi0: transferred 434.3 GiB of 442.4 GiB (98.16%) in 2h 54m 45s
drive-scsi0: transferred 434.3 GiB of 442.6 GiB (98.14%) in 2h 54m 46s
drive-scsi0: transferred 434.3 GiB of 442.7 GiB (98.12%) in 2h 54m 47s
drive-scsi0: transferred 434.4 GiB of 442.7 GiB (98.11%) in 2h 54m 48s
drive-scsi0: transferred 434.4 GiB of 442.7 GiB (98.11%) in 2h 54m 49s
drive-scsi0: transferred 434.4 GiB of 442.9 GiB (98.07%) in 2h 54m 50s
drive-scsi0: transferred 434.4 GiB of 442.9 GiB (98.07%) in 2h 54m 51s
drive-scsi0: transferred 434.4 GiB of 443.0 GiB (98.07%) in 2h 54m 52s
drive-scsi0: transferred 434.4 GiB of 443.0 GiB (98.06%) in 2h 54m 53s
drive-scsi0: transferred 434.4 GiB of 443.1 GiB (98.05%) in 2h 54m 54s
drive-scsi0: transferred 434.4 GiB of 443.2 GiB (98.03%) in 2h 54m 55s
drive-scsi0: transferred 434.4 GiB of 443.2 GiB (98.03%) in 2h 54m 56s
drive-scsi0: transferred 434.5 GiB of 443.3 GiB (98.01%) in 2h 54m 57s
drive-scsi0: transferred 434.5 GiB of 443.3 GiB (98.01%) in 2h 54m 58s
drive-scsi0: transferred 434.5 GiB of 443.4 GiB (98.00%) in 2h 54m 59s
drive-scsi0: transferred 434.5 GiB of 443.4 GiB (97.99%) in 2h 55m
drive-scsi0: transferred 434.6 GiB of 443.5 GiB (97.99%) in 2h 55m 1s
drive-scsi0: transferred 434.6 GiB of 443.5 GiB (97.99%) in 2h 55m 2s
drive-scsi0: transferred 434.6 GiB of 443.5 GiB (97.99%) in 2h 55m 3s
drive-scsi0: transferred 434.6 GiB of 443.5 GiB (98.00%) in 2h 55m 4s
drive-scsi0: transferred 434.7 GiB of 443.6 GiB (97.98%) in 2h 55m 5s
drive-scsi0: transferred 434.7 GiB of 443.6 GiB (97.98%) in 2h 55m 6s
drive-scsi0: transferred 434.7 GiB of 443.7 GiB (97.98%) in 2h 55m 7s
drive-scsi0: transferred 434.7 GiB of 443.7 GiB (97.97%) in 2h 55m 8s
drive-scsi0: transferred 434.7 GiB of 443.7 GiB (97.97%) in 2h 55m 9s
drive-scsi0: transferred 434.8 GiB of 443.8 GiB (97.97%) in 2h 55m 10s
drive-scsi0: transferred 434.8 GiB of 443.8 GiB (97.97%) in 2h 55m 11s
drive-scsi0: transferred 434.9 GiB of 443.8 GiB (97.98%) in 2h 55m 12s
drive-scsi0: transferred 434.9 GiB of 443.9 GiB (97.97%) in 2h 55m 13s
drive-scsi0: transferred 435.0 GiB of 444.0 GiB (97.98%) in 2h 55m 14s
drive-scsi0: transferred 435.1 GiB of 444.0 GiB (97.98%) in 2h 55m 15s
drive-scsi0: transferred 435.1 GiB of 444.0 GiB (97.99%) in 2h 55m 16s
drive-scsi0: transferred 435.2 GiB of 444.1 GiB (98.00%) in 2h 55m 17s
drive-scsi0: transferred 435.3 GiB of 444.1 GiB (98.01%) in 2h 55m 18s
drive-scsi0: transferred 435.3 GiB of 444.1 GiB (98.02%) in 2h 55m 19s
drive-scsi0: transferred 435.4 GiB of 444.2 GiB (98.03%) in 2h 55m 20s
drive-scsi0: transferred 435.5 GiB of 444.2 GiB (98.04%) in 2h 55m 21s
drive-scsi0: transferred 435.6 GiB of 444.2 GiB (98.05%) in 2h 55m 22s
drive-scsi0: transferred 435.6 GiB of 444.3 GiB (98.06%) in 2h 55m 23s
drive-scsi0: transferred 435.7 GiB of 444.3 GiB (98.07%) in 2h 55m 24s
drive-scsi0: transferred 435.8 GiB of 444.3 GiB (98.08%) in 2h 55m 25s
drive-scsi0: transferred 435.9 GiB of 444.4 GiB (98.08%) in 2h 55m 26s
drive-scsi0: transferred 435.9 GiB of 444.4 GiB (98.09%) in 2h 55m 27s
drive-scsi0: transferred 436.0 GiB of 444.4 GiB (98.10%) in 2h 55m 28s
drive-scsi0: transferred 436.1 GiB of 444.5 GiB (98.11%) in 2h 55m 29s
drive-scsi0: transferred 436.1 GiB of 444.5 GiB (98.12%) in 2h 55m 30s
drive-scsi0: transferred 436.2 GiB of 444.5 GiB (98.13%) in 2h 55m 31s
drive-scsi0: transferred 436.3 GiB of 444.6 GiB (98.14%) in 2h 55m 32s
drive-scsi0: transferred 436.3 GiB of 444.6 GiB (98.14%) in 2h 55m 33s
drive-scsi0: transferred 436.4 GiB of 444.6 GiB (98.15%) in 2h 55m 34s
drive-scsi0: transferred 436.5 GiB of 444.7 GiB (98.16%) in 2h 55m 35s
drive-scsi0: transferred 436.6 GiB of 444.7 GiB (98.17%) in 2h 55m 36s
drive-scsi0: transferred 436.6 GiB of 444.7 GiB (98.18%) in 2h 55m 37s
drive-scsi0: transferred 436.7 GiB of 444.7 GiB (98.19%) in 2h 55m 38s
drive-scsi0: transferred 436.8 GiB of 444.8 GiB (98.20%) in 2h 55m 39s
drive-scsi0: transferred 436.8 GiB of 444.8 GiB (98.20%) in 2h 55m 40s
drive-scsi0: transferred 436.9 GiB of 444.9 GiB (98.21%) in 2h 55m 41s
drive-scsi0: transferred 436.9 GiB of 444.9 GiB (98.21%) in 2h 55m 42s
drive-scsi0: transferred 437.0 GiB of 444.9 GiB (98.21%) in 2h 55m 43s
drive-scsi0: transferred 437.0 GiB of 444.9 GiB (98.22%) in 2h 55m 44s
drive-scsi0: transferred 437.0 GiB of 444.9 GiB (98.22%) in 2h 55m 45s
drive-scsi0: transferred 437.0 GiB of 445.0 GiB (98.21%) in 2h 55m 46s
drive-scsi0: transferred 437.1 GiB of 445.0 GiB (98.21%) in 2h 55m 47s
drive-scsi0: transferred 437.1 GiB of 445.1 GiB (98.20%) in 2h 55m 48s
drive-scsi0: transferred 437.1 GiB of 445.1 GiB (98.20%) in 2h 55m 49s
drive-scsi0: transferred 437.1 GiB of 445.1 GiB (98.20%) in 2h 55m 50s
drive-scsi0: transferred 437.2 GiB of 445.2 GiB (98.20%) in 2h 55m 51s
drive-scsi0: transferred 437.2 GiB of 445.2 GiB (98.20%) in 2h 55m 52s
drive-scsi0: transferred 437.2 GiB of 445.2 GiB (98.20%) in 2h 55m 53s
drive-scsi0: transferred 437.2 GiB of 445.3 GiB (98.20%) in 2h 55m 54s
drive-scsi0: transferred 437.2 GiB of 445.3 GiB (98.20%) in 2h 55m 55s
drive-scsi0: transferred 437.3 GiB of 445.3 GiB (98.20%) in 2h 55m 56s
drive-scsi0: transferred 437.3 GiB of 445.4 GiB (98.20%) in 2h 55m 57s
drive-scsi0: transferred 437.4 GiB of 445.4 GiB (98.20%) in 2h 55m 58s
drive-scsi0: transferred 437.4 GiB of 445.4 GiB (98.20%) in 2h 55m 59s
drive-scsi0: Cancelling block job
drive-scsi0: Done.
2021-12-02 18:38:32 ERROR: online migrate failure - block job (mirror) error: interrupted by signal
2021-12-02 18:38:32 aborting phase 2 - cleanup resources
2021-12-02 18:38:32 migrate_cancel
2021-12-02 18:38:34 ERROR: writing to tunnel failed: broken pipe
2021-12-02 18:38:34 ERROR: migration finished with problems (duration 02:56:07)
TASK ERROR: migration problems
 
What kind of disks do you have in use? If you watch the migration, does it go fairly fast at the beginning and then at some point slow down to that very low speed?
 
So the network over which the migration is going is a 1Gbit one?

Is there anything in the VM causing a lot of writes that are so fast that the migration network could be a bottleneck?
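A quick way to compare the two (a rough sketch; iostat and sar come from the sysstat package):

Code:
# inside the guest: average read/write throughput per device, refreshed every 5 s
iostat -mx 5
# on the source node: per-interface network throughput, refreshed every 5 s
sar -n DEV 5
# rule of thumb: 1 Gbit/s is roughly 110-115 MB/s on the wire; if the guest keeps
# writing faster than the mirror can copy over the network, the block job never
# converges and the progress hovers around 96-98%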
 
Does anyone else have this problem, or is it just me?
I am also having a problem with migration.
Source - Virtual Environment 6.4-8
Destination - Virtual Environment 6.4-13




Code:
task started by HA resource agent
2022-01-18 00:24:26 use dedicated network address for sending migration traffic (192.168.231.103)
2022-01-18 00:24:26 starting migration of VM 224 to node 'n3' (192.168.231.103)
2022-01-18 00:24:26 starting VM 224 on remote node 'n3'
2022-01-18 00:24:30 start remote tunnel
2022-01-18 00:24:31 ssh tunnel ver 1
2022-01-18 00:24:31 starting online/live migration on unix:/run/qemu-server/224.migrate
2022-01-18 00:24:31 set migration capabilities
2022-01-18 00:24:31 migration downtime limit: 100 ms
2022-01-18 00:24:31 migration cachesize: 4.0 GiB
2022-01-18 00:24:31 set migration parameters
2022-01-18 00:24:31 start migrate command to unix:/run/qemu-server/224.migrate
2022-01-18 00:24:32 migration active, transferred 276.9 MiB of 24.0 GiB VM-state, 479.8 MiB/s
2022-01-18 00:24:32 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:33 migration active, transferred 559.0 MiB of 24.0 GiB VM-state, 312.2 MiB/s
2022-01-18 00:24:33 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:34 migration active, transferred 850.1 MiB of 24.0 GiB VM-state, 259.5 MiB/s
2022-01-18 00:24:34 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:35 migration active, transferred 1.1 GiB of 24.0 GiB VM-state, 507.3 MiB/s
2022-01-18 00:24:35 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:36 migration active, transferred 1.4 GiB of 24.0 GiB VM-state, 420.9 MiB/s
2022-01-18 00:24:36 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:37 migration active, transferred 1.7 GiB of 24.0 GiB VM-state, 410.9 MiB/s
2022-01-18 00:24:37 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:38 migration active, transferred 1.9 GiB of 24.0 GiB VM-state, 264.5 MiB/s
2022-01-18 00:24:38 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:39 migration active, transferred 2.2 GiB of 24.0 GiB VM-state, 309.9 MiB/s
2022-01-18 00:24:39 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:40 migration active, transferred 2.5 GiB of 24.0 GiB VM-state, 363.0 MiB/s
2022-01-18 00:24:40 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:41 migration active, transferred 2.8 GiB of 24.0 GiB VM-state, 330.5 MiB/s
2022-01-18 00:24:41 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:24:42 migration active, transferred 3.1 GiB of 24.0 GiB VM-state, 418.4 MiB/s
2022-01-18 00:24:42 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
...
...
...
2022-01-18 00:25:34 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:25:35 migration active, transferred 18.0 GiB of 24.0 GiB VM-state, 408.5 MiB/s
2022-01-18 00:25:35 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:25:36 migration active, transferred 18.3 GiB of 24.0 GiB VM-state, 475.2 MiB/s
2022-01-18 00:25:36 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:25:37 migration active, transferred 18.6 GiB of 24.0 GiB VM-state, 511.1 MiB/s
2022-01-18 00:25:37 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 78.70%
2022-01-18 00:25:38 migration active, transferred 19.0 GiB of 24.0 GiB VM-state, 18.4 MiB/s
2022-01-18 00:25:38 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 1.16%
2022-01-18 00:25:39 migration active, transferred 19.0 GiB of 24.0 GiB VM-state, 24.6 MiB/s
2022-01-18 00:25:39 xbzrle: send updates to 7714 pages in 195.2 KiB encoded memory, cache-miss 1.16%
2022-01-18 00:25:40 migration status error: failed
2022-01-18 00:25:40 ERROR: online migrate failure - aborting
2022-01-18 00:25:40 aborting phase 2 - cleanup resources
2022-01-18 00:25:40 migrate_cancel
2022-01-18 00:25:43 ERROR: migration finished with problems (duration 00:01:17)
TASK ERROR: migration problems


Code:
agent: 1
balloon: 0
boot: order=scsi0
cores: 6
ide2: none,media=cdrom
memory: 24576
name: Op
net0: virtio=52:D6:49:25:88:2D,bridge=vmbr2001,rate=87
numa: 0
onboot: 1
ostype: win7
scsi0: HPFC2TB:vm-224-disk-1,cache=writeback,size=180G
scsi1: HPFC2TB:vm-224-disk-0,size=20G
scsihw: virtio-scsi-pci
smbios1: uuid=9e4d37e1-b3d6-413e-82f5-5d49bdce6728
sockets: 2
vmgenid: 8b1c881e-7517-42da-9465-f613fb004329

I don't understand why yet.
 
