TASK ERROR: migration problems

djsami

Hello

I have a VM migration problem.


Code:
2022-05-28 11:29:06 starting migration of VM 156 to node 'pve16'
2022-05-28 11:29:07 found local disk 'local-lvm-2:vm-156-disk-0' (in current VM config)
2022-05-28 11:29:07 starting VM 156 on remote node 'pve16'
2022-05-28 11:29:11 volume 'local-lvm-2:vm-156-disk-0' is 'local-lvm-2:vm-156-disk-0' on the target
2022-05-28 11:29:11 start remote tunnel
2022-05-28 11:29:12 ssh tunnel ver 1
2022-05-28 11:29:12 starting storage migration
2022-05-28 11:29:12 scsi0: start migration to nbd:unix:/run/qemu-server/156_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
drive-scsi0: transferred 28.0 MiB of 40.0 GiB (0.07%) in 21s
drive-scsi0: transferred 139.0 MiB of 40.0 GiB (0.34%) in 22s

drive-scsi0: transferred 39.9 GiB of 40.0 GiB (99.85%) in 7m 10s
drive-scsi0: transferred 40.0 GiB of 40.0 GiB (100.00%) in 7m 11s, ready
all 'mirror' jobs are ready
2022-05-28 11:36:23 starting online/live migration on unix:/run/qemu-server/156.migrate
2022-05-28 11:36:23 set migration capabilities
2022-05-28 11:36:23 migration downtime limit: 100 ms
2022-05-28 11:36:23 migration cachesize: 256.0 MiB
2022-05-28 11:36:23 set migration parameters
2022-05-28 11:36:23 start migrate command to unix:/run/qemu-server/156.migrate
2022-05-28 11:36:24 migration active, transferred 104.0 MiB of 2.0 GiB VM-state, 104.2 MiB/s
2022-05-28 11:36:25 migration active, transferred 220.0 MiB of 2.0 GiB VM-state, 118.2 MiB/s
2022-05-28 11:36:26 migration active, transferred 326.8 MiB of 2.0 GiB VM-state, 121.7 MiB/s
2022-05-28 11:36:27 migration active, transferred 434.0 MiB of 2.0 GiB VM-state, 136.9 MiB/s
2022-05-28 11:36:28 migration active, transferred 541.6 MiB of 2.0 GiB VM-state, 127.3 MiB/s
2022-05-28 11:36:29 migration active, transferred 649.3 MiB of 2.0 GiB VM-state, 145.5 MiB/s
2022-05-28 11:36:30 migration active, transferred 757.7 MiB of 2.0 GiB VM-state, 134.2 MiB/s
2022-05-28 11:36:31 migration active, transferred 864.9 MiB of 2.0 GiB VM-state, 148.2 MiB/s
2022-05-28 11:36:32 migration active, transferred 972.7 MiB of 2.0 GiB VM-state, 109.2 MiB/s
2022-05-28 11:36:33 migration active, transferred 1.1 GiB of 2.0 GiB VM-state, 105.8 MiB/s
2022-05-28 11:36:34 migration active, transferred 1.2 GiB of 2.0 GiB VM-state, 165.2 MiB/s
2022-05-28 11:36:35 migration active, transferred 1.3 GiB of 2.0 GiB VM-state, 140.3 MiB/s
2022-05-28 11:36:36 migration active, transferred 1.4 GiB of 2.0 GiB VM-state, 105.8 MiB/s
2022-05-28 11:36:37 migration active, transferred 1.5 GiB of 2.0 GiB VM-state, 113.3 MiB/s
2022-05-28 11:36:38 migration active, transferred 1.6 GiB of 2.0 GiB VM-state, 121.5 MiB/s
2022-05-28 11:36:39 average migration speed: 129.1 MiB/s - downtime 31 ms
2022-05-28 11:36:39 migration status: completed
all 'mirror' jobs are ready
drive-scsi0: Completing block job_id...
drive-scsi0: Completed successfully.
drive-scsi0: mirror-job finished
2022-05-28 11:36:40 stopping NBD storage migration server on target.
2022-05-28 11:36:42 issuing guest fstrim
2022-05-28 11:36:46 ERROR: fstrim failed - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve16' root@qm guest cmd 156 fstrim' failed: exit code 255
  Logical volume "vm-156-disk-0" successfully removed
2022-05-28 11:36:54 ERROR: migration finished with problems (duration 00:07:48)
TASK ERROR: migration problems

I don't understand why this is happening.
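From what I can see, the disk mirror and the live migration both completed and only the guest fstrim over SSH failed afterwards; exit code 255 usually means ssh itself failed (could not connect or authenticate) rather than the fstrim inside the guest. As a rough sketch, that step could probably be re-run by hand from the source node like this (10.0.0.16 below is only a placeholder for pve16's real cluster IP, which is cut off in the log above):

Code:
# Does the same SSH invocation the migration task uses connect at all?
/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve16' root@10.0.0.16 /bin/true
echo $?   # 255 here again would point at SSH itself, not at fstrim

# If SSH works, try the actual guest-agent call on the target node
/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve16' root@10.0.0.16 qm guest cmd 156 fstrim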


PVE1

Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

PVE16

Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
 
Hi,

can you SSH between the two nodes without problems?

Could you also post the VM config for VM 156 (qm config 156)?
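For example, something like this from the source node (10.0.0.16 is only a placeholder for the other node's cluster IP; please also test in the opposite direction):

Code:
# non-interactive SSH test, matching what the migration task uses
ssh -o BatchMode=yes root@10.0.0.16 hostname

# VM config to post here
qm config 156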