Migration failed: 2 different errors

Northern_War

Member
Aug 28, 2020
Hello everybody.
I need help with two errors.
I have two different VMs on one cluster node, and I want to migrate them to other nodes (they can be two different nodes).
But I get the following errors.
The first VM has 2 disks (ZFS).
1)
Code:
drive-scsi1: transferred: 34224472064 bytes remaining: 30204100608 bytes total: 64428572672 bytes progression: 53.12 % busy: 1 ready: 0
drive-scsi1: transferred: 34325135360 bytes remaining: 30103437312 bytes total: 64428572672 bytes progression: 53.28 % busy: 1 ready: 0
drive-scsi1: Cancelling block job
drive-scsi1: Done.
2020-08-28 15:54:43 ERROR: online migrate failure - mirroring error: drive-scsi1: mirroring has been cancelled
2020-08-28 15:54:43 aborting phase 2 - cleanup resources
2020-08-28 15:54:43 migrate_cancel
2020-08-28 15:54:59 ERROR: migration finished with problems (duration 00:06:04)
TASK ERROR: migration problems
I have tried to migrate this VM several times to different nodes, but it always fails. I can see a ZFS volume with the VM name on the target node, but the migration still aborts. In addition, the following appears in the syslog:
px-node-1 pvedaemon[6856]: VM 146 qmp command failed - VM 146 qmp command 'block-job-cancel' failed - Block job 'drive-scsi1' not found

2) The second VM was offline.
Code:
2020-08-28 15:25:48 found local disk 'data_1:vm-109-disk-0' (in current VM config)
2020-08-28 15:25:48 copying local disk images
full send of data_1/vm-109-disk-0@__migration__ estimated size is 5.14G
total estimated size is 5.14G
TIME        SENT   SNAPSHOT data_1/vm-109-disk-0@__migration__
15:25:50   2.10M   data_1/vm-109-disk-0@__migration__
15:25:51   2.10M   data_1/vm-109-disk-0@__migration__
...
15:27:16   5.12G   data_1/vm-109-disk-0@__migration__
command 'zfs destroy data_1/vm-109-disk-0@__migration__' failed: got timeout
send/receive failed, cleaning up snapshot(s)..
2020-08-28 15:28:09 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export data_1:vm-109-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox-node-4' root@10.1.1.4 -- pvesm import data_1:vm-109-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 4
2020-08-28 15:28:09 aborting phase 1 - cleanup resources
2020-08-28 15:28:09 ERROR: found stale volume copy 'data_1:vm-109-disk-0' on node 'proxmox-node-4'
2020-08-28 15:28:09 ERROR: migration aborted (duration 00:02:21): Failed to sync data - command 'set -o pipefail && pvesm export data_1:vm-109-disk-0 zfs - -with-snapshots 0 -snapshot __migration__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox-node-4' root@10.1.1.4 -- pvesm import data_1:vm-109-disk-0 zfs - -with-snapshots 0 -delete-snapshot __migration__' failed: exit code 4
TASK ERROR: migration aborted
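(Note: before retrying, the leftover __migration__ snapshot and the stale volume copy on the target node probably need to be removed manually; a rough cleanup sketch using the dataset names from the log above, assuming the disk is not in use on the target:)

Bash:
# on the target node (proxmox-node-4): list leftovers for VM 109
zfs list -t all | grep vm-109-disk-0
# remove the stale migration snapshot, if it is still there
zfs destroy data_1/vm-109-disk-0@__migration__
# remove the stale volume copy reported by the migration task
zfs destroy data_1/vm-109-disk-0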

Source node
Code:
root@proxmox-node-1:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-3
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2

Target node:
Code:
root@proxmox-node-4:~# pveversion  -v
proxmox-ve: 6.2-1 (running kernel: 5.4.55-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-5
pve-kernel-helper: 6.2-5
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-2
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-10
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-2
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-12
pve-xtermjs: 4.7.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
 
hi,

can you update all your nodes to the latest version with apt update && apt dist-upgrade, and check the pveversion -v output on all of them to see if they match? then please try again
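for example, a quick way to compare the versions on all nodes from one shell could look roughly like this (the node names are just examples for this cluster):

Bash:
# compare the first lines of pveversion -v across the nodes
for node in proxmox-node-1 proxmox-node-2 proxmox-node-3 proxmox-node-4; do
    echo "== $node =="
    ssh -o BatchMode=yes root@$node 'pveversion -v | head -n 2'
done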
 
Hi, sorry for the late response.
My main task is to update the entire Proxmox cluster.
For this purpose, I tried to move the VMs to other cluster nodes, but not all of them migrated successfully.
The first VM from my previous post only migrated successfully when it was turned off.
I tried to figure out what was happening at the time of the migration and ended up at the 'qm monitor' command; that is where the problem showed up. After about 53% of the migration there was no migration task left, only this error:
px-node-1 pvedaemon[6856]: VM 146 qmp command failed - VM 146 qmp command 'block-job-cancel' failed - Block job 'drive-scsi1' not found
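For reference, the block jobs and migration state can be watched from the monitor while the migration is running, roughly like this (VM 146 as in the log above):

Bash:
# open the QEMU monitor for VM 146 on the source node
qm monitor 146
# then, at the qm> prompt:
#   info block-jobs   - lists the running drive-mirror jobs (e.g. drive-scsi1)
#   info migrate      - shows the RAM migration status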
 
can you test if you can ssh between the nodes? it should work without any interaction

how big are the VMs that you're trying to migrate? could you post the configurations from qm config VMID
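a non-interactive SSH check between the nodes could look roughly like this (host alias and IP taken from the failing command in the first post); it must complete without asking for a password or host key:

Bash:
# run on the source node
/usr/bin/ssh -e none -o BatchMode=yes -o HostKeyAlias=proxmox-node-4 root@10.1.1.4 /bin/true && echo OK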
 
Bash:
# qm config 146
balloon: 2048
bootdisk: scsi0
cores: 4
ide2: none,media=cdrom
memory: 4096
name: some-vm-on-proxmox.my.zone
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0,firewall=1,tag=36
numa: 0
onboot: 1
ostype: l26
scsi0: data_2:vm-146-disk-0,size=5G
scsi1: data_2:vm-146-disk-1,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=8ebff6ef-c36d-4e69-ba03-847025427639
sockets: 1
vmgenid: d7672a52-a429-4a7e-97dc-e2dd63911e22

I have had this problem several times with different target nodes.
I don't have any problems with connections between the cluster nodes.
I have already migrated this VM by shutting it down first.
 
okay, the config looks pretty normal, the disks are on data_2. what is that storage backend? could you post the contents of /etc/pve/storage.cfg?

i'm asking because of this error here:
Code:
px-node-1 pvedaemon[6856]: VM 146 qmp command failed - VM 146 qmp command 'block-job-cancel' failed - Block job 'drive-scsi1' not found

since scsi1 isn't being found during migration
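to double-check that the volume behind scsi1 actually exists on the source node, something like this could be run there (dataset name taken from the VM config above):

Bash:
# the zvol backing scsi1 should show up both in zfs and under /dev/zvol
zfs list -o name,volsize,used data_2/vm-146-disk-1
ls -l /dev/zvol/data_2/vm-146-disk-1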
 
Bash:
dir: local
        path /var/lib/vz
        content rootdir,iso,images,snippets
        maxfiles 0
        shared 0

zfspool: data_1
        pool data_1
        content rootdir,images
        sparse 0

zfspool: data_2
        pool data_2
        content images,rootdir
        sparse 0

nfs: ds10
        export /vol/proxmox
        path /mnt/pve/ds10
        server 10.1.10.35
        content backup,iso
        maxfiles 1
        options vers=3

nfs: Backups
        export /BackupData
        path /mnt/pve/Backups
        server backup-01.my.zone
        content backup,vztmpl
        maxfiles 3
        options vers=3

"
since scsi1 isn't being found during migration
Live migration started fine, but after a while (around 50% of migration) this error was received.
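For reference, the same live migration can also be started from the CLI, roughly like this (target node name is just the one used earlier in this thread):

Bash:
# live-migrate VM 146 together with its local ZFS disks
qm migrate 146 proxmox-node-4 --online --with-local-disks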
 
