Migration error - failed: got signal 13

aqwserf

Active Member
Nov 25, 2019
Hi!
I recently played around with PDM, and although I managed to migrate between two hosts a while ago, when I tried again today I got the same error again and again (PDM log):

Code:
...
2025-11-24 14:16:08 local WS tunnel version: 2
2025-11-24 14:16:08 remote WS tunnel version: 2
2025-11-24 14:16:08 minimum required WS tunnel version: 2
2025-11-24 14:16:08 websocket tunnel started
2025-11-24 14:16:08 starting migration of CT 1033100 to node 'proxmox-2' (192.168.33.30)
tunnel: -> sending command "bwlimit" to remote
tunnel: <- got reply
2025-11-24 14:16:08 found local volume 'local-lvm:vm-1033100-disk-0' (in current VM config)
tunnel: -> sending command "disk-import" to remote
tunnel: <- got reply
tunnel: accepted new connection on '/run/pve/1033100.storage'
tunnel: requesting WS ticket via tunnel
tunnel: established new WS for forwarding '/run/pve/1033100.storage'

119144448 bytes (119 MB, 114 MiB) copied, 1 s, 119 MB/s
236650496 bytes (237 MB, 226 MiB) copied, 2 s, 118 MB/s
354222080 bytes (354 MB, 338 MiB) copied, 3 s, 118 MB/s
tunnel: done handling forwarded connection from '/run/pve/1033100.storage'
command 'dd 'if=/dev/pve/vm-1033100-disk-0' 'bs=64k' 'status=progress'' failed: got signal 13
command 'set -o pipefail && pvesm export local-lvm:vm-1033100-disk-0 raw+size - -with-snapshots 0' failed: exit code 255

tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2025-11-24 14:16:13 disk-import:   Logical volume "vm-1033100-disk-0" created.
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2025-11-24 14:16:14 disk-import:   Logical volume pve/vm-1033100-disk-0 changed.
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2025-11-24 14:16:15 disk-import: 260+20749 records in
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2025-11-24 14:16:16 disk-import: 260+20749 records out
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2025-11-24 14:16:17 disk-import: 370540544 bytes (371 MB, 353 MiB) copied, 3.14029 s, 118 MB/s
tunnel: -> sending command "query-disk-import" to remote
tunnel: <- got reply
2025-11-24 14:16:18 ERROR: storage migration for 'local-lvm:vm-1033100-disk-0' to storage 'local-lvm' failed - command 'set -o pipefail && pvesm export local-lvm:vm-1033100-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
2025-11-24 14:16:18 aborting phase 1 - cleanup resources
2025-11-24 14:16:18 ERROR: found stale volume copy 'local-lvm:vm-1033100-disk-0' on node 'proxmox-2'
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
2025-11-24 14:16:19 start final cleanup
2025-11-24 14:16:19 ERROR: migration aborted (duration 00:00:11): storage migration for 'local-lvm:vm-1033100-disk-0' to storage 'local-lvm' failed - command 'set -o pipefail && pvesm export local-lvm:vm-1033100-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted
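
For context on the error itself: signal 13 is SIGPIPE, meaning dd was killed because the process on the other end of the pipe (here, whatever was forwarding the data through the tunnel) went away mid-transfer. A minimal reproduction of that exit status outside Proxmox, assuming a bash-like shell (nothing here is specific to my setup):

```shell
# dd streams into a pipe whose reader (head) exits after one byte;
# dd's next write then fails with SIGPIPE, so it dies with signal 13
# and the shell reports exit status 128 + 13 = 141.
set -o pipefail
dd if=/dev/zero bs=64k 2>/dev/null | head -c 1 >/dev/null
echo "pipeline status: $?"
```

So the dd failure looks like a symptom: something closed the storage tunnel, and dd got SIGPIPE as a result.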

As the two hosts were not on exactly the same version, I upgraded both of them, as well as PDM, to their latest versions, but no success either (without rebooting them, though...):
Code:
$ pveversion
pve-manager/9.1.1/42db4a6cf33dac83 (running kernel: 6.14.11-3-pve)

$ pveversion
pve-manager/9.1.1/42db4a6cf33dac83 (running kernel: 6.14.11-1-pve)

Source and destination storage are almost identical (lvmthin):
Code:
$ cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content snippets
    prune-backups keep-last=1
    shared 0

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

dir: vm
    path /mnt/vm
    content iso,backup,images,rootdir,vztmpl
    shared 0

$ cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content snippets,iso,backup,vztmpl
    shared 0

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

I managed to find a working command to migrate the storage, using a third host for reference (so pvesm export between hosts does work):
Code:
ssh host1 pvesm export local-lvm:vm-1123140-disk-0 raw+size - | ssh host2 pvesm import local-lvm:vm-1123140-disk-0 raw+size -
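
Note also that the 'set -o pipefail' prefix in the failing command is why the whole export pipeline reports a failure even when the last command in the pipe exits cleanly: with pipefail, the rightmost non-zero status wins. A tiny generic illustration (not Proxmox-specific):

```shell
# Without pipefail, a pipeline's status is that of its last command
# (here: true, status 0). With pipefail, the failing stage's status
# propagates, which is how the pvesm export pipeline surfaces the
# dd failure.
set -o pipefail
false | true
echo "pipeline status: $?"   # prints 1
```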

Is that a known issue? Or am I missing something?

Thanks!