[SOLVED] No solution found for migration error: found stale volume copy

reswob

New Member
Apr 13, 2021
3-node Proxmox cluster, version 6.3-6


Here's my error:
Code:
2021-04-26 21:14:07 starting migration of VM 104 to node 'DC-LA' (192.168.1.22)
2021-04-26 21:14:08 found local disk 'local-lvm:base-104-disk-0' (in current VM config)
2021-04-26 21:14:08 copying local disk images
2021-04-26 21:14:09 illegal name 'base-104-disk-0' - sould be 'vm-104-*'
2021-04-26 21:14:09 command 'dd 'if=/dev/pve/base-104-disk-0' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2021-04-26 21:14:10 ERROR: Failed to sync data - storage migration for 'local-lvm:base-104-disk-0' to storage 'local-lvm' failed - command 'set -o pipefail && pvesm export local-lvm:base-104-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=DC-LA' root@192.168.1.22 -- pvesm import local-lvm:base-104-disk-0 raw+size - -with-snapshots 0 -allow-rename 1' failed: exit code 255
2021-04-26 21:14:10 aborting phase 1 - cleanup resources
2021-04-26 21:14:10 ERROR: found stale volume copy 'local-lvm:base-104-disk-0' on node 'DC-LA'
2021-04-26 21:14:10 ERROR: migration aborted (duration 00:00:03): Failed to sync data - storage migration for 'local-lvm:base-104-disk-0' to storage 'local-lvm' failed - command 'set -o pipefail && pvesm export local-lvm:base-104-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=DC-LA' root@192.168.1.22 -- pvesm import local-lvm:base-104-disk-0 raw+size - -with-snapshots 0 -allow-rename 1' failed: exit code 255
TASK ERROR: migration aborted

I've looked at:

https://techblog.jeppson.org/2018/03/proxmox-vm-migration-failed-found-stale-volume-copy/
https://www.reddit.com/r/Proxmox/comments/ahdo5j/unable_to_migrate_old_vms_after_setting_up_shared/
https://forum.proxmox.com/threads/migration-fails-found-stale-volume-copy.70835/ (Translated solution: problem solved; there were still remnants of the subvolume under /rpool, and after running "rm -r /rpool/data/subvol-103-disk-0" it was possible to migrate without any problem)

and I'm looking over this: https://forum.proxmox.com/threads/how-i-can-remove-directory-entry-from-gui.50006/

But nothing is working so far.

I can't find where DC-LA thinks this VM exists.

Code:
root@DC-LA:~# qm rescan --vmid 104
rescan volumes...
Configuration file 'nodes/DC-LA/qemu-server/104.conf' does not exist


Code:
root@DC-LA:~#  pvesm list la1-lvm
Volid Format  Type      Size VMID
root@DC-LA:~#  pvesm list local
Volid Format  Type      Size VMID
root@DC-LA:~#  pvesm list local-lvm
Volid Format  Type      Size VMID
root@DC-LA:~#  pvesm list LA-pool3
Volid Format  Type      Size VMID
root@DC-LA:~#  pvesm list LA-pool4
Volid Format  Type      Size VMID
root@DC-LA:~#  pvesm list ISO2storage
Volid                                                                                        Format  Type             Size VMID
ISO2storage:100/base-100-disk-0.vmdk                                                         vmdk    images    80530636800 100
ISO2storage:101/base-101-disk-0.qcow2                                                        qcow2   images    80530636800 101
ISO2storage:iso/17763.737.190906-2324.rs5_release_svc_refresh_SERVER_EVAL_x64FRE_en-us_1.iso iso     iso        5296713728
ISO2storage:iso/CentOS-7-x86_64-Minimal-1611.iso                                             iso     iso         713031680
ISO2storage:iso/ubuntu-18.04.4-live-server-amd64.iso                                         iso     iso         912261120
ISO2storage:iso/ubuntu-20.04.2.0-desktop-amd64.iso                                           iso     iso        2877227008
ISO2storage:iso/virtio-win-0.1.190.iso                                                       iso     iso         501745664
root@DC-LA:~# pvesm path 'local-lvm:base-104-disk-0'
/dev/pve/base-104-disk-0
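
For completeness, another place one could look on DC-LA is the LVM layer itself, since local-lvm is the thin pool in the pve volume group (see storage.cfg below). A rough sketch of such a check; the lvremove line is only an example and is destructive, so it should only be run after confirming the LV really is a leftover:

Code:
# list logical volumes in the 'pve' volume group and look for VM 104 leftovers
lvs pve | grep 104
# only if a stale base-104-disk-0 LV actually shows up here:
# lvremove pve/base-104-disk-0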

Here is the 104.conf

Code:
boot: order=scsi0;net0
cores: 2
ide2: none,media=cdrom
memory: 8192
name: Splunk1
net0: virtio=22:A5:AA:72:53:A9,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:base-104-disk-0,size=75G
scsihw: virtio-scsi-pci
smbios1: uuid=b7d203fd-bb63-4274-86ba-5a39a9775b58
sockets: 4
template: 1
vmgenid: 29c555ec-8706-483f-8e1c-71d11b5e7e88

Here is my storage.cfg

Code:
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvmthin: new-lvm
        thinpool tpool1
        vgname vol1
        content images,rootdir
        nodes DC-NYC

lvmthin: la1-lvm
        thinpool tpool2
        vgname vol2
        content images,rootdir
        nodes DC-LA

cifs: ISO2storage
        path /mnt/pve/ISO2storage
        server 192.168.50.200
        share ISO-VMS
        content images,iso
        smbversion 2.0
        username proxmox

zfspool: LA-pool3
        pool LA-pool3
        content rootdir,images
        mountpoint /LA-pool3
        nodes DC-LA
        sparse 1

zfspool: LA-pool4
        pool LA-pool4
        content images,rootdir
        mountpoint /LA-pool4
        nodes DC-LA
        sparse 1

dir: usb
        path /mnt/usb
        content iso,images
        nodes DC-NYC
        prune-backups keep-all=1
        shared 0


Any suggestions? Or pointers to something I missed?
 
Hi,
Code:
2021-04-26 21:14:09 illegal name 'base-104-disk-0' - sould be 'vm-104-*'
Storage migration for base volumes (templates) is not yet implemented. As a workaround, you can make a full clone of the template, migrate the clone, and then convert it back into a template on the other node.
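For the template in this thread, that would look roughly like the following (a sketch; 204 is just an arbitrary free VMID chosen for the clone):

Code:
# on the source node: full-clone the template into a regular VM
qm clone 104 204 --full 1
# migrate the full clone offline to the other node
qm migrate 204 DC-LA
# on DC-LA: convert the clone back into a template
qm template 204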

I can't find where DC-LA thinks this VM exists.

The VM should still be on the old node, because the migration failed.
 
I have exactly the same problem. It starts generating the qcow2 image on the destination node, but fails with signal 13. Did you find a solution for performing a "non-live" migration?
 
Hi @fant,
some storage type combinations are still not possible to use for offline migration. To see if that is the issue, please share the full migration task log, the VM configuration (qm config ID, with your numerical VM ID) and the output of pveversion -v.
 
Hi @fiona ,

Thank you for your answer.

Task log:

Code:
2026-01-23 13:21:48 starting migration of VM 99998 to node 'proxmox3' (192.168.1.58)
2026-01-23 13:21:48 found local disk 'raidsystem:99998/vm-99998-disk-0.qcow2' (attached)
2026-01-23 13:21:48 copying local disk images
2026-01-23 13:21:50 Formatting '/raidsystem/images/99998/vm-99998-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=34359738368 lazy_refcounts=off refcount_bits=16
2026-01-23 13:21:53 98144256 bytes (98 MB, 94 MiB) copied, 3 s, 32.6 MB/s
2026-01-23 13:21:56 269488128 bytes (269 MB, 257 MiB) copied, 6 s, 44.9 MB/s
2026-01-23 13:21:59 439521280 bytes (440 MB, 419 MiB) copied, 9 s, 48.8 MB/s
2026-01-23 13:22:02 611061760 bytes (611 MB, 583 MiB) copied, 12 s, 50.9 MB/s
2026-01-23 13:22:05 762941440 bytes (763 MB, 728 MiB) copied, 15 s, 50.1 MB/s
2026-01-23 13:22:08 919932928 bytes (920 MB, 877 MiB) copied, 18 s, 51.1 MB/s
2026-01-23 13:22:11 1090424832 bytes (1.1 GB, 1.0 GiB) copied, 21 s, 51.9 MB/s
2026-01-23 13:22:14 1257443328 bytes (1.3 GB, 1.2 GiB) copied, 24 s, 52.4 MB/s
2026-01-23 13:22:17 1404506112 bytes (1.4 GB, 1.3 GiB) copied, 27 s, 52.0 MB/s
2026-01-23 13:22:20 1564250112 bytes (1.6 GB, 1.5 GiB) copied, 30 s, 52.1 MB/s
2026-01-23 13:22:23 1713672192 bytes (1.7 GB, 1.6 GiB) copied, 33 s, 51.9 MB/s
2026-01-23 13:22:26 1806274560 bytes (1.8 GB, 1.7 GiB) copied, 36 s, 50.2 MB/s
2026-01-23 13:22:29 1928663040 bytes (1.9 GB, 1.8 GiB) copied, 39 s, 49.4 MB/s
2026-01-23 13:22:32 2087063552 bytes (2.1 GB, 1.9 GiB) copied, 42 s, 49.7 MB/s
2026-01-23 13:22:35 2262110208 bytes (2.3 GB, 2.1 GiB) copied, 45 s, 50.3 MB/s
2026-01-23 13:22:38 2437582848 bytes (2.4 GB, 2.3 GiB) copied, 48 s, 50.8 MB/s
2026-01-23 13:22:41 2612269056 bytes (2.6 GB, 2.4 GiB) copied, 51 s, 51.2 MB/s
2026-01-23 13:22:44 2786103296 bytes (2.8 GB, 2.6 GiB) copied, 54 s, 51.6 MB/s
2026-01-23 13:22:47 2961018880 bytes (3.0 GB, 2.8 GiB) copied, 57 s, 51.9 MB/s
2026-01-23 13:22:50 3136786432 bytes (3.1 GB, 2.9 GiB) copied, 60 s, 52.3 MB/s
2026-01-23 13:23:00 3710095360 bytes (3.7 GB, 3.5 GiB) copied, 70 s, 53.0 MB/s
2026-01-23 13:23:10 4270919680 bytes (4.3 GB, 4.0 GiB) copied, 80 s, 53.4 MB/s
2026-01-23 13:23:20 4833382400 bytes (4.8 GB, 4.5 GiB) copied, 90 s, 53.7 MB/s
2026-01-23 13:23:30 5414948864 bytes (5.4 GB, 5.0 GiB) copied, 100 s, 54.1 MB/s
2026-01-23 13:23:40 5976199168 bytes (6.0 GB, 5.6 GiB) copied, 110 s, 54.3 MB/s
2026-01-23 13:23:50 6513332224 bytes (6.5 GB, 6.1 GiB) copied, 120 s, 54.3 MB/s
2026-01-23 13:24:00 7048171520 bytes (7.0 GB, 6.6 GiB) copied, 130 s, 54.2 MB/s
2026-01-23 13:24:10 7614787584 bytes (7.6 GB, 7.1 GiB) copied, 140 s, 54.4 MB/s
2026-01-23 13:24:20 8195870720 bytes (8.2 GB, 7.6 GiB) copied, 150 s, 54.6 MB/s
2026-01-23 13:24:30 8773865472 bytes (8.8 GB, 8.2 GiB) copied, 160 s, 54.8 MB/s
2026-01-23 13:24:40 9326333952 bytes (9.3 GB, 8.7 GiB) copied, 170 s, 54.9 MB/s
2026-01-23 13:24:50 9853865984 bytes (9.9 GB, 9.2 GiB) copied, 180 s, 54.7 MB/s
2026-01-23 13:25:00 10394013696 bytes (10 GB, 9.7 GiB) copied, 190 s, 54.7 MB/s
2026-01-23 13:25:10 10951659520 bytes (11 GB, 10 GiB) copied, 200 s, 54.8 MB/s
2026-01-23 13:25:20 11517169664 bytes (12 GB, 11 GiB) copied, 210 s, 54.8 MB/s
2026-01-23 13:25:30 12092542976 bytes (12 GB, 11 GiB) copied, 220 s, 55.0 MB/s
2026-01-23 13:25:40 12670111744 bytes (13 GB, 12 GiB) copied, 230 s, 55.1 MB/s
2026-01-23 13:41:38 client_loop: send disconnect: Broken pipe
2026-01-23 13:41:38 command 'dd 'if=/raidsystem/images/99998/vm-99998-disk-0.qcow2' 'bs=4k' 'status=progress'' failed: got signal 13
2026-01-23 13:41:38 ERROR: storage migration for 'raidsystem:99998/vm-99998-disk-0.qcow2' to storage 'raidsystem' failed - command 'set -o pipefail && pvesm export raidsystem:99998/vm-99998-disk-0.qcow2 qcow2+size - -with-snapshots 1 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox3' -o 'UserKnownHostsFile=/etc/pve/nodes/proxmox3/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.1.58 -- pvesm import raidsystem:99998/vm-99998-disk-0.qcow2 qcow2+size - -with-snapshots 1 -allow-rename 1' failed: exit code 255
2026-01-23 13:41:38 aborting phase 1 - cleanup resources
2026-01-23 13:41:38 ERROR: migration aborted (duration 00:19:50): storage migration for 'raidsystem:99998/vm-99998-disk-0.qcow2' to storage 'raidsystem' failed - command 'set -o pipefail && pvesm export raidsystem:99998/vm-99998-disk-0.qcow2 qcow2+size - -with-snapshots 1 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox3' -o 'UserKnownHostsFile=/etc/pve/nodes/proxmox3/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@192.168.1.58 -- pvesm import raidsystem:99998/vm-99998-disk-0.qcow2 qcow2+size - -with-snapshots 1 -allow-rename 1' failed: exit code 255
TASK ERROR: migration aborted

qm config 99998:

Code:
agent: 1
boot: order=scsi0
cores: 1
cpu: qemu64
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=10.1.2,ctime=1768552644
name: machmichkaputt
net0: virtio=BC:24:11:15:8A:DD,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: raidsystem:99998/vm-99998-disk-0.qcow2,format=qcow2,iothread=1,size=32G
scsi1: iscsi-lvm1:vm-99998-disk-0.qcow2,backup=0,iothread=1,size=5G
scsihw: virtio-scsi-single
smbios1: uuid=717b62a1-61ac-41d4-8812-a37f78fa493f
sockets: 1
vmgenid: e6f1175f-76a5-4824-9a58-81bf02c4257d

pveversion -v:

Code:
proxmox-ve: 9.1.0 (running kernel: 6.17.4-1-pve)
pve-manager: 9.1.4 (running version: 9.1.4/5ac30304265fbd8e)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.17.4-1-pve-signed: 6.17.4-1
proxmox-kernel-6.17: 6.17.4-1
proxmox-kernel-6.8: 6.8.12-17
proxmox-kernel-6.8.12-17-pve-signed: 6.8.12-17
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx11
intel-microcode: 3.20250812.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.5
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.0.7
libpve-cluster-perl: 9.0.7
libpve-common-perl: 9.1.3
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.2.4
libpve-rs-perl: 0.11.4
libpve-storage-perl: 9.1.0
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-3
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.1.0-1
proxmox-backup-file-restore: 4.1.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.1
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.3
proxmox-widget-toolkit: 5.1.5
pve-cluster: 9.0.7
pve-container: 6.0.18
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.17-2
pve-ha-manager: 5.1.0
pve-i18n: 3.6.6
pve-qemu-kvm: 10.1.2-5
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.3
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve3
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1

Note: the path /raidsystem is an ext4 partition mount point that exists locally on each node, on a hard disk or hardware RAID.
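
For reference, such a per-node directory is normally just a dir storage entry in storage.cfg; a guess at what the raidsystem definition might look like (not the actual configuration from this setup):

Code:
dir: raidsystem
        path /raidsystem
        content images
        shared 0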

Best,
Fant
 