[SOLVED] All VM disks erased after a failed VM migration task

eps1l

New Member
Mar 1, 2021
Good day!
I wanted to migrate a running VM between two Proxmox nodes (10.105.2.29 -> 10.105.2.21) that are joined in a cluster.
But the migration task failed ("VM 101 is not running") AND all of the VM's disks were erased.
Bash:
10.105.2.29
ls -la /mnt/storage1/images/101
total 8
drwxr----- 2 root root 4096 Feb 28 18:30 .
drwxr-xr-x 4 root root 4096 Mar  1 10:30 ..

10.105.2.21
ls -la /mnt/storage1/images
total 48
drwxr-xr-x 12 root root 4096 Mar  1 03:39 .
drwxr-xr-x  8 root root 4096 Feb 28 20:44 ..
drwxr-----  2 root root 4096 Feb 28 20:44 130
drwxr-xr-x  2 root root 4096 Feb 28 20:54 131
drwxr-----  2 root root 4096 Feb 28 20:45 132
drwxr-----  2 root root 4096 Feb 28 20:45 133
drwxr-----  2 root root 4096 Feb 28 20:45 134
drwxr-----  2 root root 4096 Feb 28 21:11 390
drwxr-----  2 root root 4096 Feb 28 22:38 397
drwxr-----  2 root root 4096 Feb 28 22:38 499
drwxr-----  2 root root 4096 Feb 28 21:40 530
drwxr-----  2 root root 4096 Feb 28 21:11 811
Result: a locked, stopped VM without any disks on the old node AND no VM at all on the new node.
Setting my frustration aside, I have 2 questions:
1) Why did the VM fail to migrate?
2) Why does Proxmox erase all the disks BEFORE getting a success confirmation from the main migration task?
This is a serious problem, you know. For now I'm scared to migrate any VM or LXC, because without a backup I could end up with the VM erased by Proxmox.

The full migration task log is in the attachments (migration.zip). A short excerpt:
2021-03-01 00:20:26 starting migration of VM 101 to node 'gpsrv-06' (10.105.2.21)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-0.qcow2' (in current VM config)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-1.qcow2' (in current VM config)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-2.qcow2' (in current VM config)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-3.qcow2' (in current VM config)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-4.qcow2' (in current VM config)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-5.qcow2' (in current VM config)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-6.qcow2' (in current VM config)
2021-03-01 00:20:26 found local disk 'storage1:101/vm-101-disk-7.qcow2' (in current VM config)
2021-03-01 00:20:26 copying local disk images
2021-03-01 00:20:26 starting VM 101 on remote node 'gpsrv-06'
2021-03-01 00:20:43 start remote tunnel
2021-03-01 00:20:44 ssh tunnel ver 1
2021-03-01 00:20:44 starting storage migration
2021-03-01 00:20:44 scsi1: start migration to nbd:unix:/run/qemu-server/101_nbd.migrate:exportname=drive-scsi1
drive mirror is starting for drive-scsi1
drive-scsi1: transferred: 0 bytes remaining: 214748364800 bytes total: 214748364800 bytes progression: 0.00 % busy: 1 ready: 0
drive-scsi1: transferred: 132120576 bytes remaining: 214616244224 bytes total: 214748364800 bytes progression: 0.06 % busy: 1 ready: 0
...
2021-03-01 03:38:22 migration xbzrle cachesize: 2147483648 transferred 64336101 pages 144111 cachemiss 958002 overflow 1722
2021-03-01 03:38:23 migration speed: 1.38 MB/s - downtime 93 ms
2021-03-01 03:38:23 migration status: completed
drive-scsi5: transferred: 214778576896 bytes remaining: 0 bytes total: 214778576896 bytes progression: 100.00 % busy: 0 ready: 1
drive-scsi6: transferred: 2199023255552 bytes remaining: 0 bytes total: 2199023255552 bytes progression: 100.00 % busy: 0 ready: 1
drive-scsi3: transferred: 107374182400 bytes remaining: 0 bytes total: 107374182400 bytes progression: 100.00 % busy: 0 ready: 1
drive-scsi7: transferred: 107522359296 bytes remaining: 0 bytes total: 107522359296 bytes progression: 100.00 % busy: 0 ready: 1
drive-scsi0: transferred: 36333748224 bytes remaining: 0 bytes total: 36333748224 bytes progression: 100.00 % busy: 0 ready: 1
drive-scsi4: transferred: 536870912000 bytes remaining: 0 bytes total: 536870912000 bytes progression: 100.00 % busy: 0 ready: 1
drive-scsi2: transferred: 214748364800 bytes remaining: 0 bytes total: 214748364800 bytes progression: 100.00 % busy: 0 ready: 1
drive-scsi1: transferred: 214818029568 bytes remaining: 0 bytes total: 214818029568 bytes progression: 100.00 % busy: 0 ready: 1
all mirroring jobs are ready
drive-scsi5: Completing block job...
drive-scsi5: Completed successfully.
drive-scsi6: Completing block job...
drive-scsi6: Completed successfully.
drive-scsi3: Completing block job...
drive-scsi3: Completed successfully.
drive-scsi7: Completing block job...
drive-scsi7: Completed successfully.
drive-scsi0: Completing block job...
drive-scsi0: Completed successfully.
drive-scsi4: Completing block job...
drive-scsi4: Completed successfully.
drive-scsi2: Completing block job...
drive-scsi2: Completed successfully.
drive-scsi1: Completing block job...
drive-scsi1: Completed successfully.
drive-scsi5: Cancelling block job
drive-scsi6: Cancelling block job
drive-scsi3: Cancelling block job
drive-scsi7: Cancelling block job
drive-scsi0: Cancelling block job
drive-scsi4: Cancelling block job
drive-scsi2: Cancelling block job
drive-scsi1: Cancelling block job
drive-scsi5: Cancelling block job
drive-scsi6: Cancelling block job
drive-scsi3: Cancelling block job
drive-scsi7: Cancelling block job
drive-scsi0: Cancelling block job
drive-scsi4: Cancelling block job
drive-scsi2: Cancelling block job
drive-scsi1: Cancelling block job
2021-03-01 03:39:38 ERROR: Failed to complete storage migration: mirroring error: VM 101 not running
2021-03-01 03:39:38 ERROR: migration finished with problems (duration 03:19:12)
TASK ERROR: migration problems
Some general info:
Bash:
NODE 1 "VM migrate from this"

pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-3
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-3
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-1
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

Bash:
NODE 2 "VM migrate to this"

proxmox-ve: 6.3-1 (running kernel: 5.4.98-1-pve)
pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
pve-kernel-5.4: 6.3-5
pve-kernel-helper: 6.3-5
pve-kernel-5.4.98-1-pve: 5.4.98-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-2
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve1

Bash:
pvecm status

Cluster information
-------------------
Name:             GpsrvCluster
Config Version:   9
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Mar  1 11:41:51 2021
Quorum provider:  corosync_votequorum
Nodes:            7
Node ID:          0x00000003
Ring ID:          1.83be
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   7
Highest expected: 7
Total votes:      7
Quorum:           4
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.105.2.27
0x00000002          1 192.168.240.82
0x00000003          1 10.105.2.29 (local)
0x00000004          1 10.105.2.23
0x00000005          1 10.105.2.11
0x00000006          1 10.105.2.13
0x00000007          1 10.105.2.21

Bash:
qm config 101
bootdisk: scsi0
cores: 12
description: %D0%9D%D0%B0%D0%B7%D0%BD%D0%B0%D1%87%D0%B5%D0%BD%D0%B8%D0%B5 %D0%92%D0%9C%3A %D0%9E%D1%81%D0%BD%D0%BE%D0%B2%D0%BD%D0%BE%D0%B9 GitLab%0A%D0%9F%D1%80%D0%BE%D0%B5%D0%BA%D1%82%3A Any%0A%D0%9E%D1%82%D0%B2%D0%B5%D1%82%D1%81%D1%82%D0%B2%D0%B5%D0%BD%D0%BD%D1%8B%D0%B9%3A %D0%9F%D0%B0%D0%B2%D0%BB%D0%BE%D0%B2 %D0%90%D0%BD%D0%B4%D1%80%D0%B5%D0%B9, %D0%A1%D0%BE%D0%BB%D0%B4%D0%B0%D1%82%D0%BE%D0%B2 %D0%94%D0%BC%D0%B8%D1%82%D1%80%D0%B8%D0%B9%0A%0AIP%3A 10.105.2.150%0A%D0%94%D0%BE%D1%81%D1%82%D1%83%D0%BF ssh%3A%0A1) %D1%81 10.105.2.98 %D0%BF%D0%BE ssh-%D0%BA%D0%BB%D1%8E%D1%87%D1%83
ide2: none,media=cdrom
memory: 16384
name: GitLab
net0: virtio=E6:6E:D4:05:44:53,bridge=vmbr010
net1: virtio=9E:D8:70:F2:B9:89,bridge=vmbr010
numa: 0
onboot: 1
ostype: l26
scsi0: storage1:101/vm-101-disk-0.qcow2,size=32G
scsi1: storage1:101/vm-101-disk-1.qcow2,size=200G
scsi2: storage1:101/vm-101-disk-2.qcow2,size=200G
scsi3: storage1:101/vm-101-disk-3.qcow2,size=100G
scsi4: storage1:101/vm-101-disk-4.qcow2,size=500G
scsi5: storage1:101/vm-101-disk-5.qcow2,size=200G
scsi6: storage1:101/vm-101-disk-6.qcow2,size=2T
scsi7: storage1:101/vm-101-disk-7.qcow2,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=020bf5e1-1b93-49cc-9a06-44e154cdecac
sockets: 1
 

Attachments

  • migration.zip (257.2 KB)
Please provide your storage config (/etc/pve/storage.cfg).
 
/etc/pve/storage.cfg
Bash:
dir: local
        path /var/lib/vz
        content iso
        prune-backups keep-all=1
        shared 0

dir: storage1
        path /mnt/storage1
        content snippets,vztmpl,rootdir,images,backup
        nodes gpsrv-01,gpsrv-82,gpsrv-02,gpsrv-db,gpsrv-hdd1,gpsrv-ssd,gpsrv-06
        prune-backups keep-all=1
        shared 0

dir: storage2
        path /mnt/storage2
        content iso,backup,images,rootdir,vztmpl,snippets
        nodes gpsrv-hdd1
        prune-backups keep-all=1
        shared 0

pbs: ProxmoxBackupServer
        datastore OnSchedule
        server gpsrv-backup
        content backup
        fingerprint 38:6e:5d:09:a2:23:0e:bc:3c:0c:18:5d:92:39:b1:50:05:d9:92:dd:c5:48:5a:4d:2b:7c:cb:65:5f:47:f0:01
        prune-backups keep-all=1
        username gpsrv-backuper@pbs

dir: storage3
        path /mnt/storage3
        content images,rootdir,iso,backup,snippets,vztmpl
        nodes gpsrv-hdd1
        prune-backups keep-all=1
        shared 0

Also, another migrated VM had the same problem, but without erased disks. Log attached (migration_task_2.txt).
 

Attachments

  • migration_task_2.txt (31.3 KB)
Is storage1 by any chance a network storage? If so, which protocol (NFS, CIFS)?
 
Storage1 is just a directory on all nodes.
Bash:
mount
/dev/sda2 on /boot type ext4 (rw,relatime,stripe=1024)
/dev/sda3 on /mnt/storage1 type ext4 (rw,relatime,stripe=256)
/dev/sda6 on / type ext4 (rw,relatime,errors=remount-ro,stripe=256)

But this directory is indeed shared via NFS to other nodes outside the cluster (I back up all VMs from a node outside the cluster to this one and restore them; after that I reinstall the outside node and add it to the cluster):
"/mnt/storage1 10.105.2.0/255.255.254.0(rw,no_root_squash,sync,no_subtree_check,insecure)"

upd: In case 2 (migration_task_2.txt), when the VM tried to migrate from gpsrv-01 to gpsrv-06, there were no network storages on either node.
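For completeness, the export on the cluster node in case 1 can be double-checked like this (just a sketch, assuming the standard NFS utilities are installed on both sides):
Bash:
# on the NFS server (the cluster node 10.105.2.29)
exportfs -v
# from any client node, list what 10.105.2.29 actually exports
showmount -e 10.105.2.29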
 
Please provide the VM config of VM 131 (qm config 131) and the CPU used on both the source and the target node.
 
Bash:
qm config 131
balloon: 0
boot: cdn
bootdisk: scsi0
cores: 4
description: 
ide2: none,media=cdrom
memory: 8192
name: k8s-master01
net0: virtio=1E:75:99:28:97:0C,bridge=vmbr010,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: storage1:131/vm-131-disk-0.qcow2,format=qcow2,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=0c3b8cac-f29a-4352-a0e1-6c88d46ddc5b
sockets: 2

Case 1 (erased disks):
source node: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
target node: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz

Case 2 (only failed migration):
source node: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
target node: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
 
Is there anything in the guest's logs hinting at an error or why it has shut down?
Please also provide the syslog of the hosts (source and target) for the whole migration time frame of the 2nd one.
 
I could not find any reason for the failure in the guest logs of VM 101. Does "guest" mean the logs inside the particular VM (101), or some systemd PVE services / something else?

logs.zip:

UPID:gpsrv-hdd1:0000A117:04159A02:603B37BD:qmrestore:101:root@pam: - restore from backup
UPID:gpsrv-hdd1:00001EAB:04260281:603B61BF:qmstart:101:root@pam: - start after restore
UPID:gpsrv-hdd1:0000DFAB:045C659C:603BECFA:qmigrate:101:root@pam: - night migration
UPID:gpsrv-hdd1:0000897E:04931DB6:603C790F:qmigrate:101:root@pam: - morning migration attempt

Case 1 (erased disks):
syslog_from_case1.txt
syslog_to_case1.txt

Case 2 (only failed migration):
syslog_from_case2.txt
syslog_to_case2.txt

p.s. I updated message #5 - the NFS export line there was wrong; I rewrote it from the actual settings in /etc/exports.
 

Attachments

  • logs.zip (265.4 KB)
Thank you for the log, looks like the following is the problem:
Code:
Mar  1 03:38:23 gpsrv-hdd1 QEMU[7868]: Unexpected error in raw_reconfigure_getfd() at block/file-posix.c:1043:
Mar  1 03:38:23 gpsrv-hdd1 QEMU[7868]: kvm: Could not reopen file: No such file or directory
and
Code:
Mar  1 03:38:23 gpsrv-06 QEMU[62487]: kvm: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes

Can you try updating to pve-qemu-kvm 5.2.0-2 from pve-no-subscription? Afterwards you need to restart the VM for the new Qemu version to be used.
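For reference, the update would look roughly like this - just a sketch, assuming PVE 6.x on Debian Buster and the pve-no-subscription repository (adjust the repository line and the VM ID to your setup):
Bash:
# add the pve-no-subscription repository (PVE 6.x / Buster)
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
# pull in the newer QEMU build (5.2.0-2 or later)
apt install pve-qemu-kvm
# restart the guest so the new QEMU binary is actually used
qm shutdown <vmid> && qm start <vmid>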
 
I'll try it with the next VMs, since the current VM doesn't have any disks anymore.
So, to be clear:
1) The error occurred because of:
a) a bug in pve-qemu-kvm 5.1.0-8?
b) the pve-qemu-kvm version difference between the two nodes?
c) something else?

2) Is deleting the disks on both nodes because of a migration problem normal behavior?
 
No, it is not. It looks like something else deleted the files.
Does this only happen on the node where the storage is also shared via NFS?
 
This was the first time I got a problem like this. As I wrote before, there was a case where the migration also failed with similar errors but without deleting the disks, and no NFS/CIFS storages were involved there.
Maybe I have some hardware problem, since the destination node is the same in both cases. But all IPMI/iLO checks pass, monitoring is OK, and I don't have any other problems with the hypervisor.
 
Hi,
this is a rather nasty bug that's currently in our migration code, and it can only happen if the migration fails at a very specific point.
EDIT: and only for disks that were not mirrored - but in your case all disks were mirrored, so it's actually not the issue I was thinking about.
So far, yours is the only report I'm aware of. A patch to fix it already exists (as part of a larger refactoring), but it hasn't been applied yet. It should find its way into a future release.
 
But this directory is indeed shared via NFS to other nodes outside the cluster (I back up all VMs from a node outside the cluster to this one and restore them; after that I reinstall the outside node and add it to the cluster):
"/mnt/storage1 10.105.2.0/255.255.254.0(rw,no_root_squash,sync,no_subtree_check,insecure)"

Was there ever a VM 101 you deleted on any of the external nodes (while the VM 101 within the cluster was present)? If so, could you provide the pveversion -v and /etc/pve/storage.cfg for that node?
 
If you mean nodes outside the cluster that connect via NFS to the cluster node hosting VM 101 - yes, a few nodes actually.
I mean:
outside node-1: VM 101 xxx
outside node-2: VM 101 yyy
cluster node: VM 101 zzz

But I have reinstalled the OS on them and can't give you any logs from those nodes.
The config and pveversion were the same on all of the old outside nodes, though.

Bash:
pveversion -v
proxmox-ve: 5.1-38 (running kernel: 4.13.13-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.13.13-5-pve: 4.13.13-38
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-20
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-6
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: not correctly installed
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1

Bash:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso
        maxfiles 0
        shared 0


dir: storage1
        path /mnt/storage1
        content backup,images,rootdir
        maxfiles 10
        shared 1


dir: storage2
        path /mnt/storage2
        content rootdir,backup,images
        maxfiles 1
        shared 1


dir: storage3
        path /mnt/storage3
        content rootdir,images,backup
        maxfiles 1
        shared 1


nfs: NFS_to_gpsrvHDD1
        export /mnt/storage1/
        path /mnt/pve/NFS_to_gpsrvHDD1
        server 10.105.2.29
        content backup # I set backup role only.
        maxfiles 4
        nodes gpsrv-05
        options vers=3
 
If you destroy a VM 101 on the stand-alone node, the NFS storage will be scanned for orphaned disks of VM 101. Since the stand-alone node doesn't know the storage is in use by a cluster, it thinks those disks belong to the VM it's supposed to destroy right now, and deletes them.

It's been like this for a long time and it is bad design from a user perspective. That's why removing such unreferenced disks by default will not happen anymore with PVE 7.0 (it's a breaking API change, so we sadly have to wait for a major release to change it). But scanning the storage even when its content type is only 'backup' can be considered a bug. I'll prepare a fix for that.

That fix won't reach PVE 5.x, though, because it has been EOL since July 2020.
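To make the failure mode concrete, here is a sketch (hypothetical commands, not an official workflow) of how to compare what a node attributes to a VMID on a directory storage with what its VM config actually references, before destroying a guest:
Bash:
# volumes the local node attributes to VMID 101 on that storage
pvesm list storage1 --vmid 101
# disks actually referenced by the local VM 101 config
qm config 101 | grep -E '^(scsi|virtio|ide|sata|efidisk|unused)'
# any volume in the first list that is missing from the second would have been
# treated as an orphan and removed on 'qm destroy 101' (the pre-7.0 default)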
 
Thank you for the explanation. But maybe it was not the problem you described.

I mean, the migration task of VM 101 started at night, and throughout the night no one destroyed any VM on any node.
I did destroy another VM 101 on an outside node, but earlier during the day. Could that somehow have affected the later migration?
p.s. A few days ago I think I caught the bug you described :D Again my poor VM 101 suddenly lost all of its disks. This time all of the disks were still on the file system as deleted files (lsof | grep deleted), so I copied them from the /proc/PID/fd/ entries back to qcow2 files (roughly as in the sketch below).
p.p.s. Perhaps the disks of VM 101 had also been removed by the time of the night migration, and that is why the migration task broke.
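What that recovery looked like, roughly (a hypothetical example; the PID file path, the fd number and the image name are just placeholders for my setup):
Bash:
# find the still-running KVM process that holds the deleted image open
VMPID=$(cat /var/run/qemu-server/101.pid)
lsof -p "$VMPID" | grep deleted
# suppose the output shows fd 35 pointing at the deleted vm-101-disk-0.qcow2;
# the data can be copied back out of /proc while the process keeps running
cp /proc/$VMPID/fd/35 /mnt/storage1/images/101/vm-101-disk-0.qcow2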
 
I'd suggest using a separate storage (maybe create a subdir and export that) where you store only the backups and not reference the storage that's used for the cluster's VM disks anymore. Then the external PVE won't wrongly find the disks that belong to cluster VMs anymore.

Or you can try and backport the patch if you have any experience with such things. Note that it hasn't been reviewed yet.
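A minimal sketch of that separation (the paths, storage name and subdirectory are made-up examples; the export options are taken from the /etc/exports line posted earlier):
Code:
# on the cluster node: export only a dedicated backup subdirectory
# /etc/exports
/mnt/storage1/backups  10.105.2.0/255.255.254.0(rw,no_root_squash,sync,no_subtree_check,insecure)

# on the external node: /etc/pve/storage.cfg entry holding backups only
nfs: NFS_backups_only
        export /mnt/storage1/backups
        path /mnt/pve/NFS_backups_only
        server 10.105.2.29
        content backup
        options vers=3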
 
Thank you. I already did the same thing - placed the NFS backup folder on the same storage but under a different path.
For now I'll just keep this bug in mind and wait for updates, since almost all VMs have been moved to the new cluster.
 
