Proxmox 6.2-4 Live migration

mfa2004

Prior to upgrading, live migration using a Ceph storage backend worked like a charm. I recently upgraded to Proxmox 6.2-4, and practically everything is still working fine .... except for live migration, which now fails with the following error:

2020-05-15 12:07:17 ERROR: Failed to sync data - rbd error: rbd: listing images failed: (2) No such file or directory
2020-05-15 12:07:17 aborting phase 1 - cleanup resources
2020-05-15 12:07:17 ERROR: migration aborted (duration 00:00:09): Failed to sync data - rbd error: rbd: listing images failed: (2) No such file or directory
TASK ERROR: migration aborted

Frankly, I am at a loss as to what to check to figure out where the problem may lie. I hope someone can help. Thanks a bunch in advance!
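In case it helps with reproducing the problem, the same migration can also be triggered from the CLI on the source node; a minimal example (VMID 100 and target node pve2 are just placeholders):

# live-migrate guest 100 to node pve2 while it keeps running
qm migrate 100 pve2 --online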

Regards,

Mario
 
What's the output of 'pveversion -v' on the source and target nodes? From which version did you upgrade?

Do 'pvesm status' and 'pvesm list STORAGE' work on both nodes for each STORAGE that uses Ceph?
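For example, something along these lines on both the source and the target node (STORAGE stands for each Ceph-backed storage ID defined in /etc/pve/storage.cfg):

# package versions on this node
pveversion -v
# status of all configured storages
pvesm status
# list the contents of one storage, once per Ceph-backed storage ID
pvesm list STORAGE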
 
pveversion -v (source node):

proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

pveversion -v (destination node):
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.34-1-pve: 5.4.34-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

Both nodes were upgraded from Proxmox 6.1-6.

Output of pvesm status for both nodes:

Name        Type      Status          Total           Used      Available        %
images_ct   rbd       active   138709790761    43228991529    95480799232   31.17%
images_vm   rbd       active   138709790761    43228991529    95480799232   31.17%
local       dir       active       56762512       16284416       37565024   28.69%
local-lvm   lvmthin   active      146685952              0      146685952    0.00%
 
And 'pvesm list images_vm'? How does /etc/pve/storage.cfg look?
 
Unfortunately, 'pvesm list images_vm' gives an error:

rbd error: rbd: listing images failed: (2) No such file or directory

/etc/pve/storage.cfg :

dir: local
        path /var/lib/vz
        content vztmpl,backup,images,iso
        maxfiles 2
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: images_vm
        content images
        krbd 0
        pool images

rbd: images_ct
        content rootdir
        krbd 1
        pool images

Please note that the VMs on both nodes can still be managed and keep running normally ....

Thanks
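For reference, the pool can also be checked directly with the Ceph CLI, bypassing pvesm, which may help narrow things down (a minimal sketch; it assumes the pool name 'images' from the storage.cfg above and the default admin keyring of a hyperconverged setup):

# overall cluster health
ceph -s
# confirm that the pool 'images' actually exists
ceph osd lspools
# short listing: image names only
rbd ls -p images
# long listing: opens every image header, so a broken or half-removed image
# can fail here with 'No such file or directory' even when the short listing works
rbd ls -l -p images

If the long listing fails with the same '(2) No such file or directory' while the short one works, the cause is more likely a damaged or partially removed image in the pool than the storage configuration itself (as far as I know, the PVE storage layer does a long listing internally, which would also explain why pvesm fails while the VMs themselves keep running).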
 
