Moving disk from Ceph to Local hangs

Hi,

I'm trying to move some VM disks back from Ceph to local storage (same node).
That's done with the Move Disk option, right?

When I move the disk, it gets created on local storage, but the process hangs at 100%, and the VM keeps using the one from Ceph.
I cannot delete the disks created on local storage: "a vm with vmid exists".

Is there something wrong?

Kernel Version: Linux 5.15.39-3-pve #2 SMP PVE 5.15.39-3 (Wed, 27 Jul 2022 13:45:39 +0200)
PVE Manager Version: pve-manager/7.2-7/d0dd0e85
 
Hi,
please post the output of pveversion -v and qm config <ID>, as well as the complete task log for the Move Disk operation. Are there any errors in /var/log/syslog or in ceph -w during the operation?
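
For reference, gathering that information would look something like this (100 is a placeholder VMID; ceph -w and the syslog are best watched in a second shell while repeating the move):

Code:
# pveversion -v
# qm config 100
# ceph -w
# tail -f /var/log/syslog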

To get rid of the left-over disk, you can use qm rescan --vmid <ID> on the CLI and then remove the unused disk from the VM's Hardware tab in the UI.
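
With a hypothetical VMID of 100, the cleanup would look roughly like this; after the rescan the orphaned volume shows up as an unused entry in the config (the exact volume name below is just an example):

Code:
# qm rescan --vmid 100
# qm config 100 | grep unused
unused0: local:100/vm-100-disk-0.qcow2

Instead of the Hardware tab, qm set 100 --delete unused0 should also work to remove the entry on the CLI.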
 
Hi,

I had actually already updated the packages; Ceph was still on the old 16.x when this happened.

Code:
# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.39-3-pve)
pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85)
pve-kernel-5.15: 7.2-9
pve-kernel-helper: 7.2-9
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph: 17.2.1-pve1
ceph-fuse: 17.2.1-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.2.5-1
proxmox-backup-file-restore: 2.2.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-5
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1

I actually solved it: the task just keeps logging "transferred 100%" for a couple of minutes before it starts deleting the old disk.
I also properly removed the ghost disks.
Unfortunately, I no longer have the old logs at hand...

Thank you very much
 
Glad to hear :) So it likely just needed time to flush everything and finish the operation. Out of curiosity: was the VM running during the move? How large is the disk?
 
Do you have a cache setting active for the disk? I tried to reproduce the issue here, and an active cache setting causes high IO wait for me.
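
For reference, the cache mode (if any) shows up in the drive line of the VM config, e.g. for a SCSI disk (placeholder VMID 100 and storage name):

Code:
# qm config 100 | grep scsi0
scsi0: ceph-pool:vm-100-disk-0,cache=writeback,size=32G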