qm rescan seems to not check CEPH storages

VictorSTS

We had a hardware issue in a 3-node cluster with both local and CEPH storages. We had to move some VMs around, both between nodes and between storages. All VMs are running correctly, albeit on just two nodes until we get replacement hardware.

Somehow during this procedure (I probably forgot to tick "remove source" while moving disks between storages), some "orphan" disks were left on the CEPH storage whose IDs match the IDs of a couple of VMs.

The modification timestamp (rbd info poolname/imagename) shows that they haven't been modified for a couple of days. Still, I want to check the contents of those disks, so I tried qm rescan --vmid VMID to add the disk to the VM config. Unfortunately, it does nothing.
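
For reference, these are the command forms I used (poolname, imagename and VMID are placeholders here, not the real names):

Code:
# check when the image was last modified
rbd info poolname/imagename

# try to pick up the orphan disk into the VM config
qm rescan --vmid VMID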

Is that command supposed to scan CEPH storages for "missing/orphan" disks which may belong to a given VM?
If it is, why isn't it adding the disk to the VM config?

Thanks!
 
Could you provide the output of qm rescan --dryrun 1?
Please also provide the output of cat /etc/pve/storage.cfg.
 
Of course, here it goes:

Could you provide the output of qm rescan --dryrun 1?
qm rescan --dryrun 1 --vmid 2005 NOTE: running in dry-run mode, won't write changes out! rescan volumes...undefined

Please also provide the output of cat /etc/pve/storage.cfg.

Code:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

rbd: ceph1
        content rootdir,images
        krbd 0
        pool ceph1

rbd: ceph2
        content rootdir,images
        krbd 0
        pool ceph2

rbd: ceph3
        content images,rootdir
        krbd 0
        pool ceph3

The "orphan" disk for VM 2005 is in storage "ceph3". By the way, this is v6.4 (pve-manager/6.4-13/9f411e79 (running kernel: 5.4.140-1-pve))
 
Please provide the output of rbd ls for pool ceph3.
And please also provide the output of pvesm list ceph3.
 
Here they are:

Code:
rbd ls ceph3
vm-2000-disk-0
vm-2000-disk-1
vm-2001-disk-0
vm-2002-disk-0
vm-2003-disk-0
vm-2003-disk-1
vm-2003-disk-2
vm-2003-disk-3
vm-2003-disk-4
vm-2004-disk-0
vm-2005-disk-1

Code:
 pvesm list ceph3
Volid                       Format  Type              Size VMID
ceph3:vm-2000-disk-0 raw     images     64424509440 2000
ceph3:vm-2000-disk-1 raw     images    418759311360 2000
ceph3:vm-2001-disk-0 raw     images    107374182400 2001
ceph3:vm-2002-disk-0 raw     images    107374182400 2002
ceph3:vm-2003-disk-0 raw     images     85899345920 2003
ceph3:vm-2003-disk-1 raw     images    483183820800 2003
ceph3:vm-2003-disk-2 raw     images    220117073920 2003
ceph3:vm-2003-disk-3 raw     images    332859965440 2003
ceph3:vm-2003-disk-4 raw     images    332859965440 2003
ceph3:vm-2004-disk-0 raw     images     68719476736 2004
ceph3:vm-2005-disk-1 raw     images    171798691840 2005

This is the current VM config:

Code:
cat /etc/pve/qemu-server/2005.conf
agent: 1
boot: cdn
bootdisk: scsi0
cores: 4
cpu: host,flags=+md-clear;+pcid;+spec-ctrl;+ssbd;+hv-tlbflush;+aes
ide2: none,media=cdrom
memory: 16384
name: VM-SRVSQL
net0: virtio=62:B2:48:66:AA:74,bridge=vmbr0
numa: 1
ostype: win8
scsi0: local-lvm:vm-2005-disk-0,discard=on,size=160G,ssd=1
scsi1: local-lvm:vm-2005-disk-1,discard=on,size=60G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=OMITED
sockets: 2
startup: order=2
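
For what it's worth, ceph3:vm-2005-disk-1 is not referenced anywhere in that config. If I understand qm rescan correctly, it should pick up such an unreferenced volume and add it to the config as an unused entry, roughly like this (hypothetical line, not actual output):

Code:
# config line I would expect qm rescan to add to 2005.conf (hypothetical)
unused0: ceph3:vm-2005-disk-1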

Thanks!
 
Code:
 pvesm list ceph3
Volid                       Format  Type              Size VMID
cephNavision:vm-2000-disk-0 raw     images     64424509440 2000
cephNavision:vm-2000-disk-1 raw     images    418759311360 2000
cephNavision:vm-2001-disk-0 raw     images    107374182400 2001
cephNavision:vm-2002-disk-0 raw     images    107374182400 2002
cephNavision:vm-2003-disk-0 raw     images     85899345920 2003
cephNavision:vm-2003-disk-1 raw     images    483183820800 2003
cephNavision:vm-2003-disk-2 raw     images    220117073920 2003
cephNavision:vm-2003-disk-3 raw     images    332859965440 2003
cephNavision:vm-2003-disk-4 raw     images    332859965440 2003
cephNavision:vm-2004-disk-0 raw     images     68719476736 2004
cephNavision:vm-2005-disk-1 raw     images    171798691840 2005
Something seems wrong here.
The storage ceph3 is configured with ceph3 as its pool, but this output lists the disks on cephNavision.
Why is that? There's no mention of cephNavision anywhere in your storage config.
 
Sorry, that happens when I try to anonymize the output and don't pay enough attention to it. I mixed up the names, but both refer to exactly the same pool: ceph3 == cephNavision.

I've edited the output above with the corrected names, like this:

Code:
 pvesm list ceph3
Volid                       Format  Type              Size VMID
ceph3:vm-2000-disk-0 raw     images     64424509440 2000
ceph3:vm-2000-disk-1 raw     images    418759311360 2000
ceph3:vm-2001-disk-0 raw     images    107374182400 2001
ceph3:vm-2002-disk-0 raw     images    107374182400 2002
ceph3:vm-2003-disk-0 raw     images     85899345920 2003
ceph3:vm-2003-disk-1 raw     images    483183820800 2003
ceph3:vm-2003-disk-2 raw     images    220117073920 2003
ceph3:vm-2003-disk-3 raw     images    332859965440 2003
ceph3:vm-2003-disk-4 raw     images    332859965440 2003
ceph3:vm-2004-disk-0 raw     images     68719476736 2004
ceph3:vm-2005-disk-1 raw     images    171798691840 2005
 
qm rescan --dryrun 1 --vmid 2005 NOTE: running in dry-run mode, won't write changes out! rescan volumes...undefined
Is that the exact output you get, or did you change anything there as well?
That 'undefined' shouldn't be there in the same line and I couldn't find anything in the code where this would be printed.
 
No idea where that "undefined" came from. I ran it again on the server and no "undefined" is present. The exact output is:

Code:
qm rescan --dryrun 1 --vmid 2005
NOTE: running in dry-run mode, won't write changes out!
rescan volumes...

Sorry for the confusion, mira.
 
