[SOLVED] Question - Not All VM HDs Visible With lvdisplay From Host?

MrJFA

New Member
Mar 21, 2025
I'm hoping that someone can help with a query. I have a cluster of three v7.4 servers using Ceph for storage, and I've found that some of the running VMs' disks are visible from the host with lvdisplay / lvs, but some are not. As far as I can tell, the configs are all identical, but there's obviously a reason. Can anyone explain it, or point me in the direction of some documentation, please?
 
Hi @bbgeek17

Of course, apologies. On one of the hosts (though the "issue" is apparently on all three), I have a number of running VMs:

Bash:
root     1175245       1 52  2024 ?        92-08:18:49 /usr/bin/kvm -id 143
root     1177481       1 99  2024 ?        180-08:02:15 /usr/bin/kvm -id 151
root     1181812       1 99  2024 ?        337-00:17:23 /usr/bin/kvm -id 152

I've cropped the full output, as there are quite a few, but of these three VMs, only the disk from one is visible with lvs / lvdisplay.

Code:
  vm-152-disk-0    pve   Vwi-a-tz-- 150.00g data   0.00
  vm-174-disk-0    pve   Vwi-aotz-- 100.00g data  55.92
  vm-190-disk-0    pve   Vwi-aotz--  64.00g data  16.13
  vm-194-cloudinit pve   Vwi-aotz--   4.00m data   9.38
  vm-194-disk-0    pve   Vwi-aotz-- 128.00g data 100.00

I'm just curious to know why this is.
 
Without further information, and based on the "pve" volume group name, the disk that shows up is most likely located on local-lvm storage.
The other disks are probably located on Ceph. Since Ceph RBD disks are _not_ LVM'ed, you shouldn't see them in "lvs" output.

To confirm this theory, run and examine the output of the following commands:
pvesm status
cat /etc/pve/storage.cfg
qm status [vm-id]
qm config [vm-id]
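
If you'd rather check every VM in one pass, something along these lines should work (a quick sketch, assuming standard PVE tooling and that all disks sit on the scsi/virtio/sata/ide buses):

Bash:
# Print the storage backend of every disk on every VM on this node.
# "qm list" prints the VMID in the first column; skip the header row.
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    echo "== VM $vmid =="
    qm config "$vmid" | grep -E '^(scsi|virtio|sata|ide)[0-9]+:'
done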

Cheers

PS: you may see them in "lsblk" output. However, we don't use Ceph here, and I have not confirmed this by experiment.
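
If you do want to look at Ceph-backed disks directly, the rbd CLI, rather than the LVM tools, is where they would show up. A rough sketch only (pool and image names are assumptions on my part; adjust to match your setup):

Code:
# List all images in the RBD pool
rbd ls -p storage

# Details for a single VM disk image
rbd info storage/vm-143-disk-0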


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
This was my understanding too, that the disks are on Ceph, but the config of each of the VMs shows that they're all using the same location :(

So, this is the storage.cfg:

Code:
dir: local
    path /var/lib/vz
    content vztmpl,backup,iso

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

lvmthin: pvedata01
    thinpool pvedata01_pool
    vgname pvedata01_vg
    content images
    nodes proxmox2

lvmthin: pvedata
    thinpool pvedata
    vgname pvedata
    content rootdir,images
    nodes proxmox1

rbd: storage
    content rootdir,images
    krbd 0
    pool storage

cifs: labstorage
    path /mnt/pve/labstorage
    server 10.0.70.52
    share lab
    content backup,iso,snippets,vztmpl
    nodes proxmox5
    prune-backups keep-all=1
    username appcheck

And this is the output of the commands (I've grepped out scsi, as that's what they're all using):

Code:
143
status: running
boot: order=scsi0;ide2;net0
scsi0: storage:vm-143-disk-0,size=150G
scsihw: virtio-scsi-pci

151
status: running
boot: order=scsi0;ide2;net0
scsi0: storage:vm-151-disk-0,size=150G
scsihw: virtio-scsi-pci

152
status: running
boot: order=scsi0;ide2;net0
scsi0: storage:vm-152-disk-0,size=150G
scsihw: virtio-scsi-pci

Code:
Name              Type     Status           Total            Used       Available        %
labstorage        cifs   disabled               0               0               0      N/A
local              dir     active        98497780        27241620        66206612   27.66%
local-lvm      lvmthin     active       832884736       203640317       629244418   24.45%
pvedata        lvmthin   disabled               0               0               0      N/A
pvedata01      lvmthin   disabled               0               0               0      N/A
storage            rbd     active      7069573077      6729395765       340177312   95.19%
 
vm-152-disk-0 pve Vwi-a-tz-- 150.00g data 0.00
Another possibility is that this is an orphan disk. It exists, and it is in your "pve" LVM volume group, which is backing your local-lvm storage.
If you don't have an "unused" entry in your 152 config now, you will most likely see one after running "qm disk rescan --vmid 152".
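
To check, something like this should do (a sketch; the VG and pool names are taken from your storage.cfg):

Code:
# Re-scan storages; orphaned volumes get added to the VM config as unusedN entries
qm disk rescan --vmid 152
qm config 152 | grep -i unused

# Compare the two backends: the LVM-thin copy in the "pve" VG vs the RBD image in the "storage" pool
lvs pve | grep vm-152
rbd ls -p storage | grep vm-152

If an unused entry pointing at local-lvm does appear, it can then be removed from the GUI or with "qm disk unlink".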


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Another possibility is that this is an orphan disk. It exists, and it is in your "pve" LVM volume group, which is backing your local-lvm storage.
If you don't have an "unused" entry in your 152 config now, you will most likely see one after running "qm disk rescan --vmid 152".


Ah, I wonder if this could be it. If the disk was initially located on a different storage and then migrated, but without the original being deleted, could that be the cause?
 
Ah, I wonder if this could be it. If the disk was initially located on a different storage and then migrated, but without the original being deleted, could that be the cause?
Of course, that would be 100% in line with what one would expect when not setting the "delete after migration" option.
Per "man qm" / qm disk move:
--delete <boolean> (default = 0)
Delete the original disk after successful copy. By default the original disk is kept as unused disk.
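
For future moves, passing the flag would look something like this (illustrative only, using VM 152 and your "storage" pool as an example):

Code:
# Move scsi0 to the "storage" pool and remove the source volume after a successful copy
qm disk move 152 scsi0 storage --delete 1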

There could be other reasons, but it sounds like you have your "culprit".


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox