Failed to find logical volume "pve/vm-100-disk-0"

speedy7.1

Oct 4, 2023
Hi,

This morning all of my VMs were unable to find the HDDs they boot from.

I'm facing the following error when starting a VM on Proxmox:
TASK ERROR: can't activate LV '/dev/pve/vm-100-disk-0': Failed to find logical volume "pve/vm-100-disk-0"

In the morning the storage with the VM disks was showing "Status: unknown", which I solved by re-adding it to the node. I went through a couple of threads trying to understand the issue, but without success.
Even restoring a backup does not work and produces the following error:
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/backup/dump/vzdump-qemu-100-2023_08_05-00_00_02.vma.zst | vma extract -v -r /var/tmp/vzdumptmp27655.fifo - /var/tmp/vzdumptmp27655' failed: lvcreate 'pve/vm-100-disk-0' error: Volume group "pve" has insufficient free space (4094 extents): 17920 required.
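
If I read the error correctly, the restore needs 17920 extents of 4 MiB each, i.e. 70 GiB, while the volume group only has 4094 free extents (roughly 16 GiB).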

So I think the disks are still located on the storage, but they don't appear in the VM Disks list.

Here is some information about the system:
lvs:
Code:
  LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- <181.18g             0.00   0.89                          
  root pve -wi-ao----   69.50g                                                  
  swap pve -wi-ao----    8.00g

vgs:
Code:
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   3   0 wz--n- 278.37g 15.99g

lsblk:
Code:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 278.9G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part
└─sda3               8:3    0 278.4G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  69.5G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   1.9G  0 lvm
  │ └─pve-data     253:4    0 181.2G  0 lvm
  └─pve-data_tdata 253:3    0 181.2G  0 lvm
    └─pve-data     253:4    0 181.2G  0 lvm
sdb                  8:16   0 817.3G  0 disk
sr0                 11:0    1  1024M  0 rom

The content of /etc/pve/storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

nfs: backup
        export /volume1/Sicherung
        path /mnt/pve/backup
        server 10.59.0.15
        content backup
        prune-backups keep-daily=7,keep-last=3,keep-weekly=4

lvm: guests
        vgname pve
        content rootdir,images
        nodes JMSRVPRX
        shared 0

lvscan:
Code:
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [69.50 GiB] inherit
  ACTIVE            '/dev/pve/data' [<181.18 GiB] inherit

Anyone got an idea, how I can bring up my VMs again?
Thank you very much.

Best regards.
 
It looks like you only have 16 GB of free space in the LVM volume group.
You are using lvmthin, which allows you to over-provision the VG.

How large is the disk of VM 100, and how full is it?
If you don't have discard enabled on the VM disk, the underlying disk will fill up even though you have deleted files inside the VM.

Please post the output of
Code:
cat /etc/pve/qemu-server/100.conf

To get the VM running again, I recommend you free up enough space to start the VM.
Make sure you have discard enabled on the disk, and run fstrim / inside the guest to discard unused blocks on a mounted filesystem.
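
For example, a rough sketch (the VG name depends on your setup, and fstrim only works like this inside a Linux guest):
Code:
# on the Proxmox host: check how full the thin pool and the thin LVs are
lvs -a pve

# inside a Linux guest: discard unused blocks on the mounted root filesystem
fstrim -v /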
 
My VM 100 disk is 70 GB and I think about 35 GB of it is used (Windows Server 2019 Std).
The output of cat /etc/pve/qemu-server/100.conf:
Code:
memory: 128

How can I free up space on my storage without deleting too much?
Do you think I can run my other VMs again too after clearing enough space?
 
Output of df -h:
Code:
Filesystem                     Size  Used Avail Use% Mounted on
udev                            16G     0   16G   0% /dev
tmpfs                          3.2G   17M  3.2G   1% /run
/dev/mapper/pve-root            68G   35G   31G  53% /
tmpfs                           16G   43M   16G   1% /dev/shm
tmpfs                          5.0M     0  5.0M   0% /run/lock
tmpfs                           16G     0   16G   0% /sys/fs/cgroup
/dev/fuse                       30M   20K   30M   1% /etc/pve
10.59.0.15:/volume1/Sicherung  1.8T  1.8T   55G  98% /mnt/pve/backup
tmpfs                          3.2G     0  3.2G   0% /run/user/0

It is interesting that the volume of the VM disks (/dev/guests) is not listed.
 
"df" only shows _mounted_ disks, the LVM VM images are not mounted, they are passed through to KVM as raw volumes.
You can start with the following commands to examine your environment:
Code:
lsblk
pvs
pvdisplay
vgs
vgdisplay
lvs
lvdisplay

You can't just delete a file inside your VM and expect the space to be freed all the way up to the parent VG/LV. The space is marked "free" inside the VM, but it is not released by default, in the hope that it can be quickly re-used for the next allocation.
You want to start reading about "trim" - https://gist.github.com/hostberg/86bfaa81e50cc0666f1745e1897c0a56
Perhaps remove unneeded LVs, or expand your VG with a new physical disk.
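For example, if you decide to grow the VG with a new, empty disk (the device name /dev/sdc below is just a placeholder), it is roughly:
Code:
# WARNING: only use a new, empty disk - this wipes any data on it
pvcreate /dev/sdc
# add the new physical volume to the existing volume group "pve"
vgextend pve /dev/sdc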

A lesson from this: when using thin storage and over-provisioning, have a good alerting system in place and a plan to resolve the over-provisioning on the spot.

Good luck


 
lsblk and lvs are listed in my first post.

pvs
Code:
  Error reading device /dev/sdb at 0 length 512.
  Error reading device /dev/sdb at 0 length 4096.
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda3  pve lvm2 a--  278.37g 15.99g

pvdisplay
Code:
  Error reading device /dev/sdb at 0 length 512.
  Error reading device /dev/sdb at 0 length 4096.
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               278.37 GiB / not usable 2.98 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              71263
  Free PE               4094
  Allocated PE          67169
  PV UUID               hw6V6r-yxsc-KVt9-ddlI-oIIp-l2ng-KElwYJ

vgs
Code:
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   3   0 wz--n- 278.37g 15.99g

vgdisplay
Code:
  --- Volume group ---
  VG Name               pve
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               278.37 GiB
  PE Size               4.00 MiB
  Total PE              71263
  Alloc PE / Size       67169 / <262.38 GiB
  Free  PE / Size       4094 / 15.99 GiB
  VG UUID               JU5Xh6-V8O4-Pxmd-ml8d-J3HN-OYay-Q4wwkY

lvdisplay
Code:
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                l9IyMT-lzuK-HhtH-KoGg-L0OS-bGWw-TIi7Bp
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-12-26 13:45:13 +0100
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                9SR0mk-YqDJ-7FQf-tCIo-pTEx-f8Rs-m2Zhy8
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-12-26 13:45:13 +0100
  LV Status              available
  # open                 1
  LV Size                69.50 GiB
  Current LE             17792
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
 
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                BgQEfn-lbYk-qmzp-JO35-ffel-WVhW-6RH7Iy
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-12-26 13:45:13 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <181.18 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.89%
  Current LE             46381
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

I think I should have had enough free space (see the attached screenshot, 2023-10-04 16_24_24.png).
 
Did you redact the VM disks from your previous lsblk output?
 
Did you check the "Discard" checkbox under the Hardware tab for your hard disks?

This allows the guest to pass discard/TRIM requests through to the thin-provisioned storage so that freed space is given back. You can still enable it now, but for it to take effect you have to restart the VM.
(screenshot: hard disk edit dialog showing the Discard checkbox)
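
If you prefer the CLI, a minimal sketch (storage, bus and size here are placeholders; they must match the existing disk line in your VM config):
Code:
# re-specify the existing drive options and add discard=on
qm set 100 --scsi0 local-lvm:vm-100-disk-0,size=70G,discard=on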
 
No, I don't think so. What exactly would I need to do to redact the VM disks?
My current lsblk looks like this:

Code:
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 278.9G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part
└─sda3               8:3    0 278.4G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  69.5G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   1.9G  0 lvm 
  │ └─pve-data     253:4    0 181.2G  0 lvm 
  └─pve-data_tdata 253:3    0 181.2G  0 lvm 
    └─pve-data     253:4    0 181.2G  0 lvm 
sdb                  8:16   0 817.3G  0 disk
sr0                 11:0    1  1024M  0 rom
 
If this is your only VM and something is occupying space but you have no idea what, you could try to find it with:
Code:
apt install ncdu
cd /
# -x: stay on this filesystem (do not descend into other mounts like the NFS backup)
ncdu -x

BUT BE CAREFUL: if you delete anything there, there is no recovery! Double-check before you delete, and consider backing things up to another storage first!
 
I've got three VMs on my node.
I tried to restore VM 100; VM 101 and VM 102 are untouched so far, but not running either.
 
If it is OK, I would like to see the config files of these VMs. You can get them with cat /etc/pve/qemu-server/<vmid>.conf.
 
Sure, I'm thankful for any help:
cat /etc/pve/qemu-server/100.conf
Code:
memory: 128
cat /etc/pve/qemu-server/101.conf
Code:
agent: 1
boot: order=ide0;ide2;net0
cores: 2
ide0: guests:vm-101-disk-0,size=70G
ide2: none,media=cdrom
memory: 8192
name: JMSRVETC
net0: e1000=0E:D2:53:90:66:7A,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=88367f84-4211-4d2b-a683-ab818f0ebb07
sockets: 1
unused0: guests:vm-101-disk-1
vmgenid: 203b43f8-e78e-4ae4-b360-d4fc5bc9dc22
cat /etc/pve/qemu-server/102.conf
Code:
agent: 1
boot: order=ide0;ide2;net0
cores: 4
ide0: guests:vm-102-disk-0,size=70G
ide1: guests:vm-102-disk-1,size=100G
ide2: none,media=cdrom
memory: 16384
name: JMSRVINSOFT
net0: e1000=FE:EB:17:5F:0F:BB,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=6fa5abf8-0654-4adc-a49a-63540bc4aee3
sockets: 1
vmgenid: 8a39fb98-66fd-4d9e-9b91-064eb48497bf

As mentioned, I tried to restore the backup of ID 100, so there is no complete config at the moment.
 
There is something seriously wrong with your setup. VMs 101 and 102 seem to occupy a total of 240 GB of storage in your volume group 'pve'. However, the logical volumes do not seem to exist. Can you start these VMs?

There is also a device 'sdb' listed which does not seem to be working. Did you recently remove a hard disk? What role did that disk play?

It would also help if you could attach your journal:
journalctl --since '2023-10-03' > $(hostname)-journal.txt
 
No, I cannot start the VMs.

I replaced a hard drive at the end of June that SMART had marked as defective.

Here is a link to the journal (the upload limit was exceeded here):
https://we.tl/t-owwgB7jHbf
Unfortunately, I rebooted the host yesterday, so the journal only starts at 2023-10-04 10:05:03 CEST.
 
I am not entirely sure what happened, but I think your system lost the disk '/dev/sdb'.

Can you post the output of qm start 102?
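
You could also check whether the kernel and LVM still see anything usable on sdb, for example:
Code:
# kernel messages mentioning the disk
dmesg | grep -i sdb
# does LVM see a physical volume on it?
pvs -a
# SMART health, if smartmontools is installed
smartctl -a /dev/sdb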
 
Here is the output of qm start 102:
Code:
can't activate LV '/dev/pve/vm-102-disk-0':   Failed to find logical volume "pve/vm-102-disk-0"
 
