[SOLVED] VM won't start after reboot. Could not open disks in '/dev/vgrp' - QEMU exit code 1

n0rtpeak

New Member
Apr 10, 2022
Hello!

I don't know what happened to Proxmox, but after a simple reboot my main VM (running TrueNAS) won't start.

Error message:
Code:
kvm: -drive file=/dev/vgrp/vm-100-disk-0,if=none,id=drive-scsi1,format=raw,cache=none,aio=io_uring,detect-zeroes=on: Could not open '/dev/vgrp/vm-100-disk-0': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1

I've checked /dev/vgrp in the shell, and it doesn't exist.
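(For context: /dev/vgrp/vm-100-disk-0 is normally just a udev-managed symlink to the real device-mapper node /dev/dm-N, so the link can be missing even while the LV itself is fine. A minimal check, using the path from the error above:)

```shell
# /dev/<vg>/<lv> is only a symlink to the real /dev/dm-N node,
# so test the link itself rather than assuming the LV is gone.
if [ -e /dev/vgrp/vm-100-disk-0 ]; then
    link_state="present"
else
    link_state="missing"
fi
echo "device link: $link_state"
```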

pveversion:
Code:
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

lvm vgdisplay:

Code:
lvm vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  32
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                7
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <931.01 GiB
  PE Size               4.00 MiB
  Total PE              238338
  Alloc PE / Size       234244 / <915.02 GiB
  Free  PE / Size       4094 / 15.99 GiB
  VG UUID               cBCMUt-ejS2-sfyN-xfb4-tvez-fq3A-sNk6jj
 
  --- Volume group ---
  VG Name               vgrp
  System ID            
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  28
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               14.55 TiB
  PE Size               4.00 MiB
  Total PE              3815444
  Alloc PE / Size       3801146 / 14.50 TiB
  Free  PE / Size       14298 / 55.85 GiB
  VG UUID               2mes6n-21ND-OplO-d0ye-PALR-S3HR-SV0fga

Thank you in advance!
 
Hi,
does the volume show up in the output of lvs? Please also share your /etc/pve/storage.cfg.
 
Hi Fabian!

Thank you for your reply.

storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvmthin: thpl
        thinpool thpl
        vgname vgrp
        content images,rootdir
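(Side note: for an lvmthin storage, Proxmox builds the disk path QEMU opens from the storage's vgname plus the volume name, which is why the error above points at /dev/vgrp. A sketch of that mapping, with the names taken from this config:)

```shell
# Path QEMU is handed for a volume on the "thpl" storage above:
# /dev/<vgname>/<volume>, i.e. the LVM device link, not a plain file.
vgname="vgrp"
volname="vm-100-disk-0"
disk_path="/dev/${vgname}/${volname}"
echo "$disk_path"    # -> /dev/vgrp/vm-100-disk-0
```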

Yes, lvs shows this volume group:

Code:
lvm> vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  pve    1   7   0 wz--n- <931.01g 15.99g
  vgrp   4   5   0 wz--n-   14.55t 55.85g

Any ideas? Thank you in advance!
 
Please also post the output of lvs (not vgs).
 
Oh sorry, I mixed those up. lvs output:

Code:
lvs
  LV            VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve  twi-aotz-- <794.79g             6.95   0.56                           
  root          pve  -wi-ao----   96.00g                                                   
  swap          pve  -wi-ao----    8.00g                                                   
  vm-100-disk-0 pve  Vwi-a-tz--   80.00g data        3.31                                   
  vm-100-disk-1 pve  Vwi-a-tz--    4.00m data        14.06                                 
  vm-101-disk-0 pve  Vwi-a-tz--  500.00g data        10.51                                 
  vm-101-disk-1 pve  Vwi-a-tz--    4.00m data        14.06                                 
  thpl          vgrp twi-aotz--   14.50t             39.79  29.79                           
  vm-100-disk-0 vgrp Vwi-a-tz--   <3.91t thpl        35.94                                 
  vm-100-disk-1 vgrp Vwi-a-tz--   <3.91t thpl        35.94                                 
  vm-100-disk-2 vgrp Vwi-a-tz--   <3.91t thpl        37.91                                 
  vm-100-disk-3 vgrp Vwi-a-tz--   <3.91t thpl        37.91
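(To read the Attr column: per the lvs man page, position 1 is the volume type ('V' = thin volume, 't' = thin pool) and position 5 is the state ('a' = active). A quick decode of the attribute string above:)

```shell
# Decode the lvs Attr string for vm-100-disk-0 in vgrp (Vwi-a-tz--):
# char 1 = volume type ('V' = thin volume), char 5 = state ('a' = active)
attr="Vwi-a-tz--"
state=$(printf '%s' "$attr" | cut -c5)
if [ "$state" = "a" ]; then
    echo "LV is active"
else
    echo "LV is inactive"
fi
```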
 
Code:
  thpl          vgrp twi-aotz--   14.50t             39.79  29.79                          
  vm-100-disk-0 vgrp Vwi-a-tz--   <3.91t thpl        35.94
Well, the disk definitely seems to be there, and it's also active. But maybe something went wrong with the device link. Please check ls -l /dev/vgrp, then try running vgscan --mknodes and see if the link appears.
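(What vgscan --mknodes repairs, illustrated in a scratch directory — the dm-7 name here is hypothetical; the real minor number comes from the kernel's device-mapper table:)

```shell
# Simulate a missing /dev/<vg>/<lv> link: the dm node exists,
# but the per-VG symlink to it was never (re)created after boot.
devdir=$(mktemp -d)
mkdir -p "$devdir/vgrp"
: > "$devdir/dm-7"                 # stand-in for the real /dev/dm-N node
# vgscan --mknodes (re)creates exactly this kind of symlink:
ln -sf ../dm-7 "$devdir/vgrp/vm-100-disk-0"
target=$(readlink "$devdir/vgrp/vm-100-disk-0")
echo "restored link -> $target"    # -> ../dm-7
rm -rf "$devdir"
```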
 
Many thanks, Fabian!!! vgscan --mknodes did the trick!

Do you have any idea why this might have happened?
 
Not really. Just a wild guess, but maybe some (non-critical) error while LVM was initializing? Anything interesting in /var/log/syslog?
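(One way to sift that log — the grep patterns are just guesses at what an LVM/udev initialization error might mention:)

```shell
# Count LVM / device-mapper / udev lines in the boot log; on
# systemd-only setups the same messages live in journalctl -b.
logfile=/var/log/syslog
if [ -r "$logfile" ]; then
    matches=$(grep -icE 'lvm|device-mapper|udev' "$logfile" || true)
else
    matches="n/a (no $logfile here; check journalctl -b)"
fi
echo "lvm-related log lines: $matches"
```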
 
