[SOLVED] No CTs/VMs start after unplanned shutdown. /dev/pve/vm-100-disk-0 does not exist. LVS data corrupted?

begleysm
FIXED: See solution here: https://forum.proxmox.com/threads/n...ot-exist-lvs-data-corrupted.86664/post-380359

Hello all,

I had a UPS go bad and my Proxmox machine shut down uncleanly (power essentially just pulled). Now that I've rebooted it, I can't start any containers or virtual machines.

When I run something like sudo lxc-start -n 117 -F -l DEBUG -o /tmp/lxc-117.log I get the following (some INFO messages redacted to stay under the 15,000-word limit on posts):

lxc-start 117 20210327214616.161 INFO conf - conf.c:run_script_argv:356 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "117", config section "lxc"
lxc-start 117 20210327214616.880 DEBUG conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 117 lxc pre-start with output: mount: special device /dev/pve/vm-117-disk-0 does not exist

lxc-start 117 20210327214616.880 DEBUG conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 117 lxc pre-start with output: command 'mount /dev/pve/vm-117-disk-0 /var/lib/lxc/117/rootfs//' failed: exit code 32

lxc-start 117 20210327214616.889 ERROR conf - conf.c:run_buffer:335 - Script exited with status 32
lxc-start 117 20210327214616.889 ERROR start - start.c:lxc_init:861 - Failed to run lxc.hook.pre-start for container "117"
lxc-start 117 20210327214616.889 ERROR start - start.c:__lxc_start:1944 - Failed to initialize container "117"
lxc-start 117 20210327214616.889 ERROR lxc_start - tools/lxc_start.c:main:330 - The container failed to start
lxc-start 117 20210327214616.889 ERROR lxc_start - tools/lxc_start.c:main:336 - Additional information can be obtained by setting the --logfile and --logpriority options

I think the most relevant line is: special device /dev/pve/vm-117-disk-0 does not exist (exit code 32 from mount just means a generic mount failure).
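
In case it helps anyone debugging the same symptom: as I understand it, the /dev/pve/<name> device nodes only exist for logical volumes that are currently active, so a missing node suggests the volume isn't activated rather than deleted. A quick way to check (these are standard LVM commands, but treat this as my amateur sketch):

# Lists every LV and whether it is ACTIVE or inactive
lvscan
# Or show just name + attribute string; the 5th attr character is
# 'a' when the LV is active and '-' when it is not
lvs -o lv_name,lv_attr pve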

After doing some Googling and reading on these forums, I've also run lsblk -f, which gave
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1
├─sda2 vfat BF95-30FC
└─sda3 LVM2_member Ok85Dk-zsEx-oXEy-oOlv-Jqyk-Mokd-XPw0qv
├─pve-swap swap fa815645-5d25-4930-bfc0-23f320e7b81e [SWAP]
└─pve-root ext4 97d9a641-b756-46c2-99ba-388648f793d8 /

I also ran lvs -a, which gave
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi---tz-- 1.09t
[data_tdata] pve Twi------- 1.09t
[data_tmeta] pve ewi------- 11.36g
[lvol0_pmspare] pve ewi------- 11.36g
root pve -wi-ao---- 96.00g
swap pve -wi-ao---- 8.00g
vm-100-disk-0 pve Vwi---tz-- 8.00g data
vm-101-disk-0 pve Vwi---tz-- 8.00g data
vm-102-disk-0 pve Vwi---tz-- 128.00g data
vm-103-disk-1 pve Vwi---tz-- 8.00g data
vm-104-disk-1 pve Vwi---tz-- 8.00g data
vm-105-disk-1 pve Vwi---tz-- 128.00g data
vm-106-disk-0 pve Vwi---tz-- 8.00g data
vm-107-disk-1 pve Vwi---tz-- 8.00g data
vm-108-disk-1 pve Vwi---tz-- 16.00g data
vm-109-disk-0 pve Vwi---tz-- 8.00g data
vm-110-disk-1 pve Vwi---tz-- 8.00g data
vm-111-disk-1 pve Vwi---tz-- 8.00g data
vm-112-disk-1 pve Vwi---tz-- 8.00g data
vm-113-disk-0 pve Vwi---tz-- 32.00g data
vm-114-disk-0 pve Vwi---tz-- 8.00g data
vm-115-disk-0 pve Vwi---tz-- 8.00g data
vm-116-disk-0 pve Vwi---tz-- 8.00g data
vm-117-disk-0 pve Vwi---tz-- 10.00g data
vm-118-disk-0 pve Vwi---tz-- 8.00g data
vm-119-disk-1 pve Vwi---tz-- 8.00g data
vm-120-disk-0 pve Vwi---tz-- 8.00g data
vm-121-disk-0 pve Vwi---tz-- 8.00g data
vm-122-disk-0 pve Vwi---tz-- 8.00g data
vm-123-disk-0 pve Vwi---tz-- 8.00g data
vm-124-disk-0 pve Vwi---tz-- 32.00g data

and vgs -a, which gave
VG #PV #LV #SN Attr VSize VFree
pve 1 28 0 wz--n- 1.23t 15.99g

I believe it's telling that lvs -a shows essentially no usage information: other folks' examples had the Data% and Meta% columns filled in, while mine are empty (see my note after the next output block). Running lvs -av gives
LV VG #Seg Attr LSize Maj Min KMaj KMin Pool Origin Data% Meta% Move Cpy%Sync Log Convert LV UUID LProfile
data pve 1 twi---tz-- 1.09t -1 -1 -1 -1 AabVQ4-N9ne-c27m-G2Es-VKcq-5lIC-RP5tBE
[data_tdata] pve 1 Twi------- 1.09t -1 -1 -1 -1 IJmFPg-1djT-XdEZ-qvXC-FB1O-tGyr-bYxoAj
[data_tmeta] pve 1 ewi------- 11.36g -1 -1 -1 -1 xKoJ53-9iGB-6nm7-T06r-0CCe-SiIW-Gg3a9I
[lvol0_pmspare] pve 1 ewi------- 11.36g -1 -1 -1 -1 lNZ7ft-PkgT-KsZt-l5cQ-BEtk-9tSz-V2AAp4
root pve 1 -wi-ao---- 96.00g -1 -1 253 1 WtE4DK-KZdT-SKiW-Rqfa-UcgR-arIY-fn5JZN
swap pve 1 -wi-ao---- 8.00g -1 -1 253 0 uOOvkv-ZVVw-s416-1OZY-8r4c-UssV-c7MYji
vm-100-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data ru8g6E-MPXK-m9M0-KPI2-2wxV-hFV5-uaAH6H
vm-101-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data sEhxiu-fFqt-JxQn-86By-lFCn-n182-QIIu3v
vm-102-disk-0 pve 1 Vwi---tz-- 128.00g -1 -1 -1 -1 data EA4P3M-mf1Y-xyYj-7yrx-ZCrX-aEb1-n0MsBf
vm-103-disk-1 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data 6TdrnE-5ci4-wys0-NGqL-NL7c-mHc5-2P9pkG
vm-104-disk-1 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data gCjlNm-ILBw-fr6D-Iel3-Q9ec-2RcZ-7x0Bfm
vm-105-disk-1 pve 1 Vwi---tz-- 128.00g -1 -1 -1 -1 data E4i6xO-CtTz-Otul-EWTb-vaaY-1QOC-2gOyZI
vm-106-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data xuODxA-XS6I-TuYJ-iwrp-67SB-f6kJ-P445Ev
vm-107-disk-1 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data bA97Tm-bXbD-LgZ3-B8fa-suTw-KeHk-xHRl4Y
vm-108-disk-1 pve 1 Vwi---tz-- 16.00g -1 -1 -1 -1 data fAjAod-6x6g-xQ5S-Dd4b-BC10-isiw-cI1Nft
vm-109-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data X7TrO5-Gq8D-0jLP-aJ8j-0LO6-0OV4-KcEFfF
vm-110-disk-1 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data CjKN8o-0HIz-GxY0-ddL4-3Vuk-Qoqj-yB99Ag
vm-111-disk-1 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data NL9d3o-Nmoh-zrjc-6uhU-5mW2-qCKS-vwa8RH
vm-112-disk-1 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data 0jywWb-gw2m-haW5-Lwz9-T03A-EkrL-o0r42b
vm-113-disk-0 pve 1 Vwi---tz-- 32.00g -1 -1 -1 -1 data jkgK6e-6hfO-jYNZ-jZi4-0kzE-kMwL-2j3A2i
vm-114-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data rxHUOd-Fx1G-35wT-hkdT-xgF9-AjII-9bfdam
vm-115-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data 5URXuI-cj1Q-T0O3-rJMy-TTIg-rqPa-DB2BJr
vm-116-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data J2BeJI-VcWi-Nwqk-ypNX-Zo2j-VcWr-RM0jAh
vm-117-disk-0 pve 1 Vwi---tz-- 10.00g -1 -1 -1 -1 data lbuRX1-O3Lf-Pexd-34fo-1sN0-Kjln-Vh6HZv
vm-118-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data 1yBvBG-v5v9-z8st-w6XQ-3vyv-jwz6-wMmMkB
vm-119-disk-1 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data cVcTKP-sV6w-9m4J-QXMI-dg5I-xmk2-4SM6kO
vm-120-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data PlD9qf-gz9W-mcm1-ecZh-xAh5-aFdT-65kQfo
vm-121-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data dBnKj9-MQeV-2Q0i-Mmr7-KZFA-6NU6-t9LTtf
vm-122-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data Z9DONH-G6TP-0TMd-3JcG-E0A4-4xov-5ViBOn
vm-123-disk-0 pve 1 Vwi---tz-- 8.00g -1 -1 -1 -1 data BYEMNb-dW4s-XHxv-dnkz-n2zG-3r2A-lSiOdv
vm-124-disk-0 pve 1 Vwi---tz-- 32.00g -1 -1 -1 -1 data lCVJRb-v7Nc-TRAa-5x41-qrtx-5YWW-3eG2qD
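
My current reading (hedged, since I'm still learning LVM) is that the empty Data%/Meta% columns, together with the '-' in the fifth character of every Attr string (twi---tz-- for the pool, Vwi---tz-- for the volumes), mean the thin pool and its thin volumes simply are not active, not that the usage data is gone. Trying to activate the pool should at least surface the underlying error:

# Try to activate the thin pool itself
lvchange -ay pve/data
# Or verbosely activate everything in the VG
vgchange -ay -v pve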

And so I think perhaps the LVM metadata is corrupted? Keep in mind, I'm amateur enough that I don't really know what that means yet. A lot of solutions suggest the metadata volume is out of space, but I'm not sure that's my problem. I did find these blog posts about resizing/repairing a thin pool; are they relevant?

Hoping to get some advice before diving in.

Also, if it's relevant, here is a screenshot from the web GUI of Node/Disks/LVM. It says Usage is 99.9%, but it also says there are 15.99 GiB free, so I'm a bit confused.
[Attached screenshot: Screenshot_2021-03-27 geralt - Proxmox Virtual Environment.png]
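
(Later note, and I may be wrong here: I believe that GUI percentage counts allocated space in the volume group, and the thin pool LV reserves its full 1.09t up front, so the VG looks ~100% allocated even if the pool itself is mostly empty. The two views can be separated on the command line:)

# VG view: physical space allocated vs free
vgs -o vg_name,vg_size,vg_free pve
# Pool view: how full the thin pool's data/metadata actually are
# (only populated once the pool is active)
lvs -o lv_name,data_percent,metadata_percent pve/data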

Can anyone shed some light on my issue?

Thank you so much in advance.

- Sean
 
Bump... Anyone out there?

I considered trying the advice at https://charlmert.github.io/blog/2017/06/15/lvm-metadata-repair/ but one of the first things it has you run is lvdisplay /dev/mapper/pve-data, and I don't even have a /dev/mapper/pve-data...

root@server:/dev/mapper# ls -la
total 0
drwxr-xr-x 2 root root 100 Mar 27 17:42 .
drwxr-xr-x 21 root root 4080 Mar 27 17:42 ..
crw------- 1 root root 10, 236 Mar 27 17:39 control
lrwxrwxrwx 1 root root 7 Mar 27 17:39 pve-root -> ../dm-1
lrwxrwxrwx 1 root root 7 Mar 27 17:39 pve-swap -> ../dm-0

EDIT: I figured out that I can address it as pve/data instead of /dev/mapper/pve-data.
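
(For anyone else tripped up by this: the LVM tools accept the VG/LV form directly, and the /dev/mapper node only exists while the LV is active. So, as a sketch:)

# VG/LV notation works whether or not a mapper node exists yet
lvdisplay pve/data
# Once active, the same LV appears as /dev/mapper/pve-data
# (VG name + '-' + LV name, with any literal dashes doubled)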
 
Next update... I read through (and started to implement recommendations from) both https://charlmert.github.io/blog/2017/06/15/lvm-metadata-repair/ and https://www.unixrealm.com/repair-a-thin-pool/ but both really focus on an LVM thin pool that has run out of space. In both cases they have data_tmeta mapped, and I don't seem to, so I can't follow their guides.
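
(If the pool could be activated, I believe this would distinguish "metadata full" from "metadata corrupt": a full metadata area shows Meta% near 100. In my case the pool wouldn't activate at all, so these columns stayed empty:)

# Show the pool's size, metadata size, and usage percentages
lvs -o lv_name,lv_size,lv_metadata_size,data_percent,metadata_percent pve/data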

I stumbled across this guide too: https://www.golinuxcloud.com/recove.../#Step_3_Restore_VG_to_recover_LVM2_partition. I found that I do have many (more than 100) metadata backups under /etc/lvm.

But, alas, running vgcfgrestore --test -f /etc/lvm/archive/pve_04968-320891183.vg pve fails, producing the following output:
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
Consider using option --force to restore Volume Group pve with thin volumes.
Restore failed.
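
(My understanding of that warning, for what it's worth: vgcfgrestore only restores the VG layout from a text archive and cannot rebuild a thin pool's internal metadata, so it refuses to touch VGs containing thin volumes unless forced, and forcing it could make things worse. Listing the available archives is harmless, though:)

# Show all archived metadata versions for the VG, with timestamps
vgcfgrestore --list pve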

Still hunting for a solution. I think, worst case scenario I can start over from scratch and restore all my CTs/VMs one at a time (I have 2x backups of each CT/VM on a network storage location).
 
FIXED!

Unfortunately I'm not precisely sure why...

The main issue appears to be that the pve/data thin pool's metadata got corrupted, which meant I couldn't activate pve/data.

Earlier in the day I ran lvconvert --repair pve/data and it failed with a bad-checksum error.

Later in the day I re-ran it (with the intention of capturing the exact error message) and it succeeded. I was then able to activate pve/data.
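
For the record, the sequence that got the pool back was just:

# Rebuild the thin pool metadata from the damaged copy
lvconvert --repair pve/data
# Then activate the repaired pool
lvchange -ay pve/data

(If I'm reading the lvmthin man page right, the repair keeps the old damaged metadata around in a separate LV, something like data_meta0, for later inspection; I didn't dig into it further.)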

At this point I still couldn't start any CTs/VMs, because I still couldn't activate the individual thin volumes. What I could do was restore each CT/VM from backup, one at a time, through the web GUI. That isn't the most elegant solution, and it may have been possible to clean up/restore the individual thin volumes, but no data was lost this way and I just wanted to be done with the problem.
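
(For anyone who wants the CLI equivalent of those GUI restores: the archive paths and storage name below are made-up placeholders, not my real files:)

# Restore a container from a vzdump archive, overwriting the broken one
pct restore 117 /mnt/backups/vzdump-lxc-117-2021_03_20-02_00_00.tar.zst --storage local-lvm --force
# Same idea for a VM
qmrestore /mnt/backups/vzdump-qemu-102-2021_03_20-02_30_00.vma.zst 102 --storage local-lvm --force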

Anyhow, good luck to anyone else running into this type of issue.
 
