[SOLVED] LXC suddenly won't boot - Failed to run lxc.hook.pre-start

Stacker

Member
May 6, 2020
I have had this LXC running for months, and suddenly it dropped offline and now I can't boot it. I ran some investigations: there is no device file for the drive in /dev/pve, although the disk does show up in the PVE display. I'm not fully aware of how this works; does the device only appear there when the container is booted? I know I have the disk. I'm attaching some files here that I have seen in other posts.

I can't mount it manually, BUT I do see the LV. I have attached my config, a debug log of the boot, and a file with the output of various commands. Any help is greatly appreciated.

pct mount 103
Code:
root@node01:/var/lib# pct mount 103
mount: /var/lib/lxc/103/rootfs: special device /dev/pve/vm-103-disk-0 does not exist.
mounting container failed
command 'mount -o noatime /dev/pve/vm-103-disk-0 /var/lib/lxc/103/rootfs//' failed: exit code 32
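
For reference, LVM only creates the /dev/<vg>/<lv> node while the LV is active, so an error like this usually means the LV was deactivated (although, as it turns out later in this thread, the symlink can also go missing while the LV stays active). A quick way to check the activation bit (the fifth character of the Attr column, 'a' when active):
Code:
lvs -o lv_name,lv_attr pve/vm-103-disk-0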

/dev/pve area
Code:
root@node01:/dev/pve# ls
vm-100-disk-0  vm-102-disk-0  vm-105-disk-0  vm-107-disk-0  vm-109-disk-0  vm-110-disk-1  vm-112-disk-0
vm-101-disk-0  vm-104-disk-0  vm-106-disk-0  vm-108-disk-0  vm-110-disk-0  vm-111-disk-0  vm-113-disk-0

lvdisplay
Code:
--- Logical volume ---
  LV Path                /dev/pve/vm-103-disk-0
  LV Name                vm-103-disk-0
  VG Name                pve
  LV UUID                kGrVdT-kqcD-kvD5-eXZf-u2g6-pKw9-xaPE2J
  LV Write Access        read/write
  LV Creation host, time node01, 2019-12-07 20:31:24 +0000
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                25.00 GiB
  Mapped size            41.82%
  Current LE             6400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:8
 

Hi,
could it be that the LV was deactivated? Try
Code:
lvchange -ay pve/vm-103-disk-0
and you might need to remove the "mounted" lock left behind by the failed pct mount command with pct unlock 103.
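
For anyone following along: a failed pct mount can leave a "mounted" lock entry in the container config. A small sketch for inspecting and clearing it (container ID taken from this thread):
Code:
pct config 103 | grep -i lock   # prints e.g. "lock: mounted" if a stale lock remains
pct unlock 103                  # clears the lock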
 

I unlocked it, then ran lvchange -ay pve/vm-103-disk-0, but it still does not boot, and the disk still does not show up under /dev/pve.
I still get: TASK ERROR: command 'systemctl start pve-container@103' failed: exit code 1
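
When a container fails to start like this, more detail can usually be gathered from the LXC side. A common pattern (the log path is just an example):
Code:
lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103.log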
 
What is the output of the following?
Code:
lvs
ls /dev/mapper/
dmsetup ls
dmsetup info pve-vm--103--disk--0
 
OK here it is:

** Side note: I also lost my NFS mount for my backup location at about the same time I lost this container. Trying to go to the directory, or trying to unmount or remount the NFS share, just causes my server to hang. I also ran a command to show NFS processes; it hung and never returned anything. Perhaps I just need to delete and recreate the mount, but I have not done that yet.

Code:
root@node01:~# lvs
  LV                                VG  Attr       LSize   Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                              pve twi-aotz--   3.40t                    25.00  0.74
  snap_vm-108-disk-1_Fresh          pve Vri---tz-k  40.00g data vm-108-disk-1
  snap_vm-108-disk-1_itemsinstalled pve Vri---tz-k  40.00g data vm-108-disk-1
  vm-100-disk-0                     pve Vwi-aotz--  60.00g data               28.05
  vm-101-disk-0                     pve Vwi-aotz--   1.46t data               23.07
  vm-102-disk-0                     pve Vwi-aotz--  50.00g data               10.59
  vm-103-disk-0                     pve Vwi-a-tz--  25.00g data               41.82
  vm-104-disk-0                     pve Vwi-aotz--  30.00g data               5.49
  vm-105-disk-0                     pve Vwi-a-tz--  40.00g data               8.03
  vm-106-disk-0                     pve Vwi-aotz--  60.00g data               14.23
  vm-107-disk-0                     pve Vwi-aotz--  60.00g data               95.06
  vm-108-disk-0                     pve Vwi-a-tz-- 120.00g data               7.43
  vm-108-disk-1                     pve Vwi-a-tz--  40.00g data               6.00
  vm-109-disk-0                     pve Vwi-aotz-- 200.00g data               94.88
  vm-110-disk-0                     pve Vwi-a-tz--  80.00g data               9.86
  vm-110-disk-1                     pve Vwi-aotz--  80.00g data               18.69
  vm-111-disk-0                     pve Vwi-aotz-- 120.00g data               59.24
  vm-112-disk-0                     pve Vwi-aotz--  30.00g data               38.48
  vm-113-disk-0                     pve Vwi-aotz-- 120.00g data               95.83
root@node01:~# ls /dev/mapper/
control         pve-data-tpool        pve-vm--103--disk--0  pve-vm--107--disk--0  pve-vm--110--disk--0  pve-vm--113--disk--0
pve-data        pve-vm--100--disk--0  pve-vm--104--disk--0  pve-vm--108--disk--0  pve-vm--110--disk--1
pve-data_tdata  pve-vm--101--disk--0  pve-vm--105--disk--0  pve-vm--108--disk--1  pve-vm--111--disk--0
pve-data_tmeta  pve-vm--102--disk--0  pve-vm--106--disk--0  pve-vm--109--disk--0  pve-vm--112--disk--0
root@node01:~# dmsetup ls
pve-vm--102--disk--0    (253:6)
pve-vm--101--disk--0    (253:4)
pve-data-tpool  (253:2)
pve-data_tdata  (253:1)
pve-vm--100--disk--0    (253:5)
pve-vm--109--disk--0    (253:11)
pve-data_tmeta  (253:0)
pve-vm--108--disk--1    (253:19)
pve-vm--108--disk--0    (253:13)
pve-vm--113--disk--0    (253:16)
pve-vm--107--disk--0    (253:10)
pve-data        (253:3)
pve-vm--112--disk--0    (253:15)
pve-vm--106--disk--0    (253:9)
pve-vm--111--disk--0    (253:14)
pve-vm--105--disk--0    (253:12)
pve-vm--110--disk--1    (253:18)
pve-vm--110--disk--0    (253:17)
pve-vm--104--disk--0    (253:7)
pve-vm--103--disk--0    (253:8)
root@node01:~# dmsetup info pve-vm--103--disk--0
Name:              pve-vm--103--disk--0
State:             ACTIVE
Read Ahead:        4096
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 8
Number of targets: 1
UUID: LVM-q02mwqbZ4GPiA2UzSjTA0CKaR8geLe6YkGrVdTkqcDkvD5eXZfu2g6pKw9xaPE2J
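
Reading these outputs together: the device-mapper device is ACTIVE with minor number 253:8, and /dev/mapper/pve-vm--103--disk--0 is present, so only the /dev/pve symlink is missing. One way to confirm that (dm-8 follows from the 253:8 minor above):
Code:
ls -l /dev/dm-8 /dev/mapper/pve-vm--103--disk--0 /dev/pve/vm-103-disk-0
The first two should exist; the last should report "No such file or directory".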
 
** Side note: I also lost my NFS mount for my backup location at about the same time ...

Feel free to open a new thread for that issue.


The LV is definitely present and active, but the link below /dev is missing for some reason. Try using
Code:
vgscan --mknodes
This should create the missing /dev/pve/vm-103-disk-0 link.
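
As background, vgscan --mknodes checks the LVM special files in /dev for active LVs and recreates any that are missing. If that alone does not restore the link, retriggering udev for block devices is another common approach (not suggested in this thread, just a general alternative):
Code:
udevadm trigger --subsystem-match=block
udevadm settle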
 
Thank you, this indeed fixed the problem... Apparently it fixed another VM as well. You can lock/close this thread if you want.

Code:
~# vgscan --mknodes
  Reading all physical volumes.  This may take a while...
  Found volume group "pve" using metadata type lvm2
  The link /dev/pve/vm-103-disk-0 should have been created by udev but it was not found. Falling back to direct link creation.
  The link /dev/pve/vm-108-disk-1 should have been created by udev but it was not found. Falling back to direct link creation.
  Command failed with status code 5.
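
Note that the nonzero status code here appears to reflect only the udev warnings in the messages above; the "Falling back to direct link creation" lines are what actually restore the links, which is consistent with the fix reported above. A quick sanity check afterwards:
Code:
ls -l /dev/pve/vm-103-disk-0
pct start 103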
 
You can lock/close this thread if you want.

I'm not a moderator, so I cannot do that. You can do it by editing the first post and selecting the Solved prefix.