System hang on 'Found volume group....'

grefabu

Member
May 23, 2018
Hi,

I have a server which hangs during boot on: Found volume group "pve" using metadata type lvm2

What did I do beforehand? I removed an NFS store and rebooted.
I could boot it with an older kernel: 4.15.18-28.

But then there are no LVM stores.

I can connect to the web GUI and see the LVM storage listed, but with a '?'. During boot I briefly see something about LVs.
journalctl:
Jul 16 14:27:01 hgl-pve001 lvm[405]: 2 logical volume(s) in volume group "pve" monitored
Jul 16 14:27:04 hgl-pve001 systemd[1]: Created slice system-lvm2\x2dpvscan.slice.

But no further information after that.

What could I check next?
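One way to see where the boot stalls is to make it more verbose. A minimal sketch, assuming the default Proxmox GRUB setup where GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub contains "quiet":

```shell
# Make boot verbose: drop "quiet" so kernel/systemd messages stay on screen.
# One-off: at the GRUB menu press 'e' and delete "quiet" from the linux line.
# Persistent:
sed -i 's/\bquiet\b//' /etc/default/grub
update-grub
```

With "quiet" removed, the console should show which step the boot is actually sitting on instead of stopping silently after the volume group message.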

Bye

Gregor
 

grefabu
The package version info:

Code:
proxmox-ve: 6.2-1 (running kernel: 4.15.18-28-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-4.15: 5.4-17
pve-kernel-4.15.18-28-pve: 4.15.18-56
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 

grefabu
I booted the machine from a live CD. At first glance it looks quite good:

Code:
root@siduction:~# pvs
  PV         VG  Fmt  Attr PSize  PFree 
  /dev/sda3  pve lvm2 a--  14,55t 16,00g
root@siduction:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree 
  pve   1   8   0 wz--n- 14,55t 16,00g
root@siduction:~# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi---tz--  14,40t                                                    
  [data_tdata]    pve Twi-a-----  14,40t                                                    
  [data_tmeta]    pve ewi-ao----  16,00g                                                    
  [lvol0_pmspare] pve ewi-------  16,00g                                                    
  root            pve -wi-a-----  96,00g                                                    
  swap            pve -wi-a-----   8,00g                                                    
  vm-100-disk-0   pve Vwi---tz-- 120,00g data                                               
  vm-100-disk-1   pve Vwi---tz--  <3,91t data                                               
  vm-101-disk-0   pve Vwi---tz-- 120,00g data                                               
  vm-101-disk-1   pve Vwi---tz--  <3,91t data                                               
  vm-102-disk-0   pve Vwi---tz--  80,00g data                                              


root@siduction:~# vgdisplay  
  --- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 17
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 8
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 14,55 TiB
PE Size 4,00 MiB
Total PE 3815039
Alloc PE / Size 3810943 / <14,54 TiB
Free PE / Size 4096 / 16,00 GiB
VG UUID 4NNviB-iG5q-tflc-WmWh-DbpN-6i7U-z4fpbj

root@siduction:~# lvdisplay  
  --- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID DxBy9M-ujNT-TtfL-oBrn-f9N1-ELg6-WifVoe
LV Write Access read/write
LV Creation host, time proxmox, 2020-05-19 10:15:45 +0200
LV Status available
# open 0
LV Size 8,00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID QPFBx6-JmmM-tWXg-B4g5-WTpE-NMJ9-25fpCI
LV Write Access read/write
LV Creation host, time proxmox, 2020-05-19 10:15:45 +0200
LV Status available
# open 0
LV Size 96,00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID 1yqRI3-UI1L-TVud-NSwg-Ygib-uFRb-OC25tL
LV Write Access read/write
LV Creation host, time proxmox, 2020-05-19 10:15:46 +0200
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 6
LV Size 14,40 TiB
Allocated pool data 54,64%
Allocated metadata 25,11%
Current LE 3776127
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Path /dev/pve/vm-100-disk-0
LV Name vm-100-disk-0
VG Name pve
LV UUID VLNhwn-8Q9P-mXVU-Rhm8-5IRV-1ydr-kJnMUv
LV Write Access read/write
LV Creation host, time hgl-pve001, 2020-05-19 11:48:30 +0200
LV Pool name data
LV Status available
# open 0
LV Size 120,00 GiB
Mapped size 25,35%
Current LE 30720
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

--- Logical volume ---
LV Path /dev/pve/vm-101-disk-0
LV Name vm-101-disk-0
VG Name pve
LV UUID 2ZomPO-NGQd-2QK2-BArF-wZUL-xede-dieyjB
LV Write Access read/write
LV Creation host, time hgl-pve001, 2020-05-19 12:18:09 +0200
LV Pool name data
LV Status available
# open 0
LV Size 120,00 GiB
Mapped size 27,30%
Current LE 30720
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7

--- Logical volume ---
LV Path /dev/pve/vm-101-disk-1
LV Name vm-101-disk-1
VG Name pve
LV UUID g8rNa7-AIW8-MuBj-a5Tu-932C-nR1C-bQ9QMj
LV Write Access read/write
LV Creation host, time hgl-pve001, 2020-05-25 10:08:36 +0200
LV Pool name data
LV Status available
# open 0
LV Size <3,91 TiB
Mapped size 99,88%
Current LE 1024000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8

--- Logical volume ---
LV Path /dev/pve/vm-100-disk-1
LV Name vm-100-disk-1
VG Name pve
LV UUID PLapnm-nSk7-uR23-Svof-dDaq-BtP6-0rhkCu
LV Write Access read/write
LV Creation host, time hgl-pve001, 2020-05-25 10:09:00 +0200
LV Pool name data
LV Status available
# open 0
LV Size <3,91 TiB
Mapped size 99,88%
Current LE 1024000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9

--- Logical volume ---
LV Path /dev/pve/vm-102-disk-0
LV Name vm-102-disk-0
VG Name pve
LV UUID epJPiE-Jawa-RJMf-86cz-BDb9-UYnV-6zRASk
LV Write Access read/write
LV Creation host, time hgl-pve001, 2020-06-25 13:42:25 +0200
LV Pool name data
LV Status available
# open 0
LV Size 80,00 GiB
Mapped size 8,30%
Current LE 20480
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:10

Now the question: how do I get the host to boot again?
Oh, and access works so far:

Code:
root@siduction:~# mount /dev/pve/root /mnt/
root@siduction:~# ls -l /mnt/
insgesamt 108
drwxr-xr-x 2 root root 4096 Mai 20 16:07 bin
drwxr-xr-x 5 root root 4096 Jul 16 14:42 boot
drwxr-xr-x 5 root root 4096 Apr 10 2019 dev
drwxr-xr-x 114 root root 12288 Jul 16 14:49 etc
drwxr-xr-x 3 root root 4096 Mai 20 16:23 home
drwxr-xr-x 19 root root 4096 Mai 26 13:22 lib
drwxr-xr-x 2 root root 4096 Mai 19 11:23 lib64
drwx------ 2 root root 16384 Mai 19 10:15 lost+found
drwxr-xr-x 2 root root 4096 Apr 10 2019 media
drwxr-xr-x 3 root root 4096 Jun 17 14:47 mnt
drwxr-xr-x 2 root root 4096 Apr 10 2019 opt
drwxr-xr-x 2 root root 4096 Feb 3 2019 proc
drwx------ 16 root root 4096 Jul 16 14:39 root
drwxr-xr-x 16 root root 4096 Mai 19 10:19 run
drwxr-xr-x 2 root root 12288 Mai 26 13:22 sbin
drwxr-xr-x 2 root root 4096 Apr 10 2019 srv
drwxr-xr-x 2 root root 4096 Feb 3 2019 sys
drwxrwxrwt 7 root root 4096 Jul 16 15:11 tmp
drwxr-xr-x 10 root root 4096 Apr 10 2019 usr
drwxr-xr-x 11 root root 4096 Apr 10 2019 var
 

grefabu
Can the disk of a VM also be mounted directly?
VM 102 is a good one to test with:

Code:
root@siduction:~# mount /dev/pve/vm-102-disk-0 /mnt/
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--102--disk--0, missing codepage or helper program, or other error.
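That error is expected: the LV holds the VM's whole virtual disk, including its own partition table, so the LV itself has no filesystem to mount. A sketch using a partition-scanning loop device (the loop device name and partition number are examples; the guest filesystem must be one the live system can mount):

```shell
# Map the partitions inside the LV via a loop device.
loopdev=$(losetup --find --show --partscan /dev/pve/vm-102-disk-0)
lsblk "$loopdev"              # partitions appear as ${loopdev}p1, p2, ...
mount "${loopdev}p1" /mnt     # mount the first partition (adjust as needed)
# ... inspect or copy files ...
umount /mnt
losetup -d "$loopdev"
```

kpartx -av would achieve the same mapping if util-linux's losetup lacks --partscan.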
 

grefabu
Sorry for switching to German,...

An additional question: is it possible to write the data from /dev/pve/vm-102-disk-0 to an image with dd?
Fortunately the small partitions are the more interesting ones to rescue, so would a 'dd if=/dev/pve/vm-102-disk-0 of=/mnt/USB/image.img bs=...' work?
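It should: dd simply streams the block device into a file. A sketch, keeping the example target path from above (bs=4M is just a reasonable choice for large sequential reads):

```shell
# Dump the logical volume to an image file on the mounted USB disk.
# status=progress shows bytes copied; conv=sparse skips writing zero blocks,
# which matters here since the thin LV is mostly empty.
dd if=/dev/pve/vm-102-disk-0 of=/mnt/USB/image.img bs=4M status=progress conv=sparse
sync
```

The resulting raw image can then be loop-mounted or converted on another machine.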
 

grefabu
OK, time to breathe.

I got a tip on the mailing list to use dd, and it worked for me on the first test! I could copy the image to my desktop system, convert it to VDI, and start the system.
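For reference, the raw-to-VDI conversion can be sketched with qemu-img (assuming it is installed on the desktop; the file names are examples):

```shell
# Convert the raw dd image into a VirtualBox VDI.
qemu-img convert -f raw -O vdi image.img image.vdi
qemu-img info image.vdi   # sanity-check the result
```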

Now for the next step,...
 

grefabu
Surprise: the original system is running! And it seems the "error" is just an extremely long boot time.

After I started the system and it hung on the LVM message, it took 25 minutes to come online! I had simply never waited that long.

The time is spent roughly between the GRUB start and the first entry in the journalctl boot log.

Two things:

1. I've learned more about the system.

2. Why did it take so long, any ideas? Is there anything else I could do?
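To see where the minutes go, systemd can break a completed boot down per unit. A sketch, run after a normal boot:

```shell
# Overall kernel/userspace split, then the slowest units.
systemd-analyze time
systemd-analyze blame | head -n 20
```

Note the caveat: time spent in firmware, GRUB, and the initramfs (where the LVM activation happens) is not attributed to any unit here, so if the delay is before the first journal entry, watching the verbose console during boot is still the more direct tool.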
 
