Help!!! PVE after shutdown => boot, all LVM is lost

luckbyq

New Member
Oct 16, 2023
After shutting down and rebooting the host, all of the LVM volumes for my VMs are gone. pvscan and vgscan still see the PV and the volume group, though:

root@s0101:/dev/mapper# pvscan
PV /dev/sda3 VG pve lvm2 [<5.46 TiB / <16.38 GiB free]
Total: 1 [<5.46 TiB] / in use: 1 [<5.46 TiB] / in no VG: 0 [0 ]
root@s0101:/dev/mapper# vgscan
Found volume group "pve" using metadata type lvm2
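
So the PV and the VG itself are still intact. Before restoring anything, it can help to check what the current on-disk metadata still contains; a minimal check, using the VG name "pve" from the output above:

# list the LVs the current metadata still contains, including inactive/hidden ones
lvs -a pve
# show which LVs are ACTIVE vs inactive
lvscan

If the thin volumes are listed but merely inactive, activation (not a metadata restore) may be all that is needed. The archived metadata copies can be listed like this: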
root@s0101:/dev/mapper# vgcfgrestore --list pve

File: /etc/lvm/archive/pve_00108-2128938000.vg
VG name: pve
Description: Created *before* executing '/sbin/lvremove -f pve/vm-101101019-disk-1'
Backup Time: Tue Nov 8 11:53:53 2022


File: /etc/lvm/archive/pve_00109-1011007305.vg
VG name: pve
Description: Created *before* executing '/sbin/lvremove -f pve/vm-101101081-disk-0'
Backup Time: Tue Nov 8 13:18:34 2022


File: /etc/lvm/archive/pve_00110-108097887.vg
VG name: pve
Description: Created *before* executing '/sbin/lvcreate -aly -V 209715200k --name vm-101106011-disk-2 --thinpool pve/data'
Backup Time: Tue Nov 8 13:21:27 2022


File: /etc/lvm/archive/pve_00111-2101757462.vg
VG name: pve
Description: Created *before* executing '/sbin/lvremove -f pve/vm-101106011-disk-1'
Backup Time: Tue Nov 8 13:58:33 2022


File: /etc/lvm/archive/pve_00112-385144992.vg
VG name: pve
Description: Created *before* executing '/sbin/lvremove -f pve/vm-101109021-disk-1'
Backup Time: Tue Nov 8 14:39:13 2022


File: /etc/lvm/archive/pve_00113-1048682573.vg
VG name: pve
Description: Created *before* executing '/sbin/lvcreate -aly -V 209715200k --name vm-101106021-disk-2 --thinpool pve/data'
Backup Time: Tue Nov 8 14:42:25 2022


File: /etc/lvm/archive/pve_00114-578271404.vg
VG name: pve
Description: Created *before* executing '/sbin/lvremove -f pve/vm-101106021-disk-1'
Backup Time: Tue Nov 8 15:13:53 2022


File: /etc/lvm/archive/pve_00115-1019094574.vg
VG name: pve
Description: Created *before* executing '/sbin/lvremove -f pve/vm-101104051-disk-0'
Backup Time: Wed Nov 9 14:13:22 2022


File: /etc/lvm/archive/pve_00116-1852366100.vg
VG name: pve
Description: Created *before* executing '/sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count'
Backup Time: Mon Oct 16 12:05:40 2023


File: /etc/lvm/archive/pve_00117-511380659.vg
VG name: pve
Description: Created *before* executing '/sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count'
Backup Time: Mon Oct 16 12:05:40 2023


File: /etc/lvm/backup/pve
VG name: pve
Description: Created *after* executing '/sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count'
Backup Time: Mon Oct 16 12:05:40 2023
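
Note that vgcfgrestore with no file argument restores from the most recent backup (/etc/lvm/backup/pve above). To roll back to an older state instead, one of the archive files can be passed explicitly; a sketch, using the newest 2022 archive from the list as an example:

# dry run first: --test makes no change to the on-disk metadata
vgcfgrestore --test --file /etc/lvm/archive/pve_00115-1019094574.vg pve
# if the test output looks sane, run it for real
vgcfgrestore --file /etc/lvm/archive/pve_00115-1019094574.vg pve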





Then I tried to fix it:


root@s0101:/dev/mapper# vgcfgrestore pve --test
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
Volume group pve has active volume: data_tdata.
Volume group pve has active volume: data_tmeta.
Volume group pve has active volume: swap.
Volume group pve has active volume: root.
WARNING: Found 4 active volume(s) in volume group "pve".
Restoring VG with active LVs, may cause mismatch with its metadata.
Do you really want to proceed with restore of volume group "pve", while 4 volume(s) are active? [y/n]: y
Consider using option --force to restore Volume Group pve with thin volumes.
Restore failed.
root@s0101:/dev/mapper# vgcfgrestore pve --test --force
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
Volume group pve has active volume: data_tdata.
Volume group pve has active volume: data_tmeta.
Volume group pve has active volume: swap.
Volume group pve has active volume: root.
WARNING: Found 4 active volume(s) in volume group "pve".
Restoring VG with active LVs, may cause mismatch with its metadata.
Do you really want to proceed with restore of volume group "pve", while 4 volume(s) are active? [y/n]: y
WARNING: Forced restore of Volume Group pve with thin volumes.
Restored volume group pve.
root@s0101:/dev/mapper# vgcfgrestore pve --force
Volume group pve has active volume: data_tdata.
Volume group pve has active volume: data_tmeta.
Volume group pve has active volume: swap.
Volume group pve has active volume: root.
WARNING: Found 4 active volume(s) in volume group "pve".
Restoring VG with active LVs, may cause mismatch with its metadata.
Do you really want to proceed with restore of volume group "pve", while 4 volume(s) are active? [y/n]: y
WARNING: Forced restore of Volume Group pve with thin volumes.
Restored volume group pve.
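
Restoring the metadata alone does not activate the restored LVs, and a thin pool whose metadata got out of sync may additionally need a repair pass. A hedged sketch of the follow-up steps (these match what ended up working below):

# activate the thin pool
lvchange -ay pve/data
# if activation fails with a thin pool metadata/transaction-id error, repair the pool
lvconvert --repair pve/data
# then activate every LV in the VG
vgchange -ay pve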

However, fdisk -l still does not show the missing LVM volumes:
 
root@s0101:/dev/mapper# fdisk -l
Disk /dev/sda: 5.46 TiB, 6000069312512 bytes, 11718885376 sectors
Disk model: PERC H710P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E4DE2D91-B3A7-4C47-9689-559CB13E12CB

Device        Start         End     Sectors  Size Type
/dev/sda1      2048     1050619     1048572  512M EFI System
/dev/sda2   1050624 11718885342 11717834719  5.5T Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
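
Note that fdisk only shows whole disks plus the device-mapper nodes of LVs that are currently active, so its output above only means the thin volumes are not activated; it says nothing about whether they still exist. LVM's own tools give the full picture:

# every LV in the VG, active or not (the attr column shows the activation state)
lvs -a -o lv_name,lv_attr,lv_size pve
# active LVs also show up as device-mapper nodes
ls -l /dev/mapper/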
 
I executed "vgcfgrestore pve --force", "lvchange -ay pve/data", and "lvconvert --repair pve/data", then restarted the server, and now everything is repaired. Thank you!!
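
For anyone hitting the same problem: after a repair like this it is worth verifying the storage before starting any guests. A minimal check on a standard PVE setup:

# confirm the thin pool and the vm-*-disk-* volumes are back and active
lvs pve
# confirm Proxmox itself sees the storage as available
pvesm status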
 
