BIOS reset & thin pool stopped working

Daniel P. Clark

New Member
Jan 5, 2018
After a BIOS reset I'm not able to use any of the virtual machine drives from the thin pool. I've tried following the repair instructions:

Code:
lvconvert --repair pve/data

But I get the error:

Code:
Only inactive pool can be repaired.

I need to figure out how to make all of them inactive to do the repair. Help please.

Here's the layout:

Code:
lvscan
  ACTIVE            '/dev/pve/swap' [7.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [37.00 GiB] inherit
  ACTIVE            '/dev/pve/data' [1.13 TiB] inherit
  ACTIVE            '/dev/pve/vm-100-disk-1' [8.00 GiB] inherit
  inactive          '/dev/pve/snap_vm-100-disk-1_Ideal_Start' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-101-disk-1' [18.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-102-disk-2' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/vm-102-disk-1' [32.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-103-disk-1' [12.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-103-disk-2' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/vm-104-disk-1' [12.00 GiB] inherit
  inactive          '/dev/pve/snap_vm-102-disk-1_OriginalInstall' [32.00 GiB] inherit
  inactive          '/dev/pve/snap_vm-102-disk-2_OriginalInstall' [4.00 MiB] inherit
  ACTIVE            '/dev/pve/vm-105-disk-1' [40.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-106-disk-1' [100.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-107-disk-1' [16.00 GiB] inherit
  ACTIVE            '/dev/pve/vm-108-disk-1' [8.00 GiB] inherit

Code:
lvmdiskscan
  /dev/sda2 [     256.00 MiB]
  /dev/sda3 [     148.80 GiB] LVM physical volume
  /dev/sdb1 [     465.76 GiB] LVM physical volume
  /dev/sdc1 [     119.24 GiB] LVM physical volume
  /dev/sdd1 [     232.83 GiB] LVM physical volume
  /dev/sde1 [     232.83 GiB] LVM physical volume
  0 disks
  1 partition
  0 LVM physical volume whole disks
  5 LVM physical volumes
 
I was able to make each volume inactive with:

Code:
lvchange -a n pve/data
lvchange -a n pve/vm-100-disk-1
lvchange -a n pve/vm-101-disk-1
lvchange -a n pve/vm-102-disk-1
lvchange -a n pve/vm-102-disk-2
lvchange -a n pve/vm-103-disk-1
lvchange -a n pve/vm-103-disk-2
lvchange -a n pve/vm-104-disk-1
lvchange -a n pve/vm-105-disk-1
lvchange -a n pve/vm-106-disk-1
lvchange -a n pve/vm-107-disk-1
lvchange -a n pve/vm-108-disk-1
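
The same thing should be doable in one go with a loop (an untested sketch; it assumes the lvs on this system supports -S/--select and that none of the thin volumes are in use):

Code:
# deactivate every thin volume backed by the data pool, then the pool itself
for lv in $(lvs --noheadings -o lv_name -S 'pool_lv=data' pve); do
  lvchange -an "pve/$lv"
done
lvchange -an pve/data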

And then I was able to run:

Code:
lvconvert --repair pve/data
  Using default stripesize 64.00 KiB.
  WARNING: recovery of pools without pool metadata spare LV is not automated.
  WARNING: If everything works, remove pve/data_meta0 volume.
  WARNING: Use pvmove command to move pve/data_tmeta on the best fitting PV.

But after reactivating them with `lvchange -a y`, the VM disks still don't work.
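
In case it helps with debugging: the repair leaves the old, pre-repair metadata behind as pve/data_meta0, and that can be checked directly with thin_check from thin-provisioning-tools (a sketch, assuming that package is installed):

Code:
# activate the old metadata LV that the repair set aside, then check it
lvchange -ay pve/data_meta0
thin_check /dev/pve/data_meta0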
 
My theory is that the hard drives' UUIDs all changed after the BIOS reset, and maybe even their order in the BIOS. So even though Proxmox has found all of the drives and partitions, it may not know where things belong. Just my guess.
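
One way to test that guess: LVM identifies physical volumes by the UUID stored in the on-disk label rather than by /dev names, so a change in drive order alone shouldn't confuse it. Comparing the live PV UUIDs against the ones recorded in the metadata backup should confirm or rule this out (a sketch; the backup path assumes the stock /etc/lvm layout):

Code:
# list each PV with its UUID and the VG it belongs to
pvs -o pv_name,pv_uuid,vg_name
# compare against the UUIDs recorded in the automatic metadata backup
grep -B1 'device =' /etc/lvm/backup/pve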

Code:
lsblk -f
NAME                         FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
├─sda1
├─sda2                       vfat              B226-9259
└─sda3                       LVM2_member       heYtOl-fGt9-8RKa-53sF-USkf-To8Z-CU5Yst
  ├─pve-swap                 swap              ec923c31-9d4f-4761-8ffb-80dc46e4046c   [SWAP]
  ├─pve-root                 ext4              59add7dc-b612-4754-9f4e-b7272cb159f0   /
  ├─pve-data_meta0
  ├─pve-data_tmeta
  │ └─pve-data-tpool
  │   ├─pve-data
  │   ├─pve-vm--100--disk--1 ext4              92c23798-c46c-40df-bc9c-932fb350b017
  │   ├─pve-vm--101--disk--1 ext4              10895461-ec23-4124-b848-50c6b28a41c1
  │   ├─pve-vm--102--disk--2
  │   ├─pve-vm--102--disk--1
  │   ├─pve-vm--103--disk--1
  │   ├─pve-vm--103--disk--2
  │   ├─pve-vm--104--disk--1
  │   ├─pve-vm--105--disk--1
  │   ├─pve-vm--106--disk--1
  │   ├─pve-vm--107--disk--1
  │   └─pve-vm--108--disk--1 ext4              a998dad4-ea9f-4097-a38e-addf8de4b3a3
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--100--disk--1 ext4              92c23798-c46c-40df-bc9c-932fb350b017
      ├─pve-vm--101--disk--1 ext4              10895461-ec23-4124-b848-50c6b28a41c1
      ├─pve-vm--102--disk--2
      ├─pve-vm--102--disk--1
      ├─pve-vm--103--disk--1
      ├─pve-vm--103--disk--2
      ├─pve-vm--104--disk--1
      ├─pve-vm--105--disk--1
      ├─pve-vm--106--disk--1
      ├─pve-vm--107--disk--1
      └─pve-vm--108--disk--1 ext4              a998dad4-ea9f-4097-a38e-addf8de4b3a3
sdb
└─sdb1                       LVM2_member       ea8l7N-Ilyd-kaVa-Ntss-iGc4-VbNm-ubt4zh
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--100--disk--1 ext4              92c23798-c46c-40df-bc9c-932fb350b017
      ├─pve-vm--101--disk--1 ext4              10895461-ec23-4124-b848-50c6b28a41c1
      ├─pve-vm--102--disk--2
      ├─pve-vm--102--disk--1
      ├─pve-vm--103--disk--1
      ├─pve-vm--103--disk--2
      ├─pve-vm--104--disk--1
      ├─pve-vm--105--disk--1
      ├─pve-vm--106--disk--1
      ├─pve-vm--107--disk--1
      └─pve-vm--108--disk--1 ext4              a998dad4-ea9f-4097-a38e-addf8de4b3a3
sdc
└─sdc1                       LVM2_member       NDLEyf-P3Mr-6fAd-wbJP-2IBZ-8L2R-1AKXMu
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--100--disk--1 ext4              92c23798-c46c-40df-bc9c-932fb350b017
      ├─pve-vm--101--disk--1 ext4              10895461-ec23-4124-b848-50c6b28a41c1
      ├─pve-vm--102--disk--2
      ├─pve-vm--102--disk--1
      ├─pve-vm--103--disk--1
      ├─pve-vm--103--disk--2
      ├─pve-vm--104--disk--1
      ├─pve-vm--105--disk--1
      ├─pve-vm--106--disk--1
      ├─pve-vm--107--disk--1
      └─pve-vm--108--disk--1 ext4              a998dad4-ea9f-4097-a38e-addf8de4b3a3
sdd
└─sdd1                       LVM2_member       8qHuJS-KTWp-iirz-wJDy-GpSx-uVbu-cIHTBB
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--100--disk--1 ext4              92c23798-c46c-40df-bc9c-932fb350b017
      ├─pve-vm--101--disk--1 ext4              10895461-ec23-4124-b848-50c6b28a41c1
      ├─pve-vm--102--disk--2
      ├─pve-vm--102--disk--1
      ├─pve-vm--103--disk--1
      ├─pve-vm--103--disk--2
      ├─pve-vm--104--disk--1
      ├─pve-vm--105--disk--1
      ├─pve-vm--106--disk--1
      ├─pve-vm--107--disk--1
      └─pve-vm--108--disk--1 ext4              a998dad4-ea9f-4097-a38e-addf8de4b3a3
sde
└─sde1                       LVM2_member       79hwpV-zOZq-LaPy-GGIZ-HZFu-0u1d-kAUrkS
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--100--disk--1 ext4              92c23798-c46c-40df-bc9c-932fb350b017
      ├─pve-vm--101--disk--1 ext4              10895461-ec23-4124-b848-50c6b28a41c1
      ├─pve-vm--102--disk--2
      ├─pve-vm--102--disk--1
      ├─pve-vm--103--disk--1
      ├─pve-vm--103--disk--2
      ├─pve-vm--104--disk--1
      ├─pve-vm--105--disk--1
      ├─pve-vm--106--disk--1
      ├─pve-vm--107--disk--1
      └─pve-vm--108--disk--1 ext4              a998dad4-ea9f-4097-a38e-addf8de4b3a3

Code:
lvs -v
  LV                                       VG  #Seg Attr       LSize   Maj Min KMaj KMin Pool Origin        Data%  Meta%  Move Cpy%Sync Log Convert LV UUID                           LProfile
  data                                     pve    1 twi-aotz--   1.13t  -1  -1  253    5                    10.80  83.50                            vOWwiw-UnJY-kHIE-N6ma-CuFa-RsOg-dkXlQE
  data_meta0                               pve    1 -wi-a-----  92.00m  -1  -1  253    2                                                            5qdiM7-63Gb-i7S6-jRA1-MzU6-JWKT-9wRBmt
  root                                     pve    1 -wi-ao----  37.00g  -1  -1  253    1                                                            fEkwzd-jWyA-FV3h-c73R-Aeki-uKrM-10SeBO
  vm-100-disk-1                            pve    1 Vwi-aotz--   8.00g  -1  -1  253    7 data               93.68                                   B82QrS-x2ee-5qxS-PWbZ-sWBk-Qy7E-6OT8WC
  vm-101-disk-1                            pve    1 Vwi-aotz--  18.00g  -1  -1  253    8 data               29.46                                   U1daa0-KnkC-4zlb-su5P-CWbj-eCrV-j5o9os
  vm-102-disk-1                            pve    1 Vwi-a-tz--  32.00g  -1  -1  253   10 data               98.93                                   Jpka02-xoma-6jdo-MyYo-MoE3-9qxC-99Xynk
  vm-102-disk-2                            pve    1 Vwi-a-tz--   4.00m  -1  -1  253    9 data               3.12                                    LiHSCx-EEd5-zro0-Vs9F-i0y8-Q6Hy-8LX5yN
  vm-103-disk-1                            pve    1 Vwi-a-tz--  12.00g  -1  -1  253   11 data               81.10                                   TaF5Eo-nxdh-rKCW-EbmD-bgXQ-HaFM-3YWPqs
  vm-103-disk-2                            pve    1 Vwi-a-tz--   4.00m  -1  -1  253   12 data               3.12                                    NpHy2E-8UCB-5czO-PTbm-L2QI-R70S-8YJvyO
  vm-104-disk-1                            pve    1 Vwi-a-tz--  12.00g  -1  -1  253   13 data               93.99                                   42YhpD-kvbp-thTQ-z7Jm-gUPw-8T82-bAyZTa
  vm-105-disk-1                            pve    1 Vwi-a-tz--  40.00g  -1  -1  253   14 data               0.00                                    sQB1v7-30R5-FXWR-L03Y-HO3o-JZpG-KZbHkU
  vm-106-disk-1                            pve    1 Vwi-a-tz-- 100.00g  -1  -1  253   15 data               42.13                                   A4J0bu-bwvv-hRBL-Pha2-SI6k-5L60-7EQqBq
  vm-107-disk-1                            pve    1 Vwi-a-tz--  16.00g  -1  -1  253   16 data               93.13                                   xPHM6Z-KiCF-hgac-gyr1-DnkA-JT49-Z8vpsS
  vm-108-disk-1                            pve    1 Vwi-a-tz--   8.00g  -1  -1  253   17 data               9.60                                    29v59U-rDw3-vZqV-RIZY-Z62R-4luj-KIKCzM
 
Daniel, could you elaborate on the BIOS setting? Are you talking about the PVE host? I ran into the same issue: filled up the metadata pool, did the repair, and right now I can activate pve/data but not any of the thin-provisioned volumes:

Code:
lvchange -ay pve/vm-100-disk-5
  device-mapper: reload ioctl on  failed: No data available

And I would love to get my VMs back up.
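
From what I've read, that "No data available" error on thin activation can come from a transaction_id mismatch between the pool's on-disk metadata and what the LVM metadata expects after a repair. This is what I'm planning to check first (a sketch, assuming lvs here supports the transaction_id report field):

Code:
# show the transaction id LVM expects for the pool and thin volumes
lvs -a -o lv_name,transaction_id pve
# the kernel log usually names the expected vs. actual id on the failed activation
dmesg | tail -n 20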