LVM pool inactive on cluster

Cedroid

Member
Jul 19, 2019
After updating Proxmox VE from 5.3 to 5.4, I'm no longer able to use any of the virtual machine disks from the LVM-thin pools on my DELL MD3400 and MD1220 arrays. When I try to start a VM from one of these pools I get this error message:
storage 'lvm-thin-disk-migration' does not exists !
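
(This error message seems to come from the Proxmox storage layer rather than from LVM itself, so it is also worth double-checking that a storage entry with exactly that ID is still present in the storage configuration. A minimal sketch, assuming the standard PVE locations for pvesm and /etc/pve/storage.cfg:)

# list all configured storages and their status as Proxmox sees them
pvesm status

# show the definition (vgname / thinpool) of the affected storage entry
grep -B1 -A5 'lvm-thin-disk-migration' /etc/pve/storage.cfg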

I've run the following commands to collect the error messages:
---------------------------------------------------------------------------------------------------------------------
root@pve1:~# lvscan

WARNING: Not using lvmetad because config setting use_lvmetad=0.
WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
ACTIVE '/dev/pve/root' [96.00 GiB] inherit
ACTIVE '/dev/pve/data' [429.11 GiB] inherit
ACTIVE '/dev/pve/vm-104-disk-0' [32.00 GiB] inherit
inactive '/dev/lvm-thin-disk-migration/lvm-thin-disk-migration' [6.99 TiB] inherit
inactive '/dev/lvm-thin-disk-migration/vm-106-disk-0' [500.00 GiB] inherit
inactive '/dev/lvm-thin-disk-migration/vm-106-disk-1' [300.00 GiB] inherit
inactive '/dev/lvm-thin-disk-migration/vm-108-disk-0' [200.00 GiB] inherit
inactive '/dev/lvm-thin-disk-migration/vm-108-disk-1' [200.00 GiB] inherit
inactive '/dev/lvm-thin-disk-migration/vm-108-disk-2' [100.00 GiB] inherit
inactive '/dev/lvm-thin-disk-migration/vm-111-disk-0' [300.00 GiB] inherit
inactive '/dev/CGU_7.2K_ThinLvm/CGU_7.2K_ThinLvm' [3.91 TiB] inherit
inactive '/dev/CGU_10K_ThinLvm/CGU_10K_ThinLvm' [299.85 GiB] inherit

ACTIVE '/dev/FCU_10k/vm-107-disk-0' [32.00 GiB] inherit
ACTIVE '/dev/FCU_10k/vm-109-disk-0' [10.00 GiB] inherit
ACTIVE '/dev/lvn-thin-test/lvn-thin-test' [49.89 GiB] inherit

-----------------------------------------------------------------------------
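(The two WARNING lines at the top are only about lvmetad being disabled; as the warning itself suggests, a device rescan would look like this, as a sketch:)

# rescan devices so LVM sees the current state, as the warning recommends
pvscan --cache
vgscan
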

[screenshot attached: Capture d’écran du 2019-10-25 13-47-40.png]


root@pve1:~# systemctl status lvm2-activation.service

● lvm2-activation.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2019-10-25 10:52:54 CEST; 18s ago
Docs: man:lvm2-activation-generator(8)
Process: 13710 ExecStart=/sbin/lvm vgchange -aay --ignoreskippedcluster (code=exited, status=5)
Main PID: 13710 (code=exited, status=5)
CPU: 150ms

Oct 25 10:52:54 pve1 lvm[13710]: Check of pool CGU_7.2K_ThinLvm/CGU_7.2K_ThinLvm failed (status:1). Manual repair required!
Oct 25 10:52:54 pve1 lvm[13710]: 0 logical volume(s) in volume group "CGU_7.2K_ThinLvm" now active
Oct 25 10:52:54 pve1 lvm[13710]: Check of pool CGU_10K_ThinLvm/CGU_10K_ThinLvm failed (status:1). Manual repair required!
Oct 25 10:52:54 pve1 lvm[13710]: 0 logical volume(s) in volume group "CGU_10K_ThinLvm" now active
Oct 25 10:52:54 pve1 lvm[13710]: 2 logical volume(s) in volume group "FCU_10k" now active
Oct 25 10:52:54 pve1 lvm[13710]: 1 logical volume(s) in volume group "lvn-thin-test" now active
Oct 25 10:52:54 pve1 systemd[1]: lvm2-activation.service: Main process exited, code=exited, status=5/NOTINSTALLED
Oct 25 10:52:54 pve1 systemd[1]: Failed to start Activation of LVM2 logical volumes.
Oct 25 10:52:54 pve1 systemd[1]: lvm2-activation.service: Unit entered failed state.
Oct 25 10:52:54 pve1 systemd[1]: lvm2-activation.service: Failed with result 'exit-code'.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
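The important part seems to be "Check of pool ... failed (status:1). Manual repair required!". As far as I understand, the usual way to attempt this is lvconvert --repair on the affected thin pool. A sketch of that flow, with the pool names taken from the output above (I have not run this yet, and it needs some free space in the VG for the repaired metadata LV):

# deactivate the broken pool first
lvchange -an lvm-thin-disk-migration/lvm-thin-disk-migration

# keep a backup of the VG (text) metadata before touching anything
vgcfgbackup lvm-thin-disk-migration

# let LVM run thin_repair and swap in repaired pool metadata
lvconvert --repair lvm-thin-disk-migration/lvm-thin-disk-migration

# try to activate the pool and its thin volumes again
vgchange -ay lvm-thin-disk-migration

The rest of the output below is from further poking around on both nodes.
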
root@pve1:~# systemctl list-units --state=failed

UNIT LOAD ACTIVE SUB DESCRIPTION
● lvm2-activation-early.service loaded failed failed Activation of LVM2 logical volumes
● lvm2-activation-net.service loaded failed failed Activation of LVM2 logical volumes
● lvm2-activation.service loaded failed failed Activation of LVM2 logical volumes

LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.

3 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
--------------------------------------------------------------------------------------------------------------------------
root@pve1:~# systemctl status --full lvm2-activation.service

● lvm2-activation.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-10-28 13:58:29 CET; 1h 5min ago
Docs: man:lvm2-activation-generator(8)
Main PID: 2281 (code=exited, status=5)
CPU: 166ms

Oct 28 13:58:29 pve1 lvm[2281]: Check of pool CGU_7.2K_ThinLvm/CGU_7.2K_ThinLvm failed (status:1). Manu
Oct 28 13:58:29 pve1 lvm[2281]: 0 logical volume(s) in volume group "CGU_7.2K_ThinLvm" now active
Oct 28 13:58:29 pve1 lvm[2281]: Check of pool CGU_10K_ThinLvm/CGU_10K_ThinLvm failed (status:1). Manual
Oct 28 13:58:29 pve1 lvm[2281]: 0 logical volume(s) in volume group "CGU_10K_ThinLvm" now active
Oct 28 13:58:29 pve1 lvm[2281]: 2 logical volume(s) in volume group "FCU_10k" now active
Oct 28 13:58:29 pve1 lvm[2281]: 1 logical volume(s) in volume group "lvn-thin-test" now active
Oct 28 13:58:29 pve1 systemd[1]: lvm2-activation.service: Main process exited, code=exited, status=5/NOTI
Oct 28 13:58:29 pve1 systemd[1]: Failed to start Activation of LVM2 logical volumes.
Oct 28 13:58:29 pve1 systemd[1]: lvm2-activation.service: Unit entered failed state.
Oct 28 13:58:29 pve1 systemd[1]: lvm2-activation.service: Failed with result 'exit-code'.
--------------------------------------------------------------------------------------------------------
root@pve1:~# systemctl status --full lvm2-activation-early.service

● lvm2-activation-early.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-10-28 14:43:56 CET; 22min ago
Docs: man:lvm2-activation-generator(8)
Main PID: 3060 (code=exited, status=5)
CPU: 145ms

Oct 28 14:43:55 pve1 lvm[3060]: 4 logical volume(s) in volume group "pve" now active
Oct 28 14:43:55 pve1 lvm[3060]: Check of pool lvm-thin-disk-migration/lvm-thin-disk-migration failed (s
Oct 28 14:43:56 pve1 lvm[3060]: 0 logical volume(s) in volume group "lvm-thin-disk-migration" now activ
Oct 28 14:43:56 pve1 lvm[3060]: 2 logical volume(s) in volume group "FCU_10k" now active
Oct 28 14:43:56 pve1 lvm[3060]: Check of pool CGU_10K_ThinLvm/CGU_10K_ThinLvm failed (status:1). Manual
Oct 28 14:43:56 pve1 lvm[3060]: 0 logical volume(s) in volume group "CGU_10K_ThinLvm" now active
Oct 28 14:43:56 pve1 systemd[1]: lvm2-activation-early.service: Main process exited, code=exited, status=
Oct 28 14:43:56 pve1 systemd[1]: Failed to start Activation of LVM2 logical volumes.
Oct 28 14:43:56 pve1 systemd[1]: lvm2-activation-early.service: Unit entered failed state.
Oct 28 14:43:56 pve1 systemd[1]: lvm2-activation-early.service: Failed with result 'exit-code'.
--------------------------------------------------------------------------
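(Side note: as far as I know, the "status:1" in these messages is the exit code of thin_check running against the pool metadata, so it may be worth ruling out a tooling problem before assuming real corruption. A quick sketch:)

# thin_check is provided by this package on Debian/PVE
dpkg -l thin-provisioning-tools

# a custom checker path would be configured here, if any
grep thin_check_executable /etc/lvm/lvm.conf
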
root@pve2:~# lvdisplay /dev/lvm-thin-Disks-migration/lvm-thin-disk-migration

--- Logical volume ---
LV Name lvm-thin-disk-migration
VG Name lvm-thin-Disks-migration
LV UUID IzfLZs-MJAB-mhUO-wbG2-Nvev-f5KE-3brrzj
LV Write Access read/write
LV Creation host, time pve1, 2019-10-22 14:53:26 +0200
LV Pool metadata lvm-thin-disk-migration_tmeta
LV Pool data lvm-thin-disk-migration_tdata
LV Status NOT available
LV Size 6.99 TiB
Current LE 1833356
Segments 1
Allocation inherit
Read ahead sectors auto
-----------------------------------------------------------------------------------------------------------------------------------------------
root@pve2:~# vgchange -a y

5 logical volume(s) in volume group "pve" now active
2 logical volume(s) in volume group "LVM-thin-recovert-test" now active
Check of pool lvm-thin-Disks-migration/lvm-thin-disk-migration failed (status:1). Manual repair required!
0 logical volume(s) in volume group "lvm-thin-Disks-migration" now active
Check of pool CGU_7.2K_ThinLvm/CGU_7.2K_ThinLvm failed (status:1). Manual repair required!
0 logical volume(s) in volume group "CGU_7.2K_ThinLvm" now active
Check of pool CGU_10K_ThinLvm/CGU_10K_ThinLvm failed (status:1). Manual repair required!
0 logical volume(s) in volume group "CGU_10K_ThinLvm" now active
1 logical volume(s) in volume group "LVM-thin-10K_new_migration" now active
4 logical volume(s) in volume group "FCU_10k" now active
device-mapper: reload ioctl on (253:25) failed: No data available
1 logical volume(s) in volume group "lvn-thin-test" now active
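
(The "device-mapper: reload ioctl ... failed: No data available" line probably has more detail on the kernel side; a sketch of where to look:)

# kernel messages from device-mapper / thin provisioning around the failed activation
dmesg | grep -i -e 'device-mapper' -e 'thin' | tail -n 50

# boot-time log of the activation units that failed
journalctl -b -u lvm2-activation.service -u lvm2-activation-early.service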


Maybe somebody could help me with this!
Thanks!
 
Were you able to resolve the issue?

Check of pool lvm-thin-Disks-migration/lvm-thin-disk-migration failed (status:1). Manual repair required!
Did the pool run full (vgs, lvs)? Anything in the logs?
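For example, something along these lines (the Data% and Meta% columns are the interesting ones for a thin pool):

# free space per volume group
vgs

# pool data and metadata usage, including the hidden _tmeta/_tdata volumes
lvs -a -o +data_percent,metadata_percent

# messages from the current boot around LVM / thin activation
journalctl -b | grep -iE 'lvm|thin'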
 
