local-LVM not available after Kernel update on PVE 7

fiona

Proxmox Staff Member
Aug 1, 2019
@ShEV even though your setup is a bit different, the place where pvscan supposedly crashes is the same (right after executing the metadata check), and there is also a second LVM command running at the same time. I suggest reporting the issue to Debian or Red Hat.

Here is a preliminary patch that would serve as a workaround in Proxmox VE (not reviewed yet!), but of course the underlying issue with autoactivation failing is not solved by that.
 
Jul 27, 2020
I don't think I mentioned this in my first post, as it had only happened once at that point, but it happens every time now. When I run the "lvchange -ay pve/data" command, I get "Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active" and/or "Activation of logical volume pve/data is prohibited while logical volume pve/data_tdata is active".

I have to deactivate pve/data_tmeta and/or pve/data_tdata before I can activate pve/data (i.e. run lvchange -an pve/data_tmeta and lvchange -an pve/data_tdata before lvchange -ay pve/data).
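For reference, the full sequence looks like this (a sketch based on the commands above; adjust the VG/LV names to your setup):
Code:
# deactivate the thin pool's hidden metadata and data LVs first
lvchange -an pve/data_tmeta
lvchange -an pve/data_tdata
# then activate the pool itself (this runs thin_check, so it can take a while)
lvchange -ay pve/data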

Don't know if this information will help solve this problem.
 

Fidor

New Member
Nov 9, 2021
A quick workaround:

Create a new systemd service:
Bash:
root@c30:~# cat /etc/systemd/system/lvm-fix.service
[Unit]
Description=Activate all VG volumes (fix)
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/vgchange -ay

[Install]
WantedBy=multi-user.target
Enable it:
Bash:
systemctl daemon-reload
systemctl enable lvm-fix.service

Reboot the server
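After the reboot you can check that the unit ran and that the pool came up (a quick verification sketch, assuming the service name above):
Code:
# check that the service ran without errors
systemctl status lvm-fix.service
# the fifth character of the Attr column should be 'a' (active)
lvs -a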
 

fiona

Proxmox Staff Member
Aug 1, 2019
I created an upstream bug report for the issue. As I wasn't able to reproduce the issue myself, it would be great if you could subscribe to the issue and provide the requested information to the LVM developers.
 

fiona

Proxmox Staff Member
Aug 1, 2019
It's not an LVM bug, but should rather be considered a bug in Proxmox VE's (and likely Debian's) init configuration/handling. What (likely) happens is that the thin_check during activation takes too long and pvscan is killed (see here for more information).

Another workaround besides the one suggested by @Fidor should be setting
Code:
thin_check_options = [ "-q", "--skip-mappings" ]
in your /etc/lvm/lvm.conf and running update-initramfs -u afterwards.
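The option belongs in the global section of /etc/lvm/lvm.conf; a minimal sketch of the relevant excerpt:
Code:
# /etc/lvm/lvm.conf (excerpt)
global {
    # skip the expensive mapping checks so thin_check finishes before pvscan is killed
    thin_check_options = [ "-q", "--skip-mappings" ]
}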

EDIT2: Upstream bug report in Debian

EDIT: The workaround from @Fidor doesn't seem to work when the partial LVs are active:
Code:
Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active.
It would require deactivating XYZ_tmeta and XYZ_tdata first.
 

ShEV

New Member
Oct 25, 2021
fiona said:
It's not an LVM bug, but should rather be considered a bug in Proxmox VE's (and likely Debian's) init configuration/handling. What (likely) happens is that the thin_check during activation takes too long and pvscan is killed (see here for more information).
Yes, I am following this discussion.

fiona said:
Another workaround besides the one suggested by @Fidor should be setting...
I will try to check soon.
 

L1243

Member
Sep 28, 2020
fiona said:
It's not an LVM bug, but should rather be considered a bug in Proxmox VE's (and likely Debian's) init configuration/handling. What (likely) happens is that the thin_check during activation takes too long and pvscan is killed (see here for more information).

Another workaround besides the one suggested by @Fidor should be setting
Code:
thin_check_options = [ "-q", "--skip-mappings" ]
in your /etc/lvm/lvm.conf and running update-initramfs -u afterwards.


EDIT: The workaround from @Fidor doesn't seem to work when the partial LVs are active:
Code:
Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active.
It would require deactivating XYZ_tmeta and XYZ_tdata first.
Do I have to add it to a special section of lvm.conf?
 

L1243

Member
Sep 28, 2020
I have now updated to kernel 5.13.19-2-pve.
Since then, neither workaround works anymore.

Does anyone have an idea?

The following error appears even when I use one of the workarounds:
Activation of logical volume vm/vm is prohibited while logical volume vm/vm_tmeta is active.

SOLVED: By running these commands manually after a reboot

Code:
lvchange -an vm/vm_tmeta
lvchange -an vm/vm_tdata
lvchange -ay vm/vm    # can take some time to run
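To confirm the pool actually came up afterwards, something like this should do (a sketch; VG name as in the error message above):
Code:
# the pool LV should now show 'a' (active) in the fifth Attr character
lvs -a vm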
 

solarsparq

New Member
Feb 9, 2022
fiona said:
It's not an LVM bug, but should rather be considered a bug in Proxmox VE's (and likely Debian's) init configuration/handling. What (likely) happens is that the thin_check during activation takes too long and pvscan is killed (see here for more information).

Another workaround besides the one suggested by @Fidor should be setting
Code:
thin_check_options = [ "-q", "--skip-mappings" ]
in your /etc/lvm/lvm.conf and running update-initramfs -u afterwards.

EDIT2: Upstream bug report in Debian

EDIT: The workaround from @Fidor doesn't seem to work when the partial LVs are active:
Code:
Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active.
It would require deactivating XYZ_tmeta and XYZ_tdata first.

I have an HP DL360p Gen8 with 1x Samsung 860 EVO, 1x Seagate FireCuda, 1x Seagate IronWolf, and 1x WD Red that was showing this symptom. The IronWolf HDD is the one it hung up on every single time. I am running a fresh install of Proxmox 7.1-7 with new disks. I was on kernel 5.13.x before, but I opted in to kernel 5.15. The lvm.conf edit worked perfectly in my scenario; lv.ironwolf.hdd no longer hangs and fails. Running Linux 5.15.19-1-pve now. Thanks for the hypervisor and contributions, everyone. Likely an HP-only problem, based on my forum research on this topic.

Code:
Feb  9 14:23:01 creeperhost002 pvestatd[1641]: activating LV 'lv.ironwolf.hdd/lv.ironwolf.hdd' failed:   Activation of logical volume lv.ironwolf.hdd/lv.ironwolf.hdd is prohibited while logical volume lv.ironwolf.hdd/lv.ironwolf.hdd_tmeta is active.
Feb  9 14:23:02 creeperhost002 pve-guests[1773]: activating LV 'lv.ironwolf.hdd/lv.ironwolf.hdd' failed:   Activation of logical volume lv.ironwolf.hdd/lv.ironwolf.hdd is prohibited while logical volume lv.ironwolf.hdd/lv.ironwolf.hdd_tmeta is active.
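If you want to double-check the fix from the logs after a reboot, something along these lines should work (a sketch; the grep pattern matches the error text above):
Code:
# no output means the activation failure is gone for the current boot
journalctl -b | grep -i "prohibited while logical volume"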
 

ledufakademy

Member
Sep 10, 2021
Just seeing GRUB boot with Proxmox VE,
then initramfs... and after that:
a blinking cursor in the top left corner. What are you doing with Proxmox VE?
When I finally get to the web GUI, I can't see my LVM storage. Does that mean Proxmox killed all our data?

GREAT !!!

activating LV 'RAID6-14TB/RAID6-14TB' failed: Activation of logical volume RAID6-14TB/RAID6-14TB is prohibited while logical volume RAID6-14TB/RAID6-14TB_tmeta is active. (500)
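(The manual workaround from earlier in the thread should apply here as well; a sketch, assuming the VG/LV names from the error message:)
Code:
lvchange -an RAID6-14TB/RAID6-14TB_tmeta
lvchange -an RAID6-14TB/RAID6-14TB_tdata
lvchange -ay RAID6-14TB/RAID6-14TB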
 

Krypty

New Member
Jul 28, 2021
fiona said:
It's not an LVM bug, but should rather be considered a bug in Proxmox VE's (and likely Debian's) init configuration/handling. What (likely) happens is that the thin_check during activation takes too long and pvscan is killed (see here for more information).

Another workaround besides the one suggested by @Fidor should be setting
Code:
thin_check_options = [ "-q", "--skip-mappings" ]
in your /etc/lvm/lvm.conf and running update-initramfs -u afterwards.

EDIT2: Upstream bug report in Debian

EDIT: The workaround from @Fidor doesn't seem to work when the partial LVs are active:
Code:
Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active.
It would require deactivating XYZ_tmeta and XYZ_tdata first.

Just wanted to share that setting thin_check_options as suggested by Fabian_E worked for me. I have two identical 12TB disks, and for whatever reason one of them will not activate after a reboot of the Proxmox server. I used to have to do the lvchange -an/lvchange -ay method but thin_check_options made this a non-issue.

Hopefully there's no real downside to this.
 
