I don't think I mentioned this in my first post, as it had only happened once at that point, but it happens every time now. When I run the "lvchange -ay pve/data" command I get "Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active" and/or "Activation of...
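For what it's worth, the manual sequence that should clear this class of error looks roughly like the following (just a sketch, assuming the standard pve/data thin pool with hidden pve/data_tmeta and pve/data_tdata sub-volumes; check lvs -a first to confirm what is actually active):

lvchange -an pve/data_tmeta pve/data_tdata   # deactivate the stranded sub-volumes first
lvchange -ay pve/data                        # then the pool itself activates cleanly
vgchange -ay pve                             # and finally the remaining LVs in the VG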
I won't be able to do the debugging on the affected production servers until next week; however, I did run the debug on one of our test servers, which is running PVE 7 and kernel 5.11.22-5 but does not have the LVM issue. I compared the debug file from our working test server to ShEV's debug...
Yes, both servers are showing the same symptoms. One is an HP ProLiant Gen 9 Xeon server and the other is an HP Z620 Workstation. They have both been nuked and paved, so yes, the thin pools were re-created too. I rebooted the HP Z620 Workstation today so I could run the requested lvs -a and...
Could this be an issue between HP hardware and Proxmox 7? These servers worked fine with Proxmox 6, though. All of our test PVEs are on various Dell hardware using the no-subscription package repository, and we haven't had an issue with them. That was the reason I went ahead and upgraded two of our 5...
Do you want the output of lvs -a and lvdisplay pve/data -vvv while the error is occurring, or after I activate the logical volumes manually? Manually activating the logical volumes has been the only way I have been able to get the VMs on this PVE to start. This is a production server, so I...
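When I capture them I'll redirect each to a file, along these lines (file names are just placeholders; note that the -vvv debug output goes to stderr, so it needs 2>&1 to end up in the file):

lvs -a > lvs-output.txt
lvdisplay pve/data -vvv > lvdisplay-debug.txt 2>&1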
Hi Fabian
I have attached the syslog and lvm.conf. The only thing that I noted in the syslog was the following line:
Oct 12 06:50:27 pve lvm[825]: pvscan[825] VG pve skip autoactivation.
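As far as I understand, autoactivation is governed by a couple of lvm.conf settings (global/event_activation and activation/auto_activation_volume_list, among others); a rough sketch for checking what is in effect:

grep -nE 'event_activation|auto_activation_volume_list' /etc/lvm/lvm.conf
lvmconfig global/event_activation   # prints the effective value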
Here is the output of pveversion -v
pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)...
Updated our Proxmox PVE 7 server this morning, and upon reboot the local-lvm storage was not available and VMs would not start. Below are the updates applied:
libpve-common-perl: 7.0-6 ==> 7.0-9
pve-container: 4.0-9 ==> 4.0-10
pve-kernel-helper: 7.0-7 ==> 7.1-2
qemu-server: 7.0-13 ==> 7.0-14...
We were getting this on snapshots we were taking with a script. We had to add a sleep between each qm command to resolve the issue. With backups it doesn't happen often, but it was suggested we switch from the stop backup type to the snapshot backup type.
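The change was roughly the shape below (VM IDs, snapshot name, and the 10-second pause are placeholders, not our exact script; tune the sleep to whatever lets LVM settle on your hardware):

for vmid in 101 102 103; do
    qm snapshot "$vmid" "auto$(date +%Y%m%d)"   # take the snapshot
    sleep 10                                    # pause so LVM can finish before the next qm call
done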
Cloning a VM gets to 100%, then I get the following error:
TASK ERROR: clone failed: command '/sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-201017-disk-0' failed: got timeout
PVE version is 6.2.12
Anyone have a resolution?
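In case it helps anyone reproduce it, the command from the error can be run by hand with timing to see whether lvs itself is the slow part (the path is the one from my error above):

time /sbin/lvs --separator : --noheadings --units b --unbuffered --nosuffix --options lv_size /dev/pve/vm-201017-disk-0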