Proxmox 4 on Jessie - LVM Problem

tomte76

Hi,

I have a strange problem with my Proxmox 4 installation. First I tried to upgrade my Proxmox 3 server to Proxmox 4 following the instructions on the wiki. Everything worked fine until the reboot with the Proxmox kernel; afterwards the system didn't come up again. It gets stuck in the initramfs sequence while activating the LVs from my volume group. Curiously, it can activate the LV for / and also the LV for swap, but then it gets stuck on the usr LV. If I start lvm in the initramfs busybox and do a vgchange -ay, all LVs are enabled, and if I leave the shell the system boots up and works fine.

After messing around for a while I decided to reinstall. So I did a clean Debian Jessie installation with MD RAID5 and LVM on top. I had no issues during installation, and the stock Debian Jessie booted fine afterwards. Then I installed Proxmox 4 on top, following the instructions on the wiki, with exactly the same result: the system won't boot with the Proxmox kernel and gets stuck after the LV for swap. Everything worked nicely with Proxmox 3, and it also seems to work fine after manually activating the LVs - I can start, create, clone and remove VMs.
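
For reference, the manual workaround at the (initramfs) prompt looks roughly like this (a sketch of what I described above):

Code:
(initramfs) lvm
lvm> vgchange -ay
lvm> exit
(initramfs) exit    # boot then continues normally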

Here is a screenshot of where it gets stuck. Any ideas what's going wrong? Thank you.

IMG_1479.jpg
 
Hi Tom

Can you please put the following content in /etc/initramfs-tools/scripts/local-premount/lvm:

Code:
#!/bin/sh
echo "Proxmox VE: force detection of logical volumes"
/sbin/vgchange -ay

Then make the file executable with chmod +x /etc/initramfs-tools/scripts/local-premount/lvm,
call update-initramfs -u, and reboot.
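
For reference, the same steps as a single root shell session, plus a quick check that the script actually ended up in the new initrd (a sketch; the initrd filename depends on the running kernel):

Code:
cat > /etc/initramfs-tools/scripts/local-premount/lvm <<'EOF'
#!/bin/sh
echo "Proxmox VE: force detection of logical volumes"
/sbin/vgchange -ay
EOF
chmod +x /etc/initramfs-tools/scripts/local-premount/lvm
update-initramfs -u
# confirm the script is inside the new initrd before rebooting
lsinitramfs /boot/initrd.img-$(uname -r) | grep local-premount/lvm
reboot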
 
Hello, we have the same issue here. Luckily I found this thread before rebooting, so the script prevented the problem.

The system was upgraded from PVE 3 to PVE 4.

The following is just an annoyance; the system still works.

When update-initramfs -u is run:
Code:
update-initramfs -u 
update-initramfs: Generating /boot/initrd.img-4.2.3-2-pve
  Found duplicate PV koDfqbKssjuOdPQ64opbxIuX6SHfzXPq: using /dev/zd144p5 not /dev/zd320p5
  device-mapper: create ioctl on turnkey-root failed: Device or resource busy
  device-mapper: create ioctl on turnkey-swap_1 failed: Device or resource busy

The turnkey LVM cannot be deleted - at least I could not figure out how to do it. There is no /dev/turnkey like with a normal LVM [ /dev/pve ], so lvremove won't work.

Any suggestions for removing the turnkey LVM?

best regards, Rob Fantini
 
Hi Rob,
It looks like the host sees the LVM inside a VM (which was cloned, so you have two PVs with the same UUID).
To prevent this, use the filter inside lvm.conf.
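
For example, something along these lines in /etc/lvm/lvm.conf should hide the guest zvols from the host's LVM scan (a sketch; the zd* pattern is an assumption based on the /dev/zd144p5 and /dev/zd320p5 devices in your output):

Code:
devices {
    # reject ZFS zvols (VM disks) so the host no longer scans guest LVM
    global_filter = [ "r|/dev/zd.*|" ]
}

Afterwards, rebuild the initrd with update-initramfs -u so the filter is applied there as well.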

Udo
 
Hello Udo,

I had just one unused LXC container on the system; I deleted it and still get this:
Code:
sys7  /var/lib/vz # update-initramfs -u 
update-initramfs: Generating /boot/initrd.img-4.2.3-2-pve
  Volume group "turnkey" not found
  Cannot process volume group turnkey
  Volume group "turnkey" not found
  Cannot process volume group turnkey
  device-mapper: create ioctl on turnkey-root failed: Device or resource busy
  device-mapper: create ioctl on turnkey-swap_1 failed: Device or resource busy
sys7  /var/lib/vz # pct list
sys7  /var/lib/vz # qm list
sys7  /var/lib/vz # date
Wed Nov 18 14:07:05 EST 2015

I have concluded it is not worth the trouble to upgrade from PVE 3 to PVE 4. Other things are off as well; for instance, 'ifdown eth2' does not work when it should.
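
For what it's worth, since the turnkey VG itself is no longer found, the leftover turnkey-root / turnkey-swap_1 entries are just stale device-mapper mappings; something like this should clear them (a sketch using the names from the errors above; dmsetup remove fails if a mapping is still in use):

Code:
dmsetup ls | grep turnkey      # list the stale mappings
dmsetup remove turnkey-root
dmsetup remove turnkey-swap_1
update-initramfs -u            # re-run to check the ioctl errors are gone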
 
@Baptiste M : on further research, the proper way to fix this is to add a boot parameter to the kernel boot options:
rootdelay=10

This will make the system wait longer for the root device to become available.
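
On a stock Debian/Proxmox install that means editing the kernel command line in /etc/default/grub and regenerating the GRUB config, roughly like this (a sketch; keep whatever other options are already on the line):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

Then run update-grub and reboot; cat /proc/cmdline afterwards shows whether the parameter was picked up.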
 
Hello, good day.

The same thing happens to me. Every time I restart Proxmox, lvdisplay shows that the VG is not activated, and consequently no virtual machine with its hard disks on that storage works. It is set up over iSCSI against a FreeNAS with two disks in RAID1.

If I run the command vgchange -a y raid1vgpve manually, everything works perfectly.

I have also tried what you describe in this post, putting the script in the lvm file, but it does not work.

Do you know any other way?

Thanks for everything.
Regards.

I've attached the output of the lvdisplay command:

Code:
root@pve1:~# lvdisplay

--- Logical volume ---
LV Name data
VG Name raid1vgpve
LV UUID ckL724-MVMf-YL52-6RGx-cH1e-JLlW-3zhG3l
LV Write Access read/write
LV Creation host, time pve1, 2017-02-18 20:10:27 +0100
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status NOT available
LV Size 879.00 GiB
Current LE 225024
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Path /dev/raid1vgpve/vm-100-disk-1
LV Name vm-100-disk-1
VG Name raid1vgpve
LV UUID zcV2tS-pffS-ueS4-w2q9-WBcz-bSuy-6QH5EQ
LV Write Access read/write
LV Creation host, time pve1, 2017-02-19 16:34:06 +0100
LV Pool name data
LV Status NOT available
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Path /dev/raid1vgpve/vm-102-disk-1
LV Name vm-102-disk-1
VG Name raid1vgpve
LV UUID SQowA9-4MTg-yM5o-nmr6-Adwp-Zbrp-hOz1Ae
LV Write Access read/write
LV Creation host, time pve1, 2017-02-19 16:57:11 +0100
LV Pool name data
LV Status NOT available
LV Size 33.00 GiB
Current LE 8448
Segments 1
Allocation inherit
Read ahead sectors auto
 
Please try adding rootdelay=60 to your /etc/default/grub on the Proxmox host, and call update-grub afterwards.
The procedure is described in http://blog.wittchen.biz.pl/ubuntu-system-boot-problem/ (#attempt1).

> It is set up over iSCSI against a FreeNAS with two disks in RAID1.
Do you mean the rootfs of the Proxmox host is on iSCSI? Then I assume you have a BIOS which supports booting the OS over iSCSI?
 

Hello Manu.
Thank you for replying so quickly.
I'm sorry to say this has not worked either (#attempt1).
I think I expressed myself badly.

Proxmox is installed directly on a hard disk in the server.
Then I set up FreeNAS with two 1 TB hard drives in RAID1.
After creating the RAID1, I created a zvol in FreeNAS.
This allowed me to add the zvol to the iSCSI configuration in FreeNAS.
I assigned it a LUN.
Then in Proxmox, through the GUI, I created an iSCSI storage in the node's storage menu.
When I entered the FreeNAS IP, it correctly saw the LUN.
Then I assigned an LVM to the LUN, and so far everything was perfect, but to take snapshots on this volume I needed LVM-Thin.
That step I had to do through the CLI, following these steps (roughly as sketched below): https://pve.proxmox.com/wiki/LVM2#LVM-Thin
Once that was done, I was able to create the LVM-Thin storage in the Proxmox GUI.
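
For reference, the thin-pool step from that wiki page boils down to something like this (a sketch; the VG name, pool name and size are taken from the lvdisplay output in my earlier post, so adjust them to your setup):

Code:
# create an 879G LV named "data" in the raid1vgpve VG and turn it into a thin pool
lvcreate -L 879G -n data raid1vgpve
lvconvert --type thin-pool raid1vgpve/data

Proxmox then creates the vm-XXX-disk-X thin volumes inside that pool itself when you add disks to guests.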

Once I saw this storage in the GUI I started configuring containers, placing their storage on this LVM-Thin. It worked perfectly until I rebooted Proxmox; after that, all logical volumes in the VG raid1vgpve show LV Status NOT available, preventing the containers from starting because they cannot find their storage.

But if I manually run the command vgchange -a y, the containers start correctly.

Even after changing the settings you recommended, the VG is still not activated automatically after restarting the node.

I hope I have been a little clearer; please forgive the translation.
Regards
 
