Upgrade to 1.6 and new Kernel - Server can't find volumes

So I tried the initramfs-tools package from the backports, but nothing changed.

But I checked dmesg and saw that the devices are swapped: sda is now sdb and sdb is now sda.

In /etc/fstab the boot partition is already referenced by UUID:
Code:
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=0ecb5fcb-74f1-41ac-9c6c-bfdeb49461d0 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
/dev/sdb5    /backup    ext3    defaults 0    1
proc /proc proc defaults 0 0
Under the 2.6.24 kernel, these are the UUIDs (from blkid):
Code:
/dev/sda1: UUID="0ecb5fcb-74f1-41ac-9c6c-bfdeb49461d0" TYPE="ext3" SEC_TYPE="ext2" 
/dev/sda2: UUID="ujeeBt-N4wj-6UKK-MHa0-rNc9-EOHL-4qoDd9" TYPE="lvm2pv" 
/dev/sdb5: UUID="44ba2f49-056b-497a-a8ec-0ff6f19a3fd1" TYPE="ext3" 
/dev/dm-0: TYPE="swap" UUID="1fb09446-5309-4d60-8b40-4e17f77279bd" 
/dev/dm-1: UUID="5f1d5256-f370-45bd-91c8-31ddcef654e9" TYPE="ext3" 
/dev/dm-2: UUID="4f89cf4a-8de3-4254-99b3-3908bed2caae" TYPE="ext3"
So what has to be done so that the right device is booted, or so that the LVM devices work?

With this UUID in /etc/fstab I can boot the 2.6.24 kernel without problems.

My next idea is to reference the partition on the second controller by UUID as well.

Could there be any problems with the GRUB configuration?
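Converting the /dev/sdb5 entry to a UUID reference, using the UUID shown in the blkid output above, would look something like this (a sketch; mount point and options taken from the existing fstab line):
Code:
# /etc/fstab -- reference the backup partition by UUID instead of /dev/sdb5,
# so it no longer matters which controller is probed first
UUID=44ba2f49-056b-497a-a8ec-0ff6f19a3fd1  /backup  ext3  defaults  0  1
This only decouples the mount from the sdX naming; it does not affect which device GRUB boots from.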
 
I think I know how to get this working.

By listing the RAID controller modules in /etc/modules in the right order, the devices should get the right device names.
 
I am having the same problem. I just upgraded from 1.5 to 1.6 and I get the "/dev/mapper/pve-root does not exist" error on boot. I ran the lvm command at the (initramfs) prompt and then ran lvscan; it couldn't find any logical volumes. Then I ran vgscan and it found my volume groups. After that I ran lvscan again, and it found my logical volumes. I still don't know how to get the system to boot from this point, though.

Does anyone have any advice to offer?

Julian

I've just upgraded a server from proxmox 1.5 to 1.6 and I'm having the same error.

Code:
device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
  Volume group "pve" not found
done.
Following your comments, at the initramfs prompt I did:

Code:
(initramfs) lvm
lvm> vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "pve" using metadata type lvm2
lvm> lvscan
  inactive        '/dev/pve/swap' [15.00 GB] inherit
  inactive        '/dev/pve/root' [34.00 GB] inherit
  inactive        '/dev/pve/data' [213.00 GB] inherit
lvm> lvchange -a y /dev/pve
lvm> lvscan
  ACTIVE          '/dev/pve/swap' [15.00 GB] inherit
  ACTIVE          '/dev/pve/root' [34.00 GB] inherit
  ACTIVE          '/dev/pve/data' [213.00 GB] inherit
lvm> quit
  Exiting.
(initramfs) CTRL+D
And the server continues to boot correctly, and everything works as it should.
However, if I reboot the server, I still get the error "Volume group "pve" not found".
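The manual activation steps above can be condensed into a single command at the initramfs prompt (assuming the busybox lvm wrapper in the initramfs accepts subcommands, as it does in the transcript above):
Code:
# Activate every logical volume in the "pve" volume group in one step,
# then resume the normal boot with Ctrl+D
(initramfs) lvm vgchange -ay pve
This is only a workaround for a single boot; it does not fix the underlying problem of the volume group not being found automatically.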
 
Put the modules to load, in the right order, into the /etc/initramfs-tools/modules file.

To find out which modules are used for the storage devices, check lsmod.

Then update the kernel ramdisk:

update-initramfs -u -k all

Now the devices should be ordered correctly.
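The steps above could be sketched as follows (the module names here are examples only; use whatever storage drivers lsmod actually shows on your system):
Code:
# 1. Find the storage-controller modules currently loaded
lsmod

# 2. List them in the desired probe order in /etc/initramfs-tools/modules, e.g.
#    (assuming the LSI controller should become sda and the Adaptec sdb):
#      megaraid_sas
#      aacraid

# 3. Rebuild the initramfs for all installed kernels
update-initramfs -u -k all
After a reboot, the controllers should be probed in the listed order, so the disks keep their old sdX names.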
 
Put the modules to load, in the right order, into the /etc/initramfs-tools/modules file.

I don't understand why this would help. It seems to me that the modules are already loaded, but the system isn't activating the logical volumes.

Is LVM running before the Intel SCSI controller module is loaded?
 
By defining which module is loaded first, I can tell the kernel which storage controller gets which device name.

For example:

lsi megaraid --> sda
adaptec --> sdb

The real problem is that the devices are renamed (sda becomes sdb, and so on).

With this trick I could boot without problems.
 
Below is my /etc/fstab file. As you can see, I am using disk labels to identify my drives, so it doesn't matter which drive is assigned sda or sdb.

Code:
proxmox:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
LABEL=/boot /boot ext3 defaults 0 1
LABEL=vmware /var/lib/vz/old_vmware xfs defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
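For anyone wanting to adopt this approach: filesystem labels like the ones referenced above can be set with the standard tools (a sketch; the device names and label strings are examples, substitute your own):
Code:
# Set a label on an ext3 filesystem (works even while mounted)
e2label /dev/sda1 /boot
# equivalently: tune2fs -L /boot /dev/sda1

# XFS filesystems use xfs_admin instead (filesystem must be unmounted)
xfs_admin -L vmware /dev/sdb1
Once labeled, the LABEL= entries in fstab resolve regardless of how the kernel orders the controllers.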
 
This looks like a race condition. I am just guessing, but I changed some kernel settings:
Code:
 CONFIG_BLK_DEV_DAC960=m
 CONFIG_BLK_DEV_UMEM=m
 # CONFIG_BLK_DEV_COW_COMMON is not set
-CONFIG_BLK_DEV_LOOP=m
+CONFIG_BLK_DEV_LOOP=y
 # CONFIG_BLK_DEV_CRYPTOLOOP is not set
 CONFIG_BLK_DEV_DRBD=m
-CONFIG_BLK_DEV_SD=m
+CONFIG_BLK_DEV_SD=y
-CONFIG_BLK_DEV_SR=m
+CONFIG_BLK_DEV_SR=y
 CONFIG_BLK_DEV_SR_VENDOR=y
-CONFIG_BLK_DEV_DM=m
+CONFIG_BLK_DEV_DM=y

It would be great if someone could test whether that helps:

# wget ftp://download.proxmox.com/debian/d...4/pve-kernel-2.6.32-3-pve_2.6.32-14_amd64.deb
# dpkg -i pve-kernel-2.6.32-3-pve_2.6.32-14_amd64.deb
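After booting the test kernel, one way to confirm the changed options took effect is to check the installed kernel config (a sketch; assumes the config file is shipped under /boot, as Debian kernels normally do):
Code:
# Built-in drivers show "=y" instead of "=m" after the change
grep -E 'CONFIG_BLK_DEV_(SD|SR|DM|LOOP)=' /boot/config-$(uname -r)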
 
The new kernel is working for me as well. When will this update make it into the repository, or has it already?

Thanks for the great support!
 