PVE not found ...

PaulVM

Renowned Member
May 24, 2011
Trying to boot, I get:

Code:
   Volume group "pve" not found
  Cannot process volume group pve
Unable to find LVM volume pve/root
Gave up waiting for root device. Common problems:
 - Boot args (cat /proc/cmdline)
   - Check rootdelay= (did the system wait long enough?)
   - Check root= (did the system wait for the right device?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/pve-root does not exist. Dropping to a shell!
modprobe: module ehci-orion not found in modules.dep

BusyBox v1.22.1 (Debian 1:1.22.0-9+deb8u1) built-in shell (ash)
Enter 'help' for a list of built-in commands.
/bin/sh: can't access tty; job control turned off
(initramfs)


Proxmox 4.x
Old HP ProLiant ML350 G6
HP Smart Array P410i Controller
2 SATA HDD 2 TB, RAID1
The system was slow; moving the disks to another PC, I could see that one of them had many errors and was probably failing.
I reinserted the good disk into the server, added a new one, and started the RAID 1 rebuild. No problems.
Proxmox booted as usual.
After 5 days the rebuild was still in progress, so I moved the disks to the previously used PC to check whether they were still good and what the situation was.
I found the old good disk in the same state, and the new one only partially rebuilt.

After restoring the disks (same place/order/slot) into the server, if I try to boot the Proxmox installation I get the error above.
Using a rescue disk I can access the volumes if I connect the disk to the PC, but I can't see them on the server.
How can I restore the volumes without reinstalling?

Thanks, P.
 
Update: if I connect the disk to a standard PC, Proxmox boots without problems (except for the NIC configuration ...).
Attached to the controller, I always get the usual result:

[attached photo: P_20170810_160731a.jpg]
 
Try adding a root delay of 30 seconds to the kernel command line (e.g., by editing the GRUB entry at boot and appending 'rootdelay=30' to the line starting with 'linux', then booting).
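The one-time edit would look roughly like this (a sketch; the kernel version and root device are illustrative and depend on the installation):

Code:
# At the GRUB menu, highlight the Proxmox entry and press 'e', then append
# rootdelay=30 to the line starting with 'linux', so that it reads roughly:
linux /boot/vmlinuz-4.4.6-1-pve root=/dev/mapper/pve-root ro quiet rootdelay=30
# Press Ctrl-x or F10 to boot once with the modified entry.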
 
Already tried. I added 'rootdelay=30', and also tried 5, 10, 20, and 100. Same result :-(
If I connect the old defective disk (originally I had 2 disks in RAID-1, one of which shows errors), the system boots normally.
If I connect the old "good" disk to a PC, it boots normally.
The same goes for the cloned disks that I made for testing and safety.
The only configuration that does not work is the "good" disk in the server :-(
Which, obviously, is exactly the one I need.

I was thinking of taking some LVM config info from the "old defective disk" and applying it to the "good disk".
But comparing the output of vgcfgbackup on the server booted from the "old defective disk" with that on the PC booted from the "good disk", I don't see any difference (excluding date and name).
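For the record, the comparison was along these lines (a sketch; the file names are illustrative, and the dumps were copied to one machine before diffing):

Code:
# Dump the current metadata of VG "pve" on each machine
vgcfgbackup -f /tmp/pve-defective.cfg pve   # server booted from the defective disk
vgcfgbackup -f /tmp/pve-good.cfg pve        # PC booted from the good disk
# Compare the two dumps (the headers differ only in date/host)
diff /tmp/pve-defective.cfg /tmp/pve-good.cfg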

Any hints appreciated.

Thanks, P.

P.S.: I also considered cloning the "old defective disk", but I don't know whether it would survive that, and/or whether attaching the clone to another PC/controller would give me the same problem.
 
What do lsblk, pvs, vgs, and lvs report in the initramfs shell?
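(Note: in the BusyBox initramfs the LVM tools are usually only reachable through the 'lvm' wrapper, so the commands would look roughly like this:)

Code:
(initramfs) cat /proc/partitions    # lsblk is often not included in BusyBox
(initramfs) lvm pvs
(initramfs) lvm vgs
(initramfs) lvm lvs
(initramfs) lvm vgchange -ay pve    # try to activate the VG by hand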
 
What do lsblk, pvs, vgs, and lvs report in the initramfs shell?

Nothing :-(
[attached photo: P_20170811_145127a.jpg]

Booting with a rescue CD, I get:

Code:
root@sysresccd /root % lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0   1.8T  0 disk
└─sda1     8:1    0   1.8T  0 part
sdb        8:16   0   1.8T  0 disk
sdc        8:32   0 136.7G  0 disk
└─sdc1     8:33   0 136.7G  0 part
sdd        8:48   1   3.8G  0 disk
└─sdd1     8:49   1   3.8G  0 part /mnt/windows
sr0       11:0    1 448.8M  0 rom  /livemnt/boot
loop0      7:0    0 328.1M  1 loop /livemnt/squashfs


root@sysresccd /root % lsblk -o NAME,UUID
NAME     UUID
sda
└─sda1   9007a868-279e-40b4-a671-441375b2d777
sdb
sdc
└─sdc1   1969fca8-7887-46e7-b334-f12558046106
sdd
└─sdd1   48CD-996E
sr0
loop0


root@sysresccd /root % pvs
 No device found for PV J8VD2F-pi74-MxH3-nnet-L8oE-iWVk-UPk7OI.
root@sysresccd /root % lvs
 No device found for PV J8VD2F-pi74-MxH3-nnet-L8oE-iWVk-UPk7OI.
 No volume groups found
root@sysresccd /root % vgs
 No device found for PV J8VD2F-pi74-MxH3-nnet-L8oE-iWVk-UPk7OI.
 No volume groups found



root@sysresccd /root % fdisk -l
Disk /dev/loop0: 328.1 MiB, 344010752 bytes, 671896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xeaab10e2

Device     Boot Start        End    Sectors  Size Id Type
/dev/sda1        2048 3907029167 3907027120  1.8T 83 Linux


The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdb: 1.8 TiB, 2000365379584 bytes, 3906963632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F6B2E5DA-80AF-42D5-B865-732AD03DA486

Device      Start        End    Sectors  Size Type
/dev/sdb1      34       2047       2014 1007K BIOS boot
/dev/sdb2    2048     262143     260096  127M EFI System
/dev/sdb3  262144 3906963598 3906701455  1.8T Linux LVM


Disk /dev/sdc: 136.7 GiB, 146778685440 bytes, 286677120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb28d3c5b

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdc1          63 286663859 286663797 136.7G 83 Linux

Obviously the disk of interest is /dev/sdb.
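From the rescue shell, I suppose checks along these lines could show why LVM rejects the PV on /dev/sdb3 (a sketch, assuming the rescue CD ships the standard LVM2 tools):

Code:
pvck /dev/sdb3        # verify the LVM label and metadata area on the partition
pvdisplay /dev/sdb3   # show PV details, if the label is readable
pvscan -vv            # rescan all devices with verbose output
# Inspect the raw text metadata kept near the start of the PV
dd if=/dev/sdb3 bs=512 count=2048 2>/dev/null | strings | less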

Thanks, P.


P.S.: This is the original LVM config:
Code:
# Generated by LVM2 version 2.02.116(2) (2015-01-30): Tue Jun 20 15:37:20 2017

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgcfgbackup'"

creation_host = "lab-vm"    # Linux lab-vm 4.2.6-1-pve #1 SMP Wed Dec 9 10:49:55 CET 2015 x86_64
creation_time = 1497965840    # Tue Jun 20 15:37:20 2017

pve {
    id = "O6cZv2-50TF-sn1g-emxX-vfFc-Z0s7-r2oGmA"
    seqno = 4
    format = "lvm2"            # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192        # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0

    physical_volumes {

        pv0 {
            id = "pWLcNd-CGko-k87N-js9H-cK2V-762S-fu1khi"
            device = "/dev/sdb3"    # Hint only

            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 3906701455    # 1.8192 Terabytes
            pe_start = 2048
            pe_count = 476892    # 1.8192 Terabytes
        }
    }

    logical_volumes {

        swap {
            id = "jnOz8R-ZDkn-duwG-Emln-jBr2-z942-7W0Vpj"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "proxmox"
            creation_time = 1455819782    # 2016-02-18 19:23:02 +0100
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 1024    # 4 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 0
                ]
            }
        }

        root {
            id = "euklLT-nvAw-Ummq-PQcH-bxeQ-rJKn-XVF3pn"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "proxmox"
            creation_time = 1455819782    # 2016-02-18 19:23:02 +0100
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 25344    # 99 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 1024
                ]
            }
        }

        data {
            id = "uIArm0-gPdR-cNs3-FWLy-Mu0y-p8Fz-BebaZH"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "proxmox"
            creation_time = 1455819782    # 2016-02-18 19:23:02 +0100
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 446429    # 1.70299 Terabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 26368
                ]
            }
        }
    }
}
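If the PV label itself turned out to be damaged, my understanding is that a metadata backup like the one above can be written back with pvcreate/vgcfgrestore. This is only a sketch (untested here): it rewrites metadata, so it should be tried on a clone first; the backup path is illustrative, and the UUID is pv0's id from the config above.

Code:
# Recreate the PV label using the UUID recorded in the backup (pv0 above)
pvcreate --restorefile /etc/lvm/backup/pve \
         --uuid pWLcNd-CGko-k87N-js9H-cK2V-762S-fu1khi /dev/sdb3
# Restore the VG metadata from the same backup, then activate the VG
vgcfgrestore -f /etc/lvm/backup/pve pve
vgchange -ay pve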
 
I am going to fix my problem by changing the topology of the disks, but I am still interested in hints that would allow me to understand why the RAID controller blocks the boot and the PVE management.
And, obviously, how to eventually fix this in a clean manner.

Regards, P.
 
Does the P410i controller have a battery you can replace? Or is it integrated? Also, do you have 2 identical spare drives to test a fresh install of Proxmox in the same RAID 1, to see whether it boots or shows the same issues? If it has issues with 2 spare disks in the same config after a fresh install, it could be the controller or the motherboard. Do you have any PCIe SATA controller cards you could try adding to the G6?
 
The P410i controller is in its basic configuration with no cache battery. There is a slot to add the cache, but this wasn't the problem.
Not having the cache only limits you to a maximum of 2 defined arrays, and for about 5 years it had worked fine, first with Proxmox 2.x and then with Proxmox 4.x, using a couple of RAID-1 arrays (2+2 disks, system and internal backup).
As I already wrote, if I attach one of the boot disks (or a clone) to another PC or to the standard controller integrated on the server's motherboard, it boots as expected.
Now I have simply moved the boot disk to the motherboard's integrated controller, added a couple of new disks to the P410i controller, and the server is working without problems.
N.B.: the other RAID-1 array (the one used for internal backup) has always worked normally.
The newly created array is fine.
The only problem was that when the disk is accessed through the controller, the LVM volumes aren't recognized.
Probably some information gets lost that, without direct access to the disk, prevents the boot process from completing.
I am not an LVM expert, but I suppose there are specific commands to check/repair the volume metadata (which is what I think the problem is); see the sketch below.
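From what I could find, these would be the standard LVM check commands to start from (again just a sketch; the read-only checks come first, and lvmdump only collects diagnostics):

Code:
pvck /dev/sdb3     # verify the PV label and metadata area (read-only)
vgck pve           # check VG metadata consistency (read-only)
vgscan --mknodes   # rescan devices and recreate missing /dev nodes
lvmdump            # collect LVM diagnostic info into a tarball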

Regards, P.
 
