Help: PVE 2.6.24-10-pve stopped booting after restart from web interface

webcoaster

New Member
Apr 21, 2009
Hi Everyone,

I've been reading through the forums for the past 3 hours and can't seem to locate a solution to our boot failure issue.

I'm running PVE kernel 2.6.24-10-pve. I can see that the two logical volumes in volume group pve are activated during the boot process, but "Waiting for root file system" takes forever and then the boot fails to mount the root filesystem.

How can I correct this based on the following boot messages? Can I recover or repair the disk somehow?

-Cheers

During the final portion of the boot I get the following on the screen.

Waiting for root file system ... done (this takes 90+ seconds to show on the screen)
Running /scripts/local-premount ... done
mounting /dev/mapper/pve-root on root failed: No such device
Running /scripts/local-bottom .. done
done.
Running /scripts/init-bottom ... mount:mounting /dev on /root/dev failed: No such file or directory
mounting /sys on /root/sys failed: No such file or directory
mounting /proc on root/proc failed: No such file or directory
Target filesystem doesn't have /sbin/init
No init found. Try passing init=bootarg.

Then BusyBox v1.10.2 comes up, and I see
/bin/sh: can't access tty; job control turned off
and the (initramfs) prompt is now showing on the screen.
 
Hi,
do you have a standard installation? /boot is found, but the pve VG (or rather the pve-root LV) is not. Normally they are on the same disk (sda1: /boot, sda2: pve VG).

You can try booting a live distro like grml (good for maintenance). After booting you can look at the disks, the LVM setup, and so on.
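
For example, once grml is up (a rough sketch only - device names may differ on your box):

# list disks and partitions as the live system sees them
fdisk -l
# scan for LVM physical volumes and volume groups
pvscan
vgscan
# activate the pve volume group so its logical volumes appear under /dev/mapper
vgchange -ay pve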

BTW, the 2.6.24 kernel is not really supported these days - once your host is up and running, you should think about an upgrade.
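
One more idea: before reaching for the live CD, you could also try to activate LVM directly from that (initramfs) prompt - a minimal sketch, assuming the lvm binary is included in your initramfs:

# does the kernel see the disk at all?
cat /proc/partitions
# scan for volume groups and activate them
lvm vgscan
lvm vgchange -ay
# if pve-root shows up now, mount it and let the boot continue
mount /dev/mapper/pve-root /root
exit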

Udo
 
Hi,

Thanks for your reply. Yes, it's a standard installation, and the kernel has been upgraded several times. In GRUB there are 2.6.24-10-pve, 2.6.24-7-pve, 2.6.24-5-pve, 2.6.24-2-pve and memtest86+, but not the latest yet.

Once I recover what I need, I will upgrade to the latest stable version.

I'm downloading grml 64 now, and then I'll review the disk on the troubled machine.

I also downloaded proxmox-1.4 and booted it in debug mode, then aborted the installation to try to mount the drives. When I mounted sda1 it showed grub and the kernels above, plus a 2.6.32-1 kernel that GRUB was not showing during a normal boot.

I did not notice that before; I only saw it while writing this reply. Do you think I could edit GRUB and try to load 2.6.32-1, and if so, any instructions on how to do this?

Also, in closing: when I tried to mount /dev/sda2 it said the filesystem type was lvm2pv, so I assume that's what we were looking for. Maybe the drive is fine and we have a kernel issue, since GRUB is not showing the 2.6.32-1 option during a standard reboot.

Any thoughts?
 
Hi,
GRUB only shows the kernels that are listed in /boot/grub/menu.lst - not everything that is in /boot.
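If the 2.6.32 kernel and its initrd really are in /boot, you can add an entry for it yourself. A sketch only - the exact file names are assumptions, so check with ls /boot first:

title Proxmox VE, kernel 2.6.32-1-pve
root (hd0,0)
kernel /vmlinuz-2.6.32-1-pve root=/dev/mapper/pve-root ro
initrd /initrd.img-2.6.32-1-pve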
That you tried to mount sda2 shows that you are not familiar with LVM. LVM isn't a mountable filesystem - it provides space for logical volumes, which can each hold a filesystem. A logical volume is a little bit like a disk partition.
If you boot grml, use "Start-lvm" (or something similar - it's shown in the help text during boot).
After that, take a look with the following commands:
pvdisplay
vgdisplay
lvdisplay

If you see /dev/pve/root, you can mount it (and perhaps run a fsck first).
You can also mount /dev/pve/data (which on PVE is mounted on /var/lib/vz).
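
Roughly like this (a sketch - the mount points are just examples):

# check and repair the root LV while it is still unmounted
fsck -f /dev/pve/root
# then mount both logical volumes
mkdir -p /mnt/pve-root /mnt/pve-data
mount /dev/pve/root /mnt/pve-root
mount /dev/pve/data /mnt/pve-data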

If you can mount both, first copy all of your VMs (including the configs) to a backup disk - then you can later install a fresh PVE (1.9) if you aren't able to repair your installation.
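
For example (a sketch that assumes the mount points above and a backup disk mounted on /mnt/backup; /etc/qemu-server is the usual config location on PVE 1.x, but verify it on your install):

# VM disk images live on the data LV (normally mounted as /var/lib/vz)
cp -a /mnt/pve-data/images /mnt/backup/
# the KVM configs live on the root LV (assumed path - check that it exists)
cp -a /mnt/pve-root/etc/qemu-server /mnt/backup/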

Udo
 
Solved,

Booted the system with grml and then ran

Start lvm2

then ran fsck on the root volume, which needed repair. When lvm2 starts, it tells you the device name.

The root volume was the cause, and fsck repaired it.
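
For anyone who hits the same thing, the fix boiled down to roughly this (the device name is the one lvm2 reported on this box):

# after "Start lvm2" the volume group is active, then:
fsck -y /dev/mapper/pve-root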

Thanks for your help, Udo. I will now start the upgrade process.

-Cheers