PVE on Dell 1855 blade

bshowalter
Renowned Member · Oct 14, 2008
I've been trying to install PVE 0.9beta2 on a Dell 1855 blade. The install appears to go smoothly; however, after rebooting into the new install I get a series of messages:

Code:
mount: mounting /dev/pve/root on /mnt failed: No such file or directory
mount: mounting /dev/pve/root on /mnt failed: No such file or directory
mount: mounting /dev/pve/root on /mnt failed: No such file or directory
Testing again in 5 seconds
After about four of these stanzas, the final message is:

Code:
switch_root: bad newroot /mnt
Kernel panic - not syncing: Attempted to kill init!
Any ideas what's going on here?
 
What disk system do you use? Does the blade have internal disks?
 
What disk system do you use? Does the blade have internal disks?
Here's the lspci output in regard to the disk controller:

04:04.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 08)

Yes, there are two 73GB disks in the blades, in a hardware RAID1 configuration.
 
Here's the lspci output in regard to the disk controller:

04:04.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 08)

Yes, there are two 73GB disks in the blades, in a hardware RAID1 configuration.

can you test the new version - Proxmox VE 1.0 - available next week?
 
The problem still exists with v1.0.

We're using an Asus M2N-E SLI with an ICP GDT8546 RAID controller. 2GB of RAM works fine; 4GB doesn't.
 
We're using an Asus M2N-E SLI with an ICP GDT8546 RAID controller. 2GB of RAM works fine; 4GB doesn't.

Just to understand your issue: if you remove RAM everything works, but if you plug in 4 GB you get problems?

I've never heard of such issues. Is there a BIOS update available for your board?

We tested 1.0 on systems from 2 GB up to 8 GB without any issues like this.
 
1)
Exactly. With 4GB the system stops booting with the same strange behavior posted by bshowalter. With 2GB the system runs smoothly.

2)
No newer BIOS available.

3)
Nevertheless, the Proxmox team did a great job.
 
Does it try to load the mptbase and mptscsih modules? Any error messages displayed during boot?

- Dietmar
 
Ok, it should load the driver 'mptspi', which then loads 'mptbase' and 'mptscsih'. Any info about that on the boot console?
 
It's a little bit difficult to analyse logs with a kernel panic. Meanwhile, we'll check different mainboards and controllers.
 
You can try to edit the grub boot parameters at the boot prompt. Simply adding 'proxdebug' gives you a console during startup, before the root filesystem is mounted.

Code:
kernel /boot/vmlinuz-2.6.24-1-pve root=/dev/pve/root ro proxdebug
 
Mounting root problems

I have had the same problem on a Sun X2200 M2 with an Adaptec RAID controller (AACRAID). The problem is in the init script in the initrd image: it does not wait long enough for the SCSI devices before trying to map the LVM volumes. I have fixed it with quite a simple fix; see the attachment for a fixed initrd (I have also fixed some typos there). Feel free to use this version. You can unpack initrd.img with gzip and cpio, and pack it back with the following script (assuming you unpacked it into an 'initrd' directory):
Code:
#!/bin/sh

BASE=initrd

(cd "$BASE"; find . | cpio -o -H newc | gzip -9) > initrd.img
And while we're at it, to the people at Proxmox: you have a great product and your work is really appreciated. But...
Compiling the kernel and regenerating the initrd is really hard. I understand that you are trying to discourage people from running modified kernels, but you are overdoing it. There is a ton of dependencies for the compilation, and they are not listed anywhere. I hunted for them for some three hours and finally gave up. Just try to build the package on a fresh installation of PVE and you will see what I am talking about.
And by the way, I initially got around the root mounting problem by using the Ubuntu OpenVZ kernel and initrd. It worked quite well.
 

Attachments

  • init.zip
    1.8 KB · Views: 15
I have had the same problem on a Sun X2200 M2 with an Adaptec RAID controller (AACRAID). The problem is in the init script in the initrd image: it does not wait long enough for the SCSI devices before trying to map the LVM volumes. I have fixed it with quite a simple fix; see the attachment for a fixed initrd (I have also fixed some typos there).

I guess "modprobe scsi_wait_scan" is the better fix. I will add that to the next kernel release.
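The retry-with-delay pattern behind the "Testing again in 5 seconds" messages can be sketched as a small POSIX shell helper. The 'retry' function name and the counts here are illustrative; in the real initrd it would run after 'modprobe scsi_wait_scan' and wrap the mount of /dev/pve/root:

```shell
#!/bin/sh
# Illustrative retry helper: run a command up to a given number of times,
# sleeping between failed attempts.
retry() {
    tries=$1
    delay=$2
    shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" && return 0
        echo "Testing again in $delay seconds"
        sleep "$delay"
        i=$((i + 1))
    done
    return 1
}

# In the initrd this would look roughly like (not runnable outside boot):
#   modprobe scsi_wait_scan          # block until SCSI bus scans finish
#   retry 4 5 mount /dev/pve/root /mnt
```

The point of 'scsi_wait_scan' is that loading it blocks until all outstanding SCSI device scans complete, so the retry loop rarely has to wait at all.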

To the people at Proxmox: you have a great product and your work is really appreciated. But...
Compiling the kernel and regenerating the initrd is really hard. I understand that you are trying to discourage people from running modified kernels, but you are overdoing it. There is a ton of dependencies for the compilation, and they are not listed.

We have the same dependencies as any other kernel source package - or am I missing something?

- Dietmar
 
I guess "modprobe scsi_wait_scan" is the better fix. I will add that to the next kernel release.



We have the same dependencies as any other kernel source package - or am I missing something?

- Dietmar
The dependencies are not listed anywhere I can find. I think I have almost all of them, but the compilation stopped on kvm75, unable to create some doc files (kvm.1, if I am not mistaken).
 
Try the following:

# apt-get install build-essential git-core autotools-dev debhelper fakeroot libsdl-dev lintian

I will add the dependencies to the README for the next kernel release (we will release it this week).
 
