node down after upgrade

Erwin123

Member
May 14, 2008
Tonight I upgraded all our nodes from Proxmox VE 1.3 to 1.8, and the kernel from 2.6.24 to 2.6.18.

One of the nodes, which contains all our backups, won't boot anymore.
Because of the large disk size it has two LUNs. No matter which one I select in the BIOS, it won't boot.
I just get a black screen with a blinking cursor.

I tried booting from the CD and typing 'debug', but I have no idea what I can do from there to get things running.

The weird thing is we had problems with all our servers that have USB drives attached. They booted fine with 1.3 and the old kernel, but to be able to boot with the new kernel I had to physically detach the drives from the servers. Not sure if that's related to this problem though.

Is there anything I can do, or do I have to consider all backups lost and reinstall?
Thanks!

p.s. I will open a ticket with the Proxmox team (we do have KVM access), but I understand they won't be handling those before Monday, and I'd rather have this fixed as soon as possible.
 
Hi Erwin,
this looks like a GRUB problem (disk order or something else).
You can try booting a live CD and running grub-install (from a chroot, or with the right root parameter). Also take a look at the fstab - in old installations the /boot partition was assigned by device name (like /dev/sda1). You can switch to UUIDs...
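A minimal sketch of that procedure, assuming the root filesystem is on /dev/sda1 and the boot disk is /dev/sda (adjust both to your layout):

    # from the live CD shell:
    mount /dev/sda1 /mnt
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt
    grub-install /dev/sda    # rewrite GRUB to the MBR of the boot disk
    blkid                    # note the UUIDs of your partitions
    exit

In /etc/fstab (inside the chroot) you would then replace device-name entries like /dev/sda1 with UUID=<uuid-from-blkid>, so the mounts no longer depend on the BIOS disk order.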

Udo
 
Hi Udo,

Thanks.
We eventually solved it.
I already used UUIDs after previous advice from you; that was not the problem.

We booted with the Proxmox debug option.
Then we tried to reinstall GRUB. After that we got the menu on boot instead of just the cursor, but the system would not load any kernel because of 'missing or damaged partition tables'.
After we changed (hd1,0) to (hd0,0) in GRUB things started working a bit better. Some more fiddling with the fstab and all is running well now.
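For reference, on the GRUB legacy versions shipped back then that change lives in /boot/grub/menu.lst; roughly what the fixed entry looks like (kernel version and UUID are placeholders, and if /boot is its own partition the paths drop the /boot prefix):

    # root was (hd1,0); GRUB now sees the boot LUN as the first disk
    title   Proxmox VE
    root    (hd0,0)
    kernel  /boot/vmlinuz-<version> root=UUID=<root-uuid> ro
    initrd  /boot/initrd.img-<version>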
I have no idea how this mess was created by just updating (actually downgrading) the kernel.
Must be something exotic in the BIOS of our Supermicro servers.
 
Probably you added the second RAID volume and never rebooted afterwards? I mean, you "introduced" this problem when adding the second RAID volume, not with the kernel upgrade. That could explain the GRUB issue.
 
That's probably it, indeed.
 