upgrade 1.3 -> 1.8 goes wrong

Erwin123

Member
May 14, 2008
I'm upgrading (luckily a test server) from PVE 1.3 to 1.8 with kernel 2.6.32.
I followed the steps on the download page.

The server uses a 3ware RAID card with 2x SATA drives.
Everything was running fine before the reboot.

I can't make a screenshot, so I'm posting a photo of the error:


I searched the forum but did not find a solution.
Does anyone know what is going wrong here and how to fix it?

This was a test because I want to upgrade our entire (production hosting) cluster, so this error gives me the creeps a bit ;)

Thanks in advance!
 
Hi,
perhaps the devices were renamed? If I'm right, in PVE 1.3 the /boot partition was referenced in fstab directly as /dev/sda1. If your device changed to sdb, /boot isn't reachable.

Can you boot with control-d and look with "blkid" and "fdisk -l"? You can use the UUID instead of /dev/sda1 for /boot.
Also check the BIOS settings for the device order.
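Something like this should show which device currently holds the boot partition and its UUID (the device names are only examples, take whatever your system actually shows):
Code:
# list all disks/partitions to see where the ext3 boot partition ended up
fdisk -l
# print filesystem type and UUID of the candidates
blkid /dev/sda1 /dev/sdb1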

Udo
 
Hi Udo,

Thank you for your help, much appreciated.
Do I understand correctly that I have to edit fstab?
I ran fdisk -l and this was the output:



Does this tell me what to change /dev/sda1 to in the fstab?
 
I have no idea what happened, but I changed the boot order in the BIOS to let the RAID card boot first instead of the CD-ROM, and that seems to solve it.
I don't understand why this works; it shouldn't have any influence, should it?
What did the upgrade change that caused this behaviour?
 
Hi,
many things can change the device order:
the BIOS, external storage, RAID configs (which raid set comes first...).
The best way is to use the UUID (or LVM names) instead of fixed device entries.
Look with "blkid /dev/sdb1" (or now "blkid /dev/sda1"?) for the UUID and use it in fstab like this:
Code:
UUID=7405fd82-0913-4274-9622-267fb6166d52 /boot ext3 defaults 0 1

But it's always a good idea to put the system disk in the first place; otherwise the grub boot loader can be written to another disk. Normally that's no problem, but if you later swap that disk out you may suddenly have no boot loader...
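If that ever happens, reinstalling the boot loader onto the disk the BIOS actually boots from should be enough, roughly like this (assuming that disk is /dev/sda):
Code:
grub-install /dev/sda   # write the boot loader to the MBR of the first disk
update-grub             # regenerate the grub configuration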

Udo
 
I understand, the devices were renamed because the boot order changed...
Never noticed that before.
Thanks, I learned a couple of things :)
 
I tested a little more and still don't quite get it; I hope someone can explain a bit more.

The problem apparently appears when I attach a USB drive to the server.
The drive is not in fstab because I usually mount it manually.
If I remove the USB drive the server boots normally; with the disk attached (again, not listed in fstab) things go wrong with the above message.

fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
/dev/sda1 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

The part I don't understand:
If I press ctrl-D at the message (to continue) the system loads normally... (didn't try that before).
I can even enter the USB drive at its mount point /mnt/usbdrive, although it does not show up in df -h.
Also, why did the system boot while /boot could not be mounted?

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve-root 38G 24G 13G 66% /
tmpfs 499M 0 499M 0% /lib/init/rw
udev 10M 572K 9.5M 6% /dev
tmpfs 499M 0 499M 0% /dev/shm
/dev/mapper/pve-data 105G 27G 79G 25% /var/lib/vz

Forgive me if my questions are stupid; I guess I'm overlooking something obvious, just trying to learn here ;)
 
The part I don't understand:
If I press ctrl-D at the message (to continue) the system loads normally... (didn't try that before).
Linux tried to mount all filesystems according to fstab and fails at /boot, because sda is in this case your USB disk and not your boot partition. Or better said: it can't mount sda1 because the first partition on your USB disk does not contain an ext3 filesystem!
Also, why did the system boot while /boot could not be mounted?
Because at this point /boot is not necessary. /boot is read by grub (kernel and initrd); after switching to the real root (pve-root), /boot is only needed for updates to the kernel or the grub config.
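So a missing /boot only bites you later, e.g. at the next kernel update. A rough sketch of the check I would do before updating (assuming /boot has a valid fstab entry):
Code:
# make sure the real boot partition is mounted before installing a new kernel,
# otherwise kernel and initrd end up on the root filesystem instead of /boot
mount | grep -q ' /boot ' || mount /boot
apt-get update && apt-get dist-upgrade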
Use my hint with the UUID from the posting before and all should run. But you have a strange BIOS if a USB disk gets higher priority than the internal disks (or RAID volumes)...
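For completeness, with the fstab you posted only the /boot line changes; the UUID below is just the example from my earlier post, use the one blkid prints for your real boot partition:
Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=7405fd82-0913-4274-9622-267fb6166d52 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0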

Udo
 
Thanks Udo,

I experimented some more and indeed using the UUID solves it all.
Thank you very much!