Tutorial - Help post for upgrading from Proxmox 5 to Proxmox 6 on OVH.

Thomas P.

Hello,

As I had some difficulty upgrading from Proxmox 5 to 6 on a dedicated OVH server (using the standard OVH Proxmox v5 template), I am posting here how I got it to work, for people having the same problems.

The upgrade itself went fine, but after rebooting the server there was no more GUI or SSH access; the server was blocked. IPMI was not working either, so I rebooted into rescue mode using "rescue64-pro".

In the OVH rescue mode, mount the filesystems:

zpool import -R /mnt rpool                            # import the root pool under /mnt
mount -t proc /proc /mnt/proc
mount -t sysfs /sys /mnt/sys
mount --bind /dev /mnt/dev
mount --bind /run /mnt/run
mount --bind /etc/resolv.conf /mnt/etc/resolv.conf    # working DNS inside the chroot
modprobe efivars                                      # EFI variables support
chroot /mnt


Then edit the ZFS config:
nano /etc/default/zfs
and set ZFS_INITRD_PRE_MOUNTROOT_SLEEP to 4 (i.e.: ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4')
(taken from here: https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks#Boot_fails_and_goes_into_busybox)
(in my grub config, rootdelay was already set to 15, so I had no need to change it)
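If you prefer to make this change without an editor, here is a minimal sketch using sed (it assumes a ZFS_INITRD_PRE_MOUNTROOT_SLEEP line, possibly commented out, already exists in the file, so check the result with the grep afterwards):

Code:
# set the pre-mount sleep to 4 seconds in /etc/default/zfs
sed -i "s/^#\?ZFS_INITRD_PRE_MOUNTROOT_SLEEP=.*/ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4'/" /etc/default/zfs
# verify
grep ZFS_INITRD_PRE_MOUNTROOT_SLEEP /etc/default/zfs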

Get the list of installed kernels:
dpkg --list | grep pve-kernel
In my case the latest one was "pve-kernel-5.0.21-5-pve".
Run (replace the kernel version with yours):
update-initramfs -k 5.0.21-5-pve -u
You may get a "No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync." message; I think this is normal, but I opened another post about it: https://forum.proxmox.com/threads/u...-do-we-must-do-a-pve-efiboot-tool-init.60475/
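For reference, a small sketch that picks the newest installed pve-kernel automatically instead of copying the version by hand (the dpkg parsing is an assumption about the package naming, so check the echoed version before relying on it):

Code:
# take the highest installed pve-kernel version and rebuild its initramfs
KVER=$(dpkg --list | grep -oP 'pve-kernel-\K[0-9][^ ]*-pve' | sort -V | tail -n 1)
echo "newest kernel: $KVER"
update-initramfs -k "$KVER" -u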

Unmount all filesystems:

umount /mnt/proc
umount /mnt/sys
umount /mnt/dev
umount /mnt/run
umount /mnt/etc/resolv.conf
zpool export rpool


In the OVH manager, set the boot mode back to booting from the hard disk, and reboot the server.

After that, the server booted fine into Proxmox.

Hope it helps.
 
Thank you for sharing your experience!

If you want to make this thread more visible, you can set the prefix of the thread to "Tutorial" by editing your post. It will then carry a symbol in the thread overview. Additionally, filtering for prefixes is possible.
 
I came across the exact same problem when upgrading Proxmox from 5 to 6 on an OVH SoYouStart server. I can confirm the solution in the tutorial works like a charm.

Thank you Thomas for sharing this useful tutorial!
 
Hello,

I have a problem using this method on a SoYouStart E5-SAT-2-64 server. I installed the SYS Proxmox 5 ZFS template and upgraded to Proxmox 6 following the official Proxmox guide step by step.

When I followed the process of Thomas P., I could not export my rpool with the last command:

Code:
root@rescue:/# zpool export rpool
cannot export 'rpool': pool is busy

Same result with the -f argument.

Is this GRUB line normal? (By default, rootdelay appears twice in my grub file, which makes me doubt it.)

GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs rootdelay=10 vga=normal nomodeset rootdelay=15"
 
When I followed the process of Thomas P., I could not export my rpool with the last command:

Code:
root@rescue:/# zpool export rpool
cannot export 'rpool': pool is busy

Same result with the -f argument.

Maybe because you're still in chroot?
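A minimal sketch of the teardown order once you are done, assuming the mounts from the first post (the important part is the "exit" before unmounting and exporting):

Code:
exit                            # leave the chroot first
umount /mnt/etc/resolv.conf
umount /mnt/run
umount /mnt/dev
umount /mnt/sys
umount /mnt/proc
zpool export rpool              # should no longer complain that the pool is busy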

Is this GRUB line normal? (By default, rootdelay appears twice in my grub file, which makes me doubt it.)

GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs rootdelay=10 vga=normal nomodeset rootdelay=15"

My setup has two rootdelay entries too. It doesn't cause any problems.
 
Thanks for these tips. As COVID-19 gives me some free time and I have a Proxmox server available, I did this test.

I provide here my way to do the upgrade, applying the fix BEFORE upgrading:
  1. Edit the ZFS config:
    nano /etc/default/zfs
    and change ZFS_INITRD_PRE_MOUNTROOT_SLEEP from 0 to 4 (i.e.: ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4').
    I set mine to 5.
  2. Rebuild the initramfs for all kernels:
    update-initramfs -u -k all
  3. You may get a "No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync." message; I think this is normal, but I opened another post about it: https://forum.proxmox.com/threads/u...-do-we-must-do-a-pve-efiboot-tool-init.60475/

Now reboot your system once, then proceed with the upgrade from v5 to v6 as usual:
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

As Matteli asked below ( https://forum.proxmox.com/threads/t...romox-5-to-proxmox-6-on-ovh.60476/post-307321 ): during the upgrade from 5 to 6, the system may ask whether you want to replace the /etc/default/zfs file with the package maintainer's version.
Of course, choose to keep yours, so that the delay stays at 4 (or 5).

As of today I haven't seen any other difference between the two files besides the delay we modified in step 1.
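If you want to double-check that afterwards, here is a minimal sketch (assuming dpkg kept the maintainer's proposed file next to yours with a .dpkg-dist suffix, which it normally does when you choose to keep your own version):

Code:
# compare the kept config with the maintainer's proposed one, if dpkg saved it
diff -u /etc/default/zfs /etc/default/zfs.dpkg-dist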

Regards.
Johann
 
Thanks for these tips. As COVID-19 gives me some free time and I have a Proxmox server available, I did this test.

I provide here my way to do the upgrade, applying the fix BEFORE upgrading:
  1. Edit the ZFS config:
    nano /etc/default/zfs
    and change ZFS_INITRD_PRE_MOUNTROOT_SLEEP from 0 to 4 (i.e.: ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4').
    I set mine to 5.
  2. Rebuild the initramfs for all kernels:
    update-initramfs -u -k all
  3. You may get a "No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync." message; I think this is normal, but I opened another post about it: https://forum.proxmox.com/threads/u...-do-we-must-do-a-pve-efiboot-tool-init.60475/


Now reboot your system once, then proceed with the upgrade from v5 to v6 as usual:
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

Regards.
Johann
Just one thing: when you finally do the upgrade, don't install the package maintainer's version of the /etc/default/zfs file, because that would undo the ZFS config change.
 
Just one thing: when you finally do the upgrade, don't install the package maintainer's version of the /etc/default/zfs file, because that would undo the ZFS config change.

Of course not, I keep my modified version, as that line is the only difference I can see.
 
Thanks for sharing.
I just noticed that SoYouStart now provides only a non-ZFS Proxmox 5 template. Does anyone have more information about that? Will OVH abandon Proxmox?
 
Thanks for sharing.
I just noticed that SoYouStart now provides only a non-ZFS Proxmox 5 template. Does anyone have more information about that? Will OVH abandon Proxmox?

When I asked them earlier this year, they told me they were staying on Proxmox v5 with ZFS due to some UEFI boot issue on their servers.
But maybe they didn't know about this "timeout" post.
 
A point for improvement:

Before unmounting all the filesystems, we must exit the chroot.

Just type "exit" in the shell.
 
When I asked them earlier this year, they told me they were staying on Proxmox v5 with ZFS due to some UEFI boot issue on their servers.
But maybe they didn't know about this "timeout" post.
Yes, I had a similar answer: "hardware compatibility problem", which kind of worried me because it felt like they didn't know what they were talking about.
But yesterday I got a slightly different answer: "you can select a premium OVH offer", so I wonder if they are using this as an excuse to make us pay more.
 
I just noticed that SoYouStart now provides only a non-ZFS Proxmox 5 template. Does anyone have more information about that? Will OVH abandon Proxmox?
When I asked them earlier this year, they told me they were staying on Proxmox v5 with ZFS due to some UEFI boot issue on their servers.

Generally, Proxmox VE 6 should not have problems with ZFS on root under UEFI anymore, because of systemd-boot.
There is a section about ZFS root file system options in the reference documentation. It briefly mentions the host bootloader section, which might be worth a read in your case.

You can quickly give this a try virtualized: create a VM with the UEFI option (see the CLI sketch after this list). Then there are two possible outcomes:
  • install the Proxmox VE 5 ISO image on ZFS root => does not boot after installation
  • install the Proxmox VE 6 ISO image on ZFS root => does boot after installation
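A minimal CLI sketch of such a test VM (the VM ID, storage names and ISO file name are placeholders; adjust them to your setup):

Code:
# create a test VM with UEFI (OVMF) firmware, an EFI vars disk and a 32G system disk
qm create 900 --name pve-uefi-test --memory 4096 --cores 2 \
  --bios ovmf --efidisk0 local-zfs:1 \
  --scsi0 local-zfs:32 --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/proxmox-ve_6.1-1.iso
qm start 900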

It is also strongly recommended to upgrade to Proxmox VE 6, as version 5 reaches end of life in July.

I just updated an outdated wiki article. Please report if you see something official looking that states something different.
 
My personal opinion is that it's not a technical issue at all: SoYouStart is the "low cost" offer, and I suspect OVH wants us to migrate to a more expensive offer, even if the physical machines are the same. The tech support told me that Proxmox 6 is already available on OVH but not on SYS.
 
Hello,

Just to add some information for those migrating from Proxmox 6 to 7 on OVH.
I just switched to Proxmox 7 on OVH and want to add two tricks to this tutorial.

1-
I kept the ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4' setting.
When dist-upgrading to v7, you are asked whether to overwrite the grub config; you can select yes (so you get an up-to-date config), but after the dist-upgrade, add this line back at the end of /etc/default/zfs before rebooting.
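For example, something like this right after the dist-upgrade (a sketch; it only appends the line if it is not already there, then rebuilds the initramfs so the change is actually picked up, as in the earlier posts):

Code:
grep -q '^ZFS_INITRD_PRE_MOUNTROOT_SLEEP=' /etc/default/zfs \
  || echo "ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4'" >> /etc/default/zfs
update-initramfs -u -k all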

2-
I hit the network problem described in the upgrade guide (section "Check Linux Network Bridge MAC").
I had no more network, so I had to start a remote KVM console to get back to a shell.
I added the line "hwaddress xx:xx:xx:xx:xx:xx" to vmbr0 in /etc/network/interfaces, where xx:xx:xx:xx:xx:xx must be the same as the MAC of eth0.
After that, running "ifdown vmbr0; ifup vmbr0;" brought the network back ;) You can reboot as well.
(I tried to install ifupdown2, but after installing it, running apt update gave an error saying it was not completely installed, and I found no way to install it completely, so I decided to keep ifupdown.)
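For illustration, here is how the relevant part of /etc/network/interfaces might look (the addresses and the MAC below are placeholders; read the real MAC from eth0 first):

Code:
# get the MAC of the physical NIC
ip link show eth0 | grep ether

# /etc/network/interfaces (excerpt)
auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10
        netmask 255.255.255.0
        gateway 203.0.113.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # same MAC as eth0
        hwaddress aa:bb:cc:dd:ee:ff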

Now Proxmox 7 is working fine.

Hope this also helps other people migrating from Proxmox 6 to Proxmox 7 on a dedicated OVH server.

For additional help, you can check this very good post (written in French by someone else) which gives a lot of advice on the process, so I put the link here: https://blog.zwindler.fr/2021/07/19/les-soucis-que-jai-rencontre-en-upgradant-proxmox-ve-7/

Best regards.
 
Thanks Thomas P.
I went the difficult way when upgrading from PVE 6 to PVE 7 on OVH, as the upgrade also added net.ifnames=0.

1°) new machine, so the default OVH PVE 6 install

2°) log in as root and follow the white rabbit (a.k.a. the PVE 6 to 7 wiki)

=> failed at reboot

In fact the upgrade added
GRUB_CMDLINE_LINUX_DEFAULT="nosplash text biosdevname=0 net.ifnames=0 console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0 systemd.show_status=true"

which messes up the network card names (or something around that).
I had to add the hwaddress as requested by the wiki, but also change the grub cmdline as follows (removed net.ifnames=0):

GRUB_CMDLINE_LINUX_DEFAULT="nosplash text biosdevname=0 console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0 systemd.show_status=true"

then run update-grub.

Of course, if using ZFS, don't forget the timeout described before.
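A sketch of that change from the shell (the sed expression assumes net.ifnames=0 appears literally, preceded by a space, in /etc/default/grub, so verify with the grep before rebooting):

Code:
# drop net.ifnames=0 from the default kernel command line, then regenerate grub.cfg
sed -i 's/ net\.ifnames=0//' /etc/default/grub
grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub
update-grub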
 
Thanks Thomas P.
I went the difficult way when upgrading from PVE 6 to PVE 7 on OVH, as the upgrade also added net.ifnames=0.

1°) new machine, so the default OVH PVE 6 install

2°) log in as root and follow the white rabbit (a.k.a. the PVE 6 to 7 wiki)

=> failed at reboot

In fact the upgrade added
GRUB_CMDLINE_LINUX_DEFAULT="nosplash text biosdevname=0 net.ifnames=0 console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0 systemd.show_status=true"

which messes up the network card names (or something around that).
I had to add the hwaddress as requested by the wiki, but also change the grub cmdline as follows (removed net.ifnames=0):

GRUB_CMDLINE_LINUX_DEFAULT="nosplash text biosdevname=0 console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0 systemd.show_status=true"

then run update-grub.

Of course, if using ZFS, don't forget the timeout described before.
This worked for me on an OVH server.
 
