How to install Proxmox with a ZFS RAID1 mirror setup on OVH?

Oct 10, 2019
Hi everyone,

I've been using Proxmox on an OVH bare metal server with two SSDs.

I recently did some testing in my home lab, and the most important feature for me is proxmox-boot-tool, which allowed my system to boot from the second drive when I intentionally corrupted the first drive's boot sector.
It handled the failure gracefully, and recovering from everything was easy by following the docs.
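For reference, recovering was roughly the procedure the docs describe for a damaged bootable mirror member (a hedged sketch; /dev/sda, /dev/sdb and rpool are placeholders for my lab disks/pool):

Code:
# re-create and re-register the ESP on the repaired disk, then re-sync kernels
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool refresh
# if the whole disk had been replaced, the ZFS partition would also need resilvering:
# sgdisk /dev/sdb -R /dev/sda && sgdisk -G /dev/sda
# zpool replace -f rpool <old-part3> /dev/sda3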

However, the OVH Proxmox installer doesn't offer the default ZFS RAID1 mirror setup. The custom installation wizard allows partition customization, but I don't know how it will play out. I tried using the Proxmox ISO installer via IPMI, but it's super slow, likely due to latency, and I don't want to keep the server down for an extended period.

Additionally, setting up the specifics of OVH manually, such as network and boot configurations, can be problematic; I would avoid it if possible.

Has anyone managed to do a clean Proxmox install with a ZFS RAID1 mirror setup on OVH? If so, could you please share the steps or provide any guidance? I'm looking for a way to achieve the same setup I have in my home lab on my OVH server, whether through the default ZFS RAID1 setup or by using the custom partitioning option.

Thanks in advance for your help!
 
I am currently at exactly the same point with a new server on OVH. I was expecting OVH's template for PVE 8 to be updated for full ZFS support like the official ISO, but I just noticed it is still the usual md RAID + LVM setup.

So far I have tried this configuration during installation, and it seems to boot correctly.
[screenshot of the OVH custom partitioning layout]


I had to do a zpool upgrade, as reported by zpool status; solved with:

Code:
zpool upgrade XXXXX

My concern is regarding the /boot partition and swap. I guess the first is OK on Linux RAID, and swap has to be on a primary partition.
The result was this:

Code:
gdisk -l /dev/nvme0n1

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1048575   511.0 MiB   EF00  primary
   2         1048576         3145727   1024.0 MiB  FD00  primary
   3         3145728         5242879   1024.0 MiB  8300  primary
   4         5242880        85032959   38.0 GiB    8300  primary
   5        85032960        95272959   4.9 GiB     8300  primary
   6      1875380912      1875384974   2.0 MiB     8300  logical

proxmox-boot-tool status doesn't seem to be auto-configured either:

Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.

So I had to run:

Code:
proxmox-boot-tool init /dev/nvme1n1p1
proxmox-boot-tool refresh

Now:

Code:
cat /etc/kernel/proxmox-boot-uuids

This lists a second UEFI drive for sync (B627-B1DC), which I understand is correct.
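To double-check that both ESPs are registered and kept in sync, proxmox-boot-tool can report its state directly:

Code:
# list registered ESPs and the kernels synced to them
proxmox-boot-tool status
# re-sync after manual changes or kernel updates
proxmox-boot-tool refresh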


It would be great if anyone could assist or show alternative recommended setups for bare-metal PVE 8 partitioning on OVH. I would rather have / (root) on ZFS (or LVM if not) so it can be snapshotted.

I am also a bit lost with ZFS and fstab, as I don't see any ZFS entries there and I am not sure how mounting is handled now.
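If I understand the docs correctly, that is expected: ZFS mounts datasets from their own mountpoint property rather than from /etc/fstab. Something like this (the snapshot name is just an example) seems to be the way to inspect and snapshot it:

Code:
# mountpoints are a dataset property, not fstab entries
zfs get -r mountpoint,canmount rpool
# example snapshot of the root dataset before risky changes
zfs snapshot rpool/ROOT/pve-1@before-upgrade
zfs list -t snapshot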
 
Start the remote console and install using the remote media ISO as if it were a local server. Yes, it's slow. Yes, it requires Java so the remote media doesn't disconnect from time to time. But currently it's the only way to reliably use ZFS on OVH, and with some luck you will only do it once, then update PVE as needed.
 
Start the remote console and install using the remote media ISO as if it were a local server. Yes, it's slow. Yes, it requires Java so the remote media doesn't disconnect from time to time. But currently it's the only way to reliably use ZFS on OVH, and with some luck you will only do it once, then update PVE as needed.
Thanks. We tend to change servers every two years, so I would rather have a quicker installation method under my control, also in case we ever need a quick server reinstall.

How does the ISO partition the disks? I mean... are /boot and / (root) running under a zpool? And swap?
 
Thanks. We tend to change servers every two years, so I would rather have a quicker installation method under my control, also in case we ever need a quick server reinstall.
Okay, so once every two years :) I want full control over the install process and the package versions / customizations that OVH may apply, so I've never used their installer.

How does the ISO partition the disks? I mean... are /boot and / (root) running under a zpool? And swap?
Pick two drives, ZFS as mirror. By default, it will create an "rpool" pool with a dataset for the "local" file storage (OS, configs, backups, etc.) and a "local-zfs" storage for VM disks. The PVE installer does not create swap, as swap on ZFS isn't a good idea [1]. Leave some unused space on each drive and configure swap manually after the installation. The resulting layout looks like this:

Code:
zpool list -v
NAME                                                  SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                                 928G   614G   314G        -         -    17%    66%  1.00x    ONLINE  -
  mirror-0                                            928G   614G   314G        -         -    17%  66.2%      -    ONLINE
    nvme-eui.e8238fa6bf530001001b444a49d29cfa-part3   931G      -      -        -         -      -      -      -    ONLINE
    nvme-eui.e8238fa6bf530001001b444a49a63036-part3   931G      -      -        -         -      -      -      -    ONLINE
    
zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                          614G   285G   104K  /rpool
rpool/ROOT                     229G   285G    96K  /rpool/ROOT
rpool/ROOT/pve-1               229G   285G   229G  /
rpool/data                     385G   285G    96K  /rpool/data

Practice installing PVE in a VM so you can decide how you want to install it on your real server.

The partition schemes are described here:
https://pve.proxmox.com/wiki/Host_Bootloader

It doesn't detail how that "third partition spanning the set hdsize parameter or the remaining space used for the chosen storage type" is used. The installation doc [2] has some more detail, but IMHO it still requires some practice to get an idea of how the storage will end up looking.


[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#zfs_swap
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#installation_installer
 
That was to describe the 512 MB boot partition created on each disk selected within the PVE installer, which holds the kernel and is synchronised (not RAIDed) by proxmox-boot-tool when there is a kernel update.
 
Thanks. I've tried the ISO install and had some issues with the network configuration.
I solved them by changing the bridge-ports in the network config to eno1.
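For reference, the relevant part of /etc/network/interfaces ended up looking roughly like this (the addresses below are placeholders, not the real ones):

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0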

But I am not sure how safe this is with OVH's rescue options.

The ISO install created this:

Code:
Number  Start (sector)    End (sector)  Size        Code  Name
   1              34            2047    1007.0 KiB  EF02
   2            2048         2099199    1024.0 MiB  EF00
   3         2099200      1153433600    549.0 GiB   BF01

Code:
root@m24:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.54G   529G   104K  /rpool
rpool/ROOT        1.43G   529G    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.43G   529G  1.43G  /
rpool/data          96K   529G    96K  /rpool/data
rpool/var-lib-vz   120M   529G   120M  /var/lib/vz


Which I understand means that everything is under one large ZFS pool.

[screenshot of the resulting storage setup]

I still have to reboot and test rescue mode to see if I can access everything.

What I still don't fully understand, likely because of my ignorance of ZFS, is whether that local-zfs is the equivalent of my old LVM thin pool, and not just the "local" /var/lib/vz directory. I mean, currently all our containers and VMs run from thin-provisioned disks, and I just want to be sure this remains the case. I also don't understand why rpool/ROOT and rpool/ROOT/pve-1 are created, but I guess that is irrelevant.
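From what I can tell, a default ZFS install defines the storages roughly like this in /etc/pve/storage.cfg (mine may differ slightly):

Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

So local-zfs seems to be a zfspool storage backed by rpool/data with thin provisioning (sparse), i.e. the ZFS counterpart of the old LVM thin pool, with each guest disk created as a zvol or dataset under rpool/data. Please correct me if I'm wrong.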


I understand one of the advantages of ZFS is the option to share the rpool space among all the underlying datasets, so if that is the case my intention is to have:

rpool
rpool/ROOT --> root
rpool/data --> CT thin-provisioned disks
rpool/var-lib-vz --> just local templates and ISOs

I am also considering creating a striped (RAID0) pool to host all non-critical files for containers, which will not require backups from PVE or snapshots. This would be bind-mounted into each container; in the case of KVM guests I would still need to play with virtio-9p.

quick
quick/cache
quick/tmp
quick/logs

Each of those would have a per-CT-number directory inside that will be bind-mounted (quick/cache/100/), but I am wondering if there would be any advantage in using additional datasets instead.

For backups I still have to decide whether to use standard ext4 partitions (for recovery benefits) or, since as I understand it ZFS pools can also be mirrored, whether to create a pool-disk1 and pool-disk2 that hold some datasets (partitions) and then create quick by striping the two (see the sketch below).
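Something like the following is what I have in mind for the quick pool and the per-container bind mounts (just a sketch; device names and the CT ID 100 are placeholders):

Code:
# striped (raid0) pool across two spare partitions -- placeholder devices
zpool create quick /dev/nvme0n1p5 /dev/nvme1n1p5
zfs create quick/cache
zfs create quick/tmp
zfs create quick/logs
# per-container directory, bind mounted into CT 100
mkdir -p /quick/cache/100
pct set 100 -mp0 /quick/cache/100,mp=/var/cache
# a mirrored pool for backups would be created the same way with "mirror", e.g.:
# zpool create backups mirror /dev/sdX /dev/sdY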
 
OK, just testing the rescue console on OVH, I found another reason to perhaps avoid the ISO installation, or at least avoid keeping / (root) on ZFS...
Learning my way around ZFS, I understand I would need to import my pools and mount them in the rescue console to access those files in case of a major issue that left the server unstable.
Unfortunately, OVH seems to maintain an old Debian 10 rescue disk and I get the error:

Code:
root@rescue-customer-eu (xxxxxxxxxx) ~ # zpool import -f rpool -R /mnt/root-test
This pool uses the following feature(s) not supported by this system:
        com.klarasystems:vdev_zaps_v2
cannot import 'rpool': unsupported version or feature

As far as I understand, these features need to be supported by the kernel module, and pool features cannot be downgraded, so I guess this means I would never be able to access my files from the OVH rescue system.
 
I'm not familiar with OVH, but you should open a support ticket with them and have the ZFS version upgraded to at least 2.2.3 on the rescue disk; that should solve it.

The version they have on there is older than the one the pool was created with.

If you can mount the Proxmox installer ISO as a rescue disk, that should give you the correct version as well.
 
I'm not familiar with OVH, but you should open a support ticket with them and have the ZFS version upgraded to at least 2.2.3 on the rescue disk; that should solve it.

The version they have on there is older than the one the pool was created with.

If you can mount the Proxmox installer ISO as a rescue disk, that should give you the correct version as well.
Yes, I've done that already, but I'm assuming they will not take any notice (it's OVH!).
I tried the ISO rescue option too, but it did not seem to take me to a rescue console and it tried to boot directly. I guess I will have to try with a rescue ISO that supports... ZFS, LVM and Linux RAID in our case, to test!

UPDATE
------------
Tried with Finnix v125, similar result: "pool is formatted with a newer version of ZFS".
 
I tried the ISO rescue option too, but it did not seem to take me to a rescue console and it tried to boot directly.
That's the expected behavior: it just looks for an installed PVE and boots from it (useful if only the bootloader got damaged).
If you need a shell, start the debug installer and type exit to drop to a shell. Then:

Code:
/sbin/modprobe zfs
mkdir /zfspool01
zpool import -f -R /zfspool01 rpool    # or whichever pool has the data you need

Access the data you need. When done:

Code:
cd /
zpool export rpool    # or whichever pool has the data you need
reboot

Ubuntu 24.04 seems to have ZFS v2.2 [1]; it may be able to import PVE 8 ZFS pools (I haven't tested it yet).

[1] https://packages.ubuntu.com/search?keywords=zfs&searchon=names&suite=noble&section=all
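Untested, but from an Ubuntu 24.04 live session it should boil down to something like:

Code:
# inside the Ubuntu 24.04 live environment (untested sketch)
sudo apt update
sudo apt install zfsutils-linux
sudo zpool import -f -R /mnt rpool
# ... access/copy what you need, then:
sudo zpool export rpool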
 
Hi,

I've tried to install Proxmox 8 + ZFS mirror using the ISO file from the OVH IPMI console. The installation went fine, but I got this error after reboot (see attached screenshot): "No root device specified. Boot arguments must include a root= parameter."

Any idea?
 

Attachments

  • Screenshot from 2024-09-06 15-18-35.png
Check whether that OVH server is booting using UEFI or not. You have to use the matching boot type: if using UEFI, you have to boot using the boot option UEFI CDROM instead of just CDROM. Sorry, I can't check the exact names right now.
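A quick way to check from a shell (on the installed system or a rescue/live environment) whether it booted via UEFI:

Code:
# this directory only exists when the system was booted in UEFI mode
ls /sys/firmware/efi
# on a UEFI boot this lists the firmware boot entries
efibootmgr -v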
 
