I currently boot Proxmox via GRUB using LVM on a single SSD:
Bash:
sdc                              8:32   0 232.9G  0 disk
├─sdc1                           8:33   0  1007K  0 part
├─sdc2                           8:34   0   512M  0 part  /boot/efi
├─sdc3                           8:35   0   500M  0 part  /boot
└─sdc4                           8:36   0 231.9G  0 part
  └─cryptlvm                   252:0    0 231.9G  0 crypt
    ├─pve-root                 252:1    0    30G  0 lvm   /
    ├─pve-swap                 252:2    0     8G  0 lvm   [SWAP]
    ├─pve-data_tmeta           252:3    0   100M  0 lvm
    │ └─pve-data-tpool         252:5    0   100G  0 lvm
    │   ├─pve-data             252:6    0   100G  1 lvm
    │   ├─pve-vm--104--disk--0 252:7    0     8G  0 lvm
    │   ├─pve-vm--303--disk--0 252:8    0    32G  0 lvm
    │   └─pve-vm--304--disk--0 252:9    0    32G  0 lvm
    └─pve-data_tdata           252:4    0   100G  0 lvm
      └─pve-data-tpool         252:5    0   100G  0 lvm
        ├─pve-data             252:6    0   100G  1 lvm
        ├─pve-vm--104--disk--0 252:7    0     8G  0 lvm
        ├─pve-vm--303--disk--0 252:8    0    32G  0 lvm
        └─pve-vm--304--disk--0 252:9    0    32G  0 lvm
(Note: I have modified the default install to add LUKS encryption)
I recently bought two new NVMe drives which I would like to run as a ZFS mirror and use as the root pool for the Proxmox OS (eventually removing the old 256G SSD that currently hosts the OS, /dev/sdc above).
I've been looking around a bit and came across Debian Bookworm Root on ZFS, which seems like an excellent guide full of detailed instructions. I noticed that they create two pools: rpool (for root and all other datasets, equivalent to the pve LVM VG) and bpool (for /boot, to hold the initrd & co). By contrast, the Proxmox wiki pages ZFS on Linux and Host Bootloader describe using proxmox-boot-tool to keep the ESPs in sync.
Searching didn't turn up any explanation for choosing one way over the other. If one disk fails entirely, no amount of ZFS is going to enable it to boot, as the UEFI firmware first needs to find a single drive's EFI partition, which then has to contain a bootloader that speaks ZFS. But at that point it may as well just look at the initrd within its own partition.
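For reference, my reading is that the guide's two-pool layout boils down to something like the sketch below. This is heavily abbreviated: the device names, partition numbers and the handful of properties shown are placeholders from my notes, not the guide's exact invocation (it sets quite a few more options).
Bash:
# Sketch only: nvme0n1/nvme1n1 and the partition numbers are assumptions,
# and the guide sets more properties than shown here.

# bpool: small pool for /boot, limited to feature flags GRUB can read
zpool create -o ashift=12 -o compatibility=grub2 \
    -O canmount=off -O mountpoint=/boot \
    bpool mirror /dev/nvme0n1p3 /dev/nvme1n1p3

# rpool: everything else (root filesystem and all other datasets)
zpool create -o ashift=12 \
    -O canmount=off -O mountpoint=/ \
    -O compression=lz4 -O acltype=posixacl -O xattr=sa \
    rpool mirror /dev/nvme0n1p4 /dev/nvme1n1p4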
My question is then: Why use a boot pool at all? So that you don't have to sync the initrd & co to two partitions? And for doing this on Proxmox specifically: Should I thus avoid the boot pool part of the guide and instead align with how Proxmox sets things up with proxmox-boot-tool? If anyone has a bit of insight on the pros & cons of the two approaches I'd be keen to learn more. I'm sure I can make both of them work, but would like to understand the tradeoffs beforehand.
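For context, my understanding of the proxmox-boot-tool route on the new mirror is roughly the following (again just a sketch; the nvme device names and ESP partition numbers are placeholders for whatever layout I end up with):
Bash:
# Sketch only: device names and partition numbers are placeholders.
# Register an ESP on each of the two disks so the tool keeps both up to date:
proxmox-boot-tool format /dev/nvme0n1p2
proxmox-boot-tool format /dev/nvme1n1p2
proxmox-boot-tool init /dev/nvme0n1p2
proxmox-boot-tool init /dev/nvme1n1p2

# Check which ESPs are registered and re-sync kernels/initrds onto all of them:
proxmox-boot-tool status
proxmox-boot-tool refresh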