Hi.
We run our Proxmox hosts in ZFS RAID10 (on SSDs, not that that's especially relevant).
Typically, I install to a ZFS stripe from the Proxmox installer. This is nice and simple - however, afterwards only one disk has an EFI/BIOS boot partition (and is thus the only bootable disk - SPOF, anyone?), and the partitions are laid out differently across the disks (the ZFS member/overflow partitions have different start/end blocks).
This seems like an obvious improvement/feature request for the installer, but it's relatively simple to address manually.
I lay out a (currently unused) disk the way I want (e.g. with three partitions: an EFI/BIOS boot partition as #1, the ZFS member as #2 and the ZFS overflow as #9), and then (a rough command sketch follows the list):
- sgdisk clone the disk layout to any others, and regenerate the UUIDs
- "zpool replace" the original disk(s) with the newly-frobbed ones (actually the ZFS member partition, rather than the raw disk) to get them into the pool, and wait for the resilver to complete
- "zpool labelclear" the (now) unused disks,
- sgdisk clone/regen again (targeting the newly unused disks),
- "zpool attach" them - again the ZFS member partition, rather than the raw disk - back into the pool as mirrors.
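Roughly, the commands look like this (a sketch only - the device names, the pool name "rpool", and the partition sizes/type codes below are illustrative placeholders rather than exact values from my hosts):
Code:
# assume: sda+sdb are the original installer disks in the pool, sdc+sdd are unused,
# and the pool is Proxmox's default 'rpool'

# lay out the template disk: #1 boot, #2 ZFS member, #9 overflow/reserved
sgdisk --zap-all /dev/sdc
sgdisk -n1:2048:4095 -t1:EF02 /dev/sdc   # small BIOS boot partition for grub
sgdisk -n9:-8M:0     -t9:BF07 /dev/sdc   # 8 MB reserved/overflow partition at the end
sgdisk -n2:0:0       -t2:BF01 /dev/sdc   # ZFS member partition fills the remainder

# clone the layout to the other unused disk and regenerate its GUIDs
sgdisk --replicate=/dev/sdd /dev/sdc     # copy sdc's partition table onto sdd
sgdisk -G /dev/sdd                       # randomise disk/partition GUIDs on the copy

# swap the new member partitions in for the originals, wait for the resilver
zpool replace rpool /dev/sda2 /dev/sdc2
zpool replace rpool /dev/sdb2 /dev/sdd2
zpool status rpool

# clear the stale ZFS labels from the now-unused disks, clone/regen the layout onto them
zpool labelclear -f /dev/sda2
zpool labelclear -f /dev/sdb2
sgdisk --replicate=/dev/sda /dev/sdc
sgdisk -G /dev/sda
sgdisk --replicate=/dev/sdb /dev/sdc
sgdisk -G /dev/sdb

# attach their member partitions back into the pool as mirrors
zpool attach rpool /dev/sdc2 /dev/sda2
zpool attach rpool /dev/sdd2 /dev/sdb2
(After that, grub-install goes onto every disk so any one of them can boot the host.)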
This has worked well for 18 months or so.
Today I updated a canary from the pve-no-subscription repo; because there were grub changes, I went to do a grub-install to all the disks.
I note that grub from the 2.02-pve5 packages gives the following error:
Code:
root@k003:/home/andy# grub-install --target=i386-pc /dev/sda
Installing for i386-pc platform.
grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet..
grub-install: error: filesystem `zfs' doesn't support blocklists.
Targeting the EFI BIOS boot partition directly gives:
Code:
root@k003:/home/andy# grub-install --target=i386-pc /dev/sda1
Installing for i386-pc platform.
grub-install: error: unable to identify a filesystem in hostdisk//dev/sda; safety check can't be performed.
Reverting to grub-2.02-pve4 restores the expected behaviour and makes grub consistent with how it behaves elsewhere:
Code:
root@k003:/home/andy# grub-install --target=i386-pc /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
root@k003:/home/andy# grub-install --target=i386-pc /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.
root@k003:/home/andy# grub-install --target=i386-pc /dev/sdc
Installing for i386-pc platform.
Installation finished. No error reported.
root@k003:/home/andy# grub-install --target=i386-pc /dev/sdd
Installing for i386-pc platform.
Installation finished. No error reported.
Would you like a bugzilla filed for this?