Hey,
no, the key for the ZFS is then located on the RAID1 OS partition, which must be decrypted during startup, e.g. with dropbear. Works fine!
Good.
UEFI/EFI
Yes, I have exactly the same two questions. Firstly, how do I get the EFI partition onto the second hard disk? It is there but not mounted. Do I have to do anything else? The server definitely has a UEFI.
Yeah, I intentionally left this out as I do not have a way to test it (not to my liking, as in giving out reliable advice), because even if I ran this in some virtualized environment, the way the UEFI emulation behaves would make it different for me. I can't give you a copy/paste, but if you like experimenting while that system is not in production, please do and give feedback (also for others who find this later).
I think you understand that you can't really RAID that UEFI, because the mdadm is something your firmware has no idea about. There was a superblock version that put it at the end of the partition, which would make it look like a normal FAT partition to the firmware, but I do not see much point going down that route if the grub updates do not pick it up automatically in Debian. Have a quick look at Debian's own notes on this - see [1] below.

So here's the thing. This is why I very originally mentioned it is really nice to be able to have the EFI and boot on something else, like the mirrored internal SLC SD cards that the hypervisor used to live on in many systems. This is what transparent hardware RAID is good for; unfortunately it's not good for much else, since you can't really move it elsewhere transparently, even just to recover data.
You can see where you got the EFI put by looking at where it's mounted: in lsblk, it's the entry with mountpoint /boot/efi - the other is not mounted and is basically an empty partition. You can dd the content over, or you can mkfs.fat on it, mount it separately and copy, which is actually what the hook suggested by the Debian folks in [1] does for you too.

For that to work you of course have to update your /etc/fstab. If you want to see UUIDs you can just run blkid. I am not going to lie, I really prefer PARTLABELs for this. You can even use filesystems' LABELs. Have a look at the format in man 5 fstab. PARTLABEL is what we gave it in the GPT partition table (the sgdisk -c switch); it does not get destroyed if you change the filesystem within. LABEL is what a filesystem holds (if there is one). The neat lsblk -o +PARTLABEL,LABEL,FSTYPE gives you a better idea.

So with that grub hook, every time you run update-grub it will copy the "primary" onto the "secondary" and you can forget about it. There's an issue I can envisage already, though. Suppose your "primary" drive fails, you put in a new one, recover the RAID for the rest and then run your hookish update-grub, but something goes wrong - well, they have rsync --recursive --delete there, so it will trash your good "secondary" EFI too. Same if you accidentally boot the "secondary"; you may want to have a check there (by the UUID, perhaps) in the hook - not irrecoverable, but annoying. In any case, you should be able to do grub-install --efi-directory=/boot/efi.
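The manual mkfs.fat-and-copy route described above might look roughly like this sketch - the device names and mount point are assumptions, so check your own lsblk output first:

```shell
# Hypothetical layout: /dev/nvme0n1p2 is the live ESP mounted at /boot/efi,
# /dev/nvme1n1p2 is the empty ESP on the second disk -- adjust to your system.
mkfs.fat -F 32 /dev/nvme1n1p2            # fresh FAT32 on the unused ESP
mkdir -p /boot/efi2
mount /dev/nvme1n1p2 /boot/efi2
cp -a /boot/efi/. /boot/efi2/            # copy the live ESP's contents over
grub-install --efi-directory=/boot/efi2  # optionally install GRUB there too
```

Then add an /etc/fstab line for /boot/efi2 (by UUID or PARTLABEL) so it is mounted on every boot and the hook has something to copy into.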
Now the hook is just copying the content, but the question is whether your EFI gets to know it can boot either of the two (or you will have to manually switch over - or maybe you prefer to, just so you are sure you never accidentally booted the "secondary", really your call) - so you may want to see what efibootmgr says and tweak it so that both boot options are there (or you manually add one yourself and check if it survives). If in doubt, just man 8 efibootmgr. Another thing worth trying might be dpkg-reconfigure grub-efi-amd64 - if it picks up both mounted ESP partitions that would be even better, but that would be like full service.

[1] https://wiki.debian.org/UEFI#RAID_for_the_EFI_System_Partition
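For the NVRAM side, a sketch of checking and adding the second entry could look like this - the disk, partition number and loader path are assumptions, so verify them against your own layout first:

```shell
# List current boot entries with their device paths:
efibootmgr -v
# Hypothetical: add an entry pointing at the ESP on the second disk
# (here /dev/nvme1n1, partition 2, booting Debian's signed shim):
efibootmgr --create --disk /dev/nvme1n1 --part 2 \
    --label "debian (secondary ESP)" \
    --loader '\EFI\debian\shimx64.efi'
```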
And how would you do it with the ZFS on LUKS? In order for the ZFS to be loaded in Proxmox, the two LUKS partitions must be decrypted with cryptsetup luksOpen when restarting.
Code:sudo cryptsetup luksOpen /dev/nvme1n1p4 block1
How else would you mount/open the two partitions on restart/boot so that ZFS works?
Well, this is what /etc/crypttab is for. Here, again, the best would be to have a look at what you have there now and improvise, but a word of warning: man 5 crypttab shows you what Debian supports, and that's what you get during the initramfs stage. Subsequently, systemd takes over, and it interprets some of the entries in its own way or not at all. Most notably, e.g. the keyscript option cannot be used.

So you can totally have something like os UUID=xxx none luks,discard and that will run just fine, but for the initramfs you can even have os UUID=xxx none discard,luks,keyscript=/boot/luks/myspecialthing.sh. This will not work with systemd later on, however, but you can have a line like zpool-mirrorA-member0 UUID=xxx /somewhere/safe/passphrase luks. You can check the format details in [2], but note that you can also force the initramfs stage by adding the initramfs option described in the Debian version of the same man page.

Now one more thing: as far as the systemd version goes, "The second field contains a path to the underlying block device or file, or ..." - this means that even though you cannot use e.g. Debian's PARTLABEL=..., you absolutely can use /dev/disk/by-..., see also below.

[2] https://man7.org/linux/man-pages/man5/crypttab.5.html
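Putting that together, a hypothetical /etc/crypttab could look like the fragment below - the UUIDs and key file path are placeholders, not values for your system:

```shell
# /etc/crypttab - sketch only
# OS volume: unlocked in the initramfs (e.g. via dropbear); keyscript
# would only work at the initramfs stage, not under systemd:
os                     UUID=xxx  none                        luks,discard,initramfs
# ZFS mirror members: unlocked later by systemd, using a key file that
# lives on the already-decrypted OS volume:
zpool-mirrorA-member0  UUID=xxx  /somewhere/safe/passphrase  luks
zpool-mirrorA-member1  UUID=xxx  /somewhere/safe/passphrase  luks
```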
Can I do the cryptsetup luksOpen without specifying the "mount point"?
Like this?
Code:sudo cryptsetup luksOpen nvme1n1p4
There's no mount point; there's the name you want to give to the unlocked block device. Check man cryptsetup-open - the format is cryptsetup open --type <device_type> [<options>] <device> <name> - but maybe I failed to guess why you were asking this way.

Two more questions:
- Can I run the cryptsetup luksOpen (on boot) as described above using UUIDs and without specifying a mount point? Like this:
Code:sudo cryptsetup luksOpen UUID1
Code:sudo cryptsetup luksOpen UUID2
You can give <device> (format above) as /dev/disk/by-uuid/.. or by-partuuid, but you can also do /dev/disk/by-partlabel/... Did I say I liked PARTLABELs?

- Should I specify the UUIDs of the two partitions when creating the ZFS pool? (If so, how do I get them out?) Like this:
Code:zpool create -f -o ashift=12 data mirror UUID1 UUID2
You could, but I am normally fine with the names I gave to the devices when LUKS opening them.
You realise the UUID inside the LUKS container is different from the one outside it, yeah?
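As a sketch of what the whole sequence could look like by PARTLABEL - the partlabels zfs-member0/zfs-member1 and the names block1/block2 are made-up, so substitute your own:

```shell
# Hypothetical PARTLABELs set earlier with sgdisk -c; adjust to yours.
cryptsetup open /dev/disk/by-partlabel/zfs-member0 block1
cryptsetup open /dev/disk/by-partlabel/zfs-member1 block2
# Build the pool from the opened mappings, not the raw encrypted partitions:
zpool create -f -o ashift=12 data mirror /dev/mapper/block1 /dev/mapper/block2
```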