Recommended way to install Proxmox on ZFS with encryption and auto-decryption

doman18

Currently I have one 120GB SSD for root on LVM plus a separate encrypted /var partition, which also contains the key to unlock my second drive for VMs. /var is encrypted with LUKS2 and a YubiKey. When I reboot the machine I just blindly stick in my YubiKey, wait until it starts to blink, touch it with a finger, and it continues to boot. This works well; no keyboard and screen needed.

But the 120GB disk is starting to die. I need to replace it. But this time I would prefer to use some mirrored RAID, to be able to easily swap a drive in a similar situation in the future.

The easiest way would be to just take some hardware RAID controller and install Proxmox on top of it. But I don't have PCIe on this server, so I need to use software RAID. ZFS is the only viable candidate, I think? AFAIK the Proxmox installer does not have encryption options, so I need to install Debian first. I assume I'll create a separate /var dataset with encryption. But I want to use auto-decryption: YubiKey, key on a pendrive, Clevis & Tang. Which of those will be the most straightforward and stable? (I've read somewhere that Clevis can break during updates.)

Yeah ... welcome to my "rabbit hole" ... :)
 
@fireon I know those, I know the basics. But I asked for some more specific information, like:

1. In the past some people installed Debian on encrypted ZFS (encrypted /) and then installed Proxmox on top of it. Is that still a good method?

2. If the above is not good, it means that only some system folders should be mounted on encrypted datasets. Which ones? /var for sure, /etc/pve/priv probably as well. What else?

3. What is the recommended way to auto-decrypt such datasets on boot without having to provide a password? AFAIK YubiKey HMAC doesn't work with native ZFS encryption (and its "string-write-on-touch" method is quite wonky and not recommended). In the past I was using a systemd service to automount an NFS share with the key; this can easily be changed to mount a pendrive, and it will work for the VM storage. But will it work for system things like /var? I think systemd is not started yet when / is mounted.
I've also seen people using/modifying the /usr/share/initramfs-tools/scripts/zfs script to mount such a pendrive. I mean, there are some ways to do it, but which one is recommended nowadays?

https://www.youtube.com/watch?v=7xOLxCwdi-I

Code:
root@proxmox:~# cat /etc/systemd/system/zfs-load-key.service
[Unit]
Description=Load encryption keys
DefaultDependencies=no
Before=pve-storage.target pvesr.service
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/bash -c '/root/unlockzfs'

[Install]
WantedBy=network-online.target


root@proxmox:~# cat /root/unlockzfs

#!/bin/bash
/usr/bin/mount /mnt/secret
/usr/sbin/zfs load-key -a
# already unlocked and mounted?
if mount | grep -q /zfsraid0/encrypted; then
    echo "ZFS already mounted"
    exit 0
fi
pvesm remove encrypted
zfs mount zfsraid0/encrypted
pvesm add dir encrypted --path /zfsraid0/encrypted --content images,rootdir,vztmpl,iso,snippets --is_mountpoint yes
 
The easiest way would be to just take some hardware RAID controller and install Proxmox on top of it. But I don't have PCIe on this server, so I need to use software RAID. ZFS is the only viable candidate, I think?

mdadm? btrfs? bcachefs?

1. In the past some people installed Debian on encrypted ZFS (encrypted /) and then installed Proxmox on top of it. Is that still a good method?

Have been doing this on LUKS FDE instead, no issues whatsoever, but not using ZFS for the OS drive, to keep my sanity. There's nothing ZFS encryption provides that LUKS does not, and it's not mature [1]. Your replication will not work that way.

2. If the above is not good, it means that only some system folders should be mounted on encrypted datasets. Which ones? /var for sure, /etc/pve/priv probably as well. What else?

Either you encrypt everything or you may as well leave it unencrypted. If you are after performance, you would want to look at SED drives. If you are paranoid, LUKS.

Edit: [1] https://github.com/openzfs/openzfs-docs/issues/494
 
I am a big fan of ZFS. And yes, of course you can install a Debian with LUKS and Proxmox on top.
I generally don't use keys to unlock ZFS, as the data is readable if the cluster is stolen. And yes, I have not yet gotten a YubiKey to work with it.

/var for sure, /etc/pve/priv probably as well. What else?
The backup is double-encrypted and PRIV is automatically wiped in the event of an unplanned shutdown or start of the hosts.
 
3. What is the recommended way to auto-decrypt such datasets on boot without having to provide a password? AFAIK YubiKey HMAC doesn't work with native ZFS encryption (and its "string-write-on-touch" method is quite wonky and not recommended). In the past I was using a systemd service to automount an NFS share with the key; this can easily be changed to mount a pendrive, and it will work for the VM storage. But will it work for system things like /var? I think systemd is not started yet when / is mounted.
I've also seen people using/modifying the /usr/share/initramfs-tools/scripts/zfs script to mount such a pendrive. I mean, there are some ways to do it, but which one is recommended nowadays?

You don't really have to "modify" anything much to have the keys read off a USB drive (if that's all you need): you just refer to them from your crypttab and rebuild the initramfs. If you want it off-server, then either the Tang/Clevis you mentioned, or e.g. dropbear-initramfs.
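To make that concrete, here is a minimal sketch of the keyfile variant; the filename, mount point and crypttab entry below are illustrative placeholders, not something from this thread:

```shell
# Generate a 4 KiB random keyfile and lock down its permissions
# (the name "my.key" is just an example)
dd if=/dev/urandom of=my.key bs=512 count=8 status=none
chmod 0600 my.key

# A hypothetical /etc/crypttab entry pointing at that keyfile on the
# USB/boot partition, plus the initramfs rebuild that makes it effective:
#
#   var_crypt  UUID=<luks-partition-uuid>  /boot/my.key  luks
#
# followed by:  update-initramfs -u -k all
```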
 
@esi_y
mdadm? btrfs? bcachefs?
I've seen many "NO NO" posts about mdadm. Btrfs? I've heard about it, but I don't know much about its encryption or mirroring capabilities.
Have been doing this on LUKS FDE instead, no issues whatsoever, but not using ZFS for the OS drive, to keep my sanity. There's nothing ZFS encryption provides that LUKS does not, and it's not mature [1]. Your replication will not work that way.
Well, I'm looking at ZFS primarily as a tool for mirroring drives, which will let me avoid all those concerns in the future. ZFS encryption is just a bonus for me. Yet I don't know if LUKS on top of a ZFS pool is a viable option; I haven't seen anyone use it that way.
Either you encrypt everything or you may as well leave it unencrypted. If you are after performance, you would want to look at SED drives. If you are paranoid, LUKS.

Edit: [1] https://github.com/openzfs/openzfs-docs/issues/494
So all or nothing? Hmm
What is the benefit of a YubiKey vs. e.g. having keys stored on a USB drive, for a server? The capacitive touch?
None, I believe. The only advantage is that you can't copy a YubiKey. Other than that I don't care which one I use, as long as it lets me decrypt the Proxmox data on boot. If both can do it, then I'll just choose the easier solution.
/etc/pve is a mounted filesystem that has its content actually stored in sqlite /var/lib/pve-cluster/config.db* file(s).
Oh, I see. So /var is enough to encrypt, right?
 
You don't really have to "modify" anything much to have the keys read off a USB drive (if that's all you need): you just refer to them from your crypttab and rebuild the initramfs. If you want it off-server, then either the Tang/Clevis you mentioned, or e.g. dropbear-initramfs.
Wait. A USB drive has to be mounted before you can read its content (the key). And the initramfs does not automount all connected drives, or does it?
 
If the key is not on a USB drive but directly on the server, the data can be read by anyone who steals the cluster. Personally, I prefer a password.

What is the benefit of a YubiKey vs. e.g. having keys stored on a USB drive, for a server? The capacitive touch?
These are available for additional security with a biometric check.
 
Wait. A USB drive has to be mounted before you can read its content (the key). And the initramfs does not automount all connected drives, or does it?

This is going to become a bit of a convoluted thread if you want to explore all the options, but one way to keep it very simple is to have the whole /boot on the USB. While it may not sound attractive, consider that it takes away the not-so-nice requirement of the EFI partition being on the same drive. So you basically have EFI & /boot on a USB stick, and therefore your keys are there too. The thing with crypttab is that there's this WONTFIX bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=618862

It's a bit funny to fiddle with the initramfs to get it to work like you actually want, but it is possible. If you do not want to be dealing with that, just keep /boot on the USB and it works out of the box.

You have to define your threat vectors. For most, this is either standards compliance or, more practically: if you need to RMA a failed drive, you do not want to worry about what data was lying around on it if you can no longer delete it.

I've seen many "NO NO" posts about mdadm.

I am not sure what you are referring to, you may want to drop in some links. The only thing in relation to PVE I am aware of is:
https://bugzilla.proxmox.com/show_bug.cgi?id=5235

This is not mdadm's fault, and it's not a bug (the defaults just do not cater for mdadm).

Btrfs? I've heard about it, but I don't know much about its encryption or mirroring capabilities.

BTRFS got some bad publicity early on with certain RAID (NOT mirror) setups, or with features like quotas. The filesystem, however (unlike ZFS), is not 20+ years old and was not built primarily for spinning drives. I personally only use ZFS for data storage; the ZVOLs are particularly tricky and buggy in my experience. BTRFS does not have "native" encryption; I would prefer LUKS now in any case. Consider that ZFS native encryption is still iffy at best; most importantly, it does NOT encrypt metadata.

Well, I'm looking at ZFS primarily as a tool for mirroring drives, which will let me avoid all those concerns in the future.

Do you even need a copy-on-write filesystem?

Yet I don't know if LUKS on top of a ZFS pool is a viable option; I haven't seen anyone use it that way.

I am not sure what you mean; LUKS under any filesystem or volume manager is a very standard setup. You may look at some older threads to get the idea, e.g.:

https://forum.proxmox.com/threads/proxmox-8-luks-encryption-question.137150/page-2#post-611562

The only advantage is that you can't copy a YubiKey.

Yet if you can carry away the server, you carry it away with that very YubiKey anyhow.

Oh, I see. So /var is enough to encrypt, right?

I really do not worry about these at the filesystem level. If you do not worry about your system messing it up, you can even have LUKS on a raw drive without GPT to begin with.
 
These are available for additional security with a biometric check.

This is a bit off topic and does not concern the server scenario as much, but is nobody from the YubiKey faction ever worried that they have to leave USB ports accepting all sorts of devices to make use of their YubiKey? There are charging cables nowadays with RubberDucky-like capabilities: the "O.MG cable", for instance.

So any security setup that involves a USB device getting plugged in is a bit of a paradox.
 
This is going to become a bit of a convoluted thread if you want to explore all the options, but one way to keep it very simple is to have the whole /boot on the USB. While it may not sound attractive, consider that it takes away the not-so-nice requirement of the EFI partition being on the same drive. So you basically have EFI & /boot on a USB stick, and therefore your keys are there too. The thing with crypttab is that there's this WONTFIX bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=618862
Heh, my laptop with Kubuntu is set up this way. I have two pendrives with the boot partition; one is attached to my keychain and the other is hidden. But I don't remember if the Proxmox installer provides the ability to manually set mount points (as *buntu and other graphical installers do). That's why I didn't even consider it. But if that's possible, then yeah, this is the simplest way to go.

And as for this bug, I'm not sure if I understand it correctly, but I always encrypt only one partition with a password. The rest of the drives or partitions are encrypted with keys stored on this "first" partition. I've always done it this way, so I didn't even know there are other ways to deal with the "multiple password prompts for multiple encrypted volumes" problem.
You have to define your threat vectors. For most, this is either standards compliance or, more practically: if you need to RMA a failed drive, you do not want to worry about what data was lying around on it if you can no longer delete it.
Indeed, that's one of the use cases. Overall I'm most concerned about physical access to my machine, which contains things like Vaultwarden or synced documents with all my personal data. I need to be sure that if someone steals my stuff, or I need to leave it behind in an emergency (I live 100 km from the Ukrainian border), nobody will be able to access them. I can restore them later from backups kept on a cloud server.
I am not sure what you are referring to, you may want to drop in some links. The only thing in relation to PVE I am aware of is:
https://bugzilla.proxmox.com/show_bug.cgi?id=5235
Oh, this will be hard, because those were some single posts on Reddit or Stack Overflow regarding various filesystems. I'll paste some here if I find one. But from what I remember there were some complaints about the stability of this solution during updates? Like, people had to re-create their pools from time to time?
BTRFS got some bad publicity early on with certain RAID (NOT mirror) setups, or with features like quotas.
Do you even need a copy-on-write filesystem?
All I want is the ability to replace a failing drive without having to reinstall or copy-and-adjust the system (which unfortunately I have to do now). And I want the solution to be fairly well supported by Proxmox or Linux overall (no hacking around with exotic setups). That's all I need.
Yet if you can carry away the server, you carry it away with that very YubiKey anyhow.
I don't leave my YubiKey with the server. I just stick it in at reboot time and pull it back out. Otherwise this whole encryption protection wouldn't make sense at all.
 
Heh, my laptop with Kubuntu is set up this way. I have two pendrives with the boot partition; one is attached to my keychain and the other is hidden. But I don't remember if the Proxmox installer provides the ability to manually set mount points (as *buntu and other graphical installers do). That's why I didn't even consider it. But if that's possible, then yeah, this is the simplest way to go.

I am not sure what the options are under the "advanced/expert" install, but I normally do it on top of Debian anyway:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

The setup explored in the forum link earlier even used a Debian LIVE image first:

I am not sure what you mean; LUKS under any filesystem or volume manager is a very standard setup. You may look at some older threads to get the idea, e.g.:

https://forum.proxmox.com/threads/proxmox-8-luks-encryption-question.137150/page-2#post-611562

Comment #17 there.

And as for this bug, I'm not sure if I understand it correctly, but I always encrypt only one partition with a password. The rest of the drives or partitions are encrypted with keys stored on this "first" partition. I've always done it this way, so I didn't even know there are other ways to deal with the "multiple password prompts for multiple encrypted volumes" problem.

It was actually about keyscript parameter support, which systemd does not have. I liked the flexibility (as opposed to a simple keyfile). When it comes to keys, however, the truly paranoid may be better off storing the entire header (with the actual encryption keys) separately altogether:

https://linuxconfig.org/how-to-use-luks-with-a-detached-header
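Sketched out (untested here; the device name and paths below are made up), the detached-header variant looks roughly like this:

```shell
# LUKS2 with the header kept on a separate (removable) medium;
# /dev/sdX1 and /mnt/usbkey are placeholders
cryptsetup luksFormat --type luks2 --header /mnt/usbkey/data.header /dev/sdX1

# opening later requires that header file to be present
cryptsetup open --header /mnt/usbkey/data.header /dev/sdX1 data_crypt
```

Without the header file, the partition itself is indistinguishable from random data.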

Indeed, that's one of the use cases. Overall I'm most concerned about physical access to my machine, which contains things like Vaultwarden or synced documents with all my personal data.

For something truly private, I would probably rely on VeraCrypt on top. BitWarden (quite important for VW) even uses this architecture by itself.

I need to be sure that if someone steals my stuff, or I need to leave it behind in an emergency (I live 100 km from the Ukrainian border), nobody will be able to access them. I can restore them later from backups kept on a cloud server.

The detached header comes to mind here again.

Oh, this will be hard, because those were some single posts on Reddit or Stack Overflow regarding various filesystems. I'll paste some here if I find one. But from what I remember there were some complaints about the stability of this solution during updates? Like, people had to re-create their pools from time to time?

OK, I can't comment on this from a practical perspective; I do not have any PVE install with mdadm in the host OS that I have taken through an update. From the number of posts here about things going wrong during updates, I would say mdadm or not, you will have fun anyhow. I would not worry too much about this; it is also a reason I do not care for the OS to be mirrored on a cluster node. If it fails, it fails; just reinstall. If you have lots of custom setup, use Ansible, then have it join the cluster (or recover /var/lib/pve-cluster/config.db).

All I want is the ability to replace a failing drive without having to reinstall or copy-and-adjust the system (which unfortunately I have to do now). And I want the solution to be fairly well supported by Proxmox or Linux overall (no hacking around with exotic setups). That's all I need.

I think you just need a backup.

I don't leave my YubiKey with the server. I just stick it in at reboot time and pull it back out. Otherwise this whole encryption protection wouldn't make sense at all.

Yes, this would be hard in e.g. a datacentre. Encryption at rest still makes sense there.

To clarify, so nobody thinks I'm from the "wrong side" of the conflict: I'm from Poland.

People these days worry too much about saying things. The forum, last time I checked, was not on any "side" either.
 
People these days worry too much about saying things. The forum, last time I checked, was not on any "side" either.
Glad to hear that. Just wanted to be sure, in case anyone who reads this topic would care.

Anyway, after reading your links I've just tested this idea in VirtualBox. I will share a full description somewhere, but overall it looked like this:

VM: 4 cores, GB RAM, 4 SATA disks attached, EFI, VT-x/AMD-V enabled
  1. Installed Proxmox with the GUI installer with mirrored ZFS on /dev/sda and /dev/sdb
  2. After reboot I logged in. Created boot and EFI partitions on /dev/sdc manually (with gdisk)
  3. Formatted and synced the EFI partition (proxmox-boot-tool format /dev/sdc2, proxmox-boot-tool init /dev/sdc2)
  4. Both ZFS drives have their own boot and EFI partitions; I deleted them.
  5. Restarted to be sure. It failed to boot, as expected, so I moved my /dev/sdc drive to the top of the boot queue (it now becomes /dev/sda). This time it booted correctly
  6. Created a LUKS partition on /dev/sdd with the key in /boot/my.key. Created the crypttab entry and an additional fstab entry.
  7. Went to single-user mode (init 1), rsynced /var to the LUKS partition, removed the /var/* contents. Rebooted
Done. /boot is on a separate (removable) partition and /var is encrypted with a key on that partition. Yes, I didn't use a real pendrive, but I'm guessing that this is not a problem. Also, one can clear the two old entries (the boot partitions deleted in step 4) out of /etc/kernel/proxmox-boot-uuids.
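Condensed into commands, the steps above look roughly like this; this is an untested sketch, and the device names match my VM, not real hardware:

```shell
# Steps 2-3: boot/EFI partitions on the third disk, synced in
proxmox-boot-tool format /dev/sdc2
proxmox-boot-tool init /dev/sdc2

# Step 6: LUKS volume for /var, unlocked by a keyfile kept in /boot
cryptsetup luksFormat /dev/sdd1 /boot/my.key
cryptsetup open /dev/sdd1 var_crypt --key-file /boot/my.key
mkfs.ext4 /dev/mapper/var_crypt

# /etc/crypttab:  var_crypt  /dev/sdd1  /boot/my.key  luks
# /etc/fstab:     /dev/mapper/var_crypt  /var  ext4  defaults  0  2

# Step 7, from single-user mode: migrate the data
mount /dev/mapper/var_crypt /mnt
rsync -aHAX /var/ /mnt/
```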

Problems? Well, one very annoying one: the VirtualBox VM keeps restarting every few (or more) minutes. A completely random thing. I don't know why, but I'm guessing it's something related to Proxmox running in VirtualBox, not my configuration. I haven't had such an issue with any VM before.

I'm just starting to mount my drives to repeat this on real hardware. Just have to make sure I have backups of all my containers first :D

Thanks @esi_y for all very valuable inputs!
 
Glad to hear that. Just wanted to be sure, in case anyone who reads this topic would care.

I mean, at the end of the day, the threat model is the same (for an individual) whichever "side" they are on. I just found it funny you added that a day later, but it certainly bumped up the post. ;) Personally, I sometimes follow multiple threads and do not necessarily reply in order, depending on e.g. whether I have time for a good reply (or to test what I am advising before I post it).

Anyway, after reading your links I've just tested this idea in VirtualBox.

You certainly bring up ancient memories now. :)

I will share a full description somewhere, but overall it looked like this:

It would be really nice of you to do this, because it is then easier to point people to a tutorial-tagged answer when they are inquiring about mostly the same thing. What you did below is a perfect example of how, when different people go about doing the same thing, they end up doing it in different ways, and that's fine; it's more interesting than everyone following one and the same line of thought.

VM: 4 cores, GB RAM, 4 SATA disks attached, EFI, VT-x/AMD-V enabled
  1. Installed Proxmox with the GUI installer with mirrored ZFS on /dev/sda and /dev/sdb
  2. After reboot I logged in. Created boot and EFI partitions on /dev/sdc manually (with gdisk)
  3. Formatted and synced the EFI partition (proxmox-boot-tool format /dev/sdc2, proxmox-boot-tool init /dev/sdc2)

Oh yes, this tool, I forgot it existed. Somehow I cannot get over the fact that PVE shoves /boot into an EFI partition (it's not the same as shoving keys into /boot, in my book), but that's my issue.

  4. Both ZFS drives have their own boot and EFI partitions; I deleted them.
  5. Restarted to be sure. It failed to boot, as expected, so I moved my /dev/sdc drive to the top of the boot queue (it now becomes /dev/sda). This time it booted correctly

I would need to check everything the tool actually does after each command (for systemd-boot), but it should have efibootmgr'd it, or so I would have thought.

  6. Created a LUKS partition on /dev/sdd with the key in /boot/my.key. Created the crypttab entry and an additional fstab entry.
  7. Went to single-user mode (init 1), rsynced /var to the LUKS partition, removed the /var/* contents. Rebooted

Hm, I thought you would have gone all the way. At the end of the day, you have no control over what gets shoved around into /tmp or /opt later on, but of course concept-wise it's all the same.

I just want to point out (if anyone follows this on an existing install), that moving something away from a plaintext partition does not remove the data there. :)

Done. /boot is on a separate (removable) partition and /var is encrypted with a key on that partition.

Right, but then /var is not mirrored now, yeah? :D

Problems? Well, one very annoying one: the VirtualBox VM keeps restarting every few (or more) minutes. A completely random thing. I don't know why, but I'm guessing it's something related to Proxmox running in VirtualBox, not my configuration. I haven't had such an issue with any VM before.

I really have no idea, but if you are running this on a Linux machine, maybe give virt-manager/libvirt a try instead in 2024.

I'm just starting to mount my drives to repeat this on real hardware. Just have to make sure I have backups of all my containers first :D

I just think the non-mirrored /var (if it was not just a PoC) does not make sense alongside the mirrored /.

Thanks @esi_y for all very valuable inputs!

You're welcome. Btw just yesterday there was another discussion on the forum and I found:
https://www.dwarmstrong.org/debian-install-zfs/

Which, as much as you know I am not a fan of ZFS native encryption, is a great exercise for a truly ZFS-centric setup. ;)
 
Hm, I thought you would have gone all the way. At the end of the day, you have no control over what gets shoved around into /tmp or /opt later on, but of course concept-wise it's all the same.

I just want to point out (if anyone follows this on an existing install), that moving something away from a plaintext partition does not remove the data there. :)

Right, but then /var is not mirrored now, yeah? :D

I just think the non-mirrored /var (if it was not just a PoC) does not make sense alongside the mirrored /.
OK, because you've triggered my anxiety :D I wanted to try FDE. Of course I had to switch to Debian to do so. So I tried to install it in two ways:

1. LUKS@MDADM
Well, this was really smooth. Everything was done in the Debian installer GUI, in its partition manager. As before, I set up the /boot and EFI partitions on a separate drive (sda). Then I created an mdadm mirror from sdb and sdc, created a LUKS partition on top, created a partition on top of LUKS and mounted it as root. I forgot to add a keyfile, but overall it all works well. I didn't continue with the Proxmox install because I wanted to try the second method.
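For reference, what the installer set up corresponds roughly to these manual commands; this is a sketch on my part, and the partition names are assumptions:

```shell
# mdadm mirror over two partitions, LUKS on top, filesystem inside
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 md0_crypt
mkfs.ext4 /dev/mapper/md0_crypt   # this becomes /
```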

2. BTRFS@LUKS
Now this is a little bit more tricky and triggered an unexpected problem which I'm stuck with. At first it all went smoothly: again I created boot and EFI partitions on sda, LUKS partitions on sdb and sdc, and the root filesystem with btrfs, but only on sdb1. Up to this point all went smoothly; the system boots with a password prompt, and I made a VM snapshot.

[screenshot]

At first I tried to just add the second drive to the btrfs pool and make it RAID1, as described here:
https://archive.kernel.org/oldwiki/...g_Btrfs_with_Multiple_Devices.html#Conversion

Code:
btrfs device add /dev/sdc1 /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

But after a reboot I got ...
[screenshot]

I guessed that this is because of the missing second drive (but I'm not sure, since RAID1 should work in such cases; that's the point of it, right?). So I restored the snapshot. And this time I created /boot/my.key again, added it with cryptsetup luksAddKey for both partitions, and added (uncommented) it in crypttab. Surprisingly it did not work like in my first solution: a reboot always gave me the password prompt. I had to do update-initramfs -u to make it work. Why?

Anyway, after reboot this time I got this:

[screenshot]

The UUID is the ID of the btrfs partition/pool. It could not decrypt it. But if I issue cryptsetup luksOpen /dev/sdb1 sdb1_crypt there and then the 'exit' command, it continues to boot. Also, switching back to a password fixes this problem.

So I'm totally confused now:
1. Why do I need an initramfs update to switch from password to key decryption?
2. Why can't it decrypt root with the key file, but can with the password prompt? Is the /boot partition not mounted yet, or what? It seems like Proxmox uses a different mounting method (there are almost no fstab entries there) than pure Debian.
3. Why does btrfs RAID1 fail?

EDIT
As for your last link: well, a debootstrap install always seemed to me the most complicated method. The days when I studied every line of the Arch install instructions are over, and nowadays I would prefer to avoid setting up every little piece of the system manually :D
 
Just so I do not take a day to reply, at first glance ...

2. BTRFS@LUKS
Now this is a little bit more tricky and triggered an unexpected problem which I'm stuck with. At first it all went smoothly: again I created boot and EFI partitions on sda, LUKS partitions on sdb and sdc, and the root filesystem with btrfs, but only on sdb1.

I am getting lost in the terminology, if you create BTRFS over LUKS, I hope it's over e.g. sdb1_crypt.

At first I tried to just add the second drive to the btrfs pool and make it RAID1, as described here:
https://archive.kernel.org/oldwiki/...g_Btrfs_with_Multiple_Devices.html#Conversion

I have never done this (note I do use and like BTRFS, just saying I can't comment on this until I've tested it myself), but I would typically pre-partition everything in a Debian LIVE session and then (if you insist on the installer) just set the mount points; especially with crypts, the regular install always worked weirdly, and even just adding LVM over a crypt was madness.

Code:
btrfs device add /dev/sdc1 /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

But again I wonder where's the crypt ...

I had to do update-initramfs -u to make it work. Why?

Generally, do you have an overview of how the initramfs works when it is handed over from the bootloader, and that when you make changes that should apply there, they need to be rebuilt into the initramfs to be effective?

I will react to the rest later (maybe you'll add something more in the meantime ;)).

PS: debootstrap is not the "most complicated"; it is where you can actually see full well what is doing what :)
 
I am getting lost in the terminology, if you create BTRFS over LUKS, I hope it's over e.g. sdb1_crypt.
...
But again I wonder where's the crypt ...
...
Yes, I'm sorry, indeed I meant /dev/mapper/sdb1_crypt and /dev/mapper/sdc1_crypt.

OK, I got the LUKS keys working with the help of this:
https://unix.stackexchange.com/ques...d-debian-root-with-key-file-on-boot-partition

I moved my key from /boot to /etc/luks-key/my.key, uncommented KEYFILE_PATTERN in /etc/cryptsetup-initramfs/conf-hook and set it to /etc/luks-key/my.key, fixed the keyfile paths in crypttab, and ran update-initramfs again.
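In command form it was something like this; a sketch, where the UMASK line is an extra precaution mentioned in the cryptsetup-initramfs documentation so the key is not world-readable inside the generated image:

```shell
mkdir -p /etc/luks-key
mv /boot/my.key /etc/luks-key/my.key
chmod 0600 /etc/luks-key/my.key

# /etc/cryptsetup-initramfs/conf-hook:
#   KEYFILE_PATTERN=/etc/luks-key/*.key
#
# /etc/initramfs-tools/initramfs.conf:
#   UMASK=0077

update-initramfs -u -k all
```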

And it works ... almost.

I see that only the first disk (mentioned in fstab) is decrypted in the initramfs. The second one is not, which causes the same btrfs error I showed before.

[screenshot]

EDIT: OK, I've added the initramfs option to both crypttab entries, and this time both drives are decrypted at the initramfs stage. I didn't know about this option; I must read more about it.
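So the working crypttab ends up looking something like this; the UUIDs are placeholders:

```text
# /etc/crypttab: the 'initramfs' option forces cryptsetup-initramfs to
# unlock both volumes before the root filesystem is mounted
sdb1_crypt  UUID=<uuid-of-sdb1>  /etc/luks-key/my.key  luks,initramfs
sdc1_crypt  UUID=<uuid-of-sdc1>  /etc/luks-key/my.key  luks,initramfs
```

(Remember to run update-initramfs -u after editing it.)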

So yeah, it works. Yet it still bugs me why it does not boot when the btrfs RAID1 is degraded. I even simulated this again by disabling one of the drives. Shouldn't it just boot, but with some warnings?

EDIT2: OK, it seems BTRFS needs a special kernel parameter to behave that way. Which is somewhat reasonable ...
https://forum.proxmox.com/threads/btrfs-raid1-totally-useless.124075/
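For anyone landing here later, the usual knob (assuming it matches what the linked thread concludes) is btrfs's degraded mount option passed via the kernel command line; it is meant as a temporary rescue measure, not a permanent setting:

```shell
# /etc/default/grub (sketch): allow mounting the degraded btrfs root
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootflags=degraded"

# regenerate the bootloader config afterwards
update-grub
```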
 