Full Disk Encryption with ZFS using Proxmox installer

LunaXQ

Hello

I've just been checking out the latest release of Proxmox and seen the options for installing with ZFS, which is brilliant to see. I'm wondering if there's a way to adjust the parameters the Proxmox installer uses when installing with ZFS, so I can pass in the options for ZFS's disk encryption?

I know I can set up Debian using the guides on OpenZFS and then install Proxmox manually on top of that, but I'm wondering if there's a way to do it with the Proxmox installer?
 
Thanks for the heads-up. In my particular use case clustering and migrations won't be too much of an issue. I did find this guide and followed the instructions after doing a fresh install of Proxmox v7.4 and selecting RAID-Z3 during the install process.

https://gist.github.com/yvesh/ae77a68414484c8c79da03c4a4f6fd55

The only thing I did differently was boot into an Ubuntu live session and install the zfsutils-linux package to make the changes to the filesystem, but other than that the instructions worked perfectly. Now Proxmox prompts me for a password on boot to decrypt rpool, which I can enter over IPMI via a separate administration-only LAN.
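
For anyone repeating this, the live-session part boils down to something like the following (a rough sketch; it assumes a recent Ubuntu live image and the default pool name rpool):

Code:
# inside the Ubuntu live session
sudo apt update
sudo apt install zfsutils-linux   # userspace ZFS tools
sudo zpool import -f rpool        # import the pool created by the Proxmox installer
# ...apply the filesystem changes from the linked guide here...
sudo zpool export rpool           # export cleanly before rebooting into Proxmox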

This is exactly what I was looking for, thanks so much for your help on this.
 
You can also set up dropbear-initramfs. With that you can unlock the rpool using SSH. I prefer that because the webKVM of my BMC won't allow me to paste the password from my password manager.
 
I have heard of dropbear; it will be good to try it out, as typing in the passwords is a pain and my BMC's IPMI won't let me paste the password either. This server is just a test at the moment, so it's not using a strong password and will be wiped clean before being deployed.
 
So after having some time to look into things some more, the flaw with the method I used above is that only the root partition of the Proxmox install is encrypted. The VM data is still unencrypted and thus isn't protected at rest; when running the following command I get this output:
Code:
root@proxmox:~# zfs get encryption
NAME                      PROPERTY    VALUE        SOURCE
rpool                     encryption  off          default
rpool/ROOT                encryption  aes-256-gcm  -
rpool/ROOT/pve-1          encryption  aes-256-gcm  -
rpool/ROOT/pve-1@copy     encryption  aes-256-gcm  -
rpool/data                encryption  off          default
rpool/data/vm-100-disk-0  encryption  off          default
rpool/data/vm-101-disk-0  encryption  off          default
rpool/data/vm-102-disk-0  encryption  off          default

I've tried adjusting the original commands mentioned in the GitHub link and have run into problems getting that to work.

I've installed Ubuntu and Debian with encrypted ZFS root partitions previously and also noticed there's no bpool for the boot partition, so presumably Proxmox maintains the boot partition across all the drives manually when updating the kernel, instead of relying on ZFS to do this?

What are the flaws of using the instructions provided here ( https://openzfs.github.io/openzfs-docs/Getting Started/Debian/Debian Bullseye Root on ZFS.html ) and then installing Proxmox manually on top of Debian?

I'm currently reinstalling the server to try this approach and see what results I get, and I will update this post.
 
I've tried adjusting the original commands mentioned in the GitHub link and have run into problems getting that to work.
It isn't that hard. I usually do it like this:
1.) install PVE using ZFS
2.) boot a ZFS-capable live Linux (I use the PVE ISO for that: start the installation in debug mode and run "exit" to get to the shell)
3.) then I encrypt my "pve-1" dataset (that's the root filesystem that also contains the "local" storage) and the "data" dataset (used by the "local-zfs" storage) using the steps below, collected into a single sketch after the list:
  • Import ZFS pool: zpool import -f rpool
  • snapshot rpool/ROOT: zfs snapshot -r rpool/ROOT@copy
  • create a copy of the unencrypted rpool/ROOT and all children: zfs send -R rpool/ROOT@copy | zfs recv rpool/copyroot
  • destroy unencrypted rpool/ROOT: zfs destroy -r rpool/ROOT
  • create new encrypted rpool/ROOT: zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/ROOT
  • copy and encrypt unencrypted rpool/copyroot/pve-1: zfs send -R rpool/copyroot/pve-1@copy | zfs recv -o encryption=on rpool/ROOT/pve-1
  • destroy copy: zfs destroy -r rpool/copyroot
  • destroy snapshots: zfs destroy rpool/ROOT/pve-1@copy
  • rpool/data should be empty, so you can destroy it: zfs destroy rpool/data. If it is not a new PVE installation you might want to snapshot and copy it first, as done above with the "ROOT" dataset, so you don't lose the data on it
  • create new encrypted rpool/data: zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/data
  • export pool: zpool export rpool
  • reboot
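
Collected into one run from the live environment, the steps above look roughly like this (just a sketch; adjust dataset names if your layout differs, and only destroy rpool/data if it really is empty):

Code:
zpool import -f rpool
zfs snapshot -r rpool/ROOT@copy
zfs send -R rpool/ROOT@copy | zfs recv rpool/copyroot
zfs destroy -r rpool/ROOT
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/ROOT
zfs send -R rpool/copyroot/pve-1@copy | zfs recv -o encryption=on rpool/ROOT/pve-1
zfs destroy -r rpool/copyroot
zfs destroy rpool/ROOT/pve-1@copy
zfs destroy rpool/data   # only on a fresh install with nothing stored on it
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/data
zpool export rpool
reboot
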
I've installed Ubuntu and Debian with encrypted ZFS root partitions previously and also noticed there's no bpool for the boot partition, so presumably Proxmox maintains the boot partition across all the drives manually when updating the kernel, instead of relying on ZFS to do this?
Yes, PVE uses the proxmox-boot-tool to sync the bootloader. It's on partition 2 and ZFS is on partition 3.
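
You can see this layout on a standard PVE install with something like the following (output will differ per system; /dev/sda stands for one of the pool member disks):

Code:
proxmox-boot-tool status                          # lists the ESPs kept in sync
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME /dev/sda   # partition 2 = ESP, partition 3 = ZFS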
 
So I think the step I was previously struggling with was rpool/data; it was much easier to destroy it and re-create it. However, I'm not sure if I've made another mistake, because it seems like Proxmox doesn't unlock rpool/data when I type my password in at boot, only rpool/ROOT.

When creating a VM in the Proxmox GUI I get the following error:

Code:
TASK ERROR: unable to create VM 100 - zfs error: cannot create 'rpool/data/vm-100-disk-0': encryption root's key is not loaded or provided
 
You need to unlock both of them by typing in the password twice, like when running a zfs load-key -a. Or you encrypt your rpool/data with a keyfile instead of a passphrase, store that keyfile somewhere on the encrypted rpool/ROOT/pve-1, and then create a systemd service to unlock all keyfile-encrypted datasets.

Here is how I do it:

Some useful options I usually set (a quick read-back snippet follows the list):
  • enable relatime: zfs set relatime=on rpool
  • enable autotrim: zpool set autotrim=on rpool
  • set a quota so the pool can never be completely filled up by accident (here 100GB, but choose something like 90% of your usable capacity): zfs set quota=100G rpool
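
A quick read-back of those properties afterwards, just to confirm they took effect:

Code:
zfs get relatime,quota rpool
zpool get autotrim rpool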

create new encrypted dataset on rpool:
  • create hidden keys folder:
    Code:
    mkdir /root/.keys
    chown root:root /root/.keys
    chmod 740 /root/.keys
  • create keyfile:
    Code:
    openssl rand -hex -out /root/.keys/rpool_vault.key 32
    chown -R root:root /root/.keys/rpool_vault.key
    chmod -R 740 /root/.keys/rpool_vault.key
    BACKUP KEY FILE!!!
  • create encrypted dataset: zfs create -o encryption=aes-256-gcm -o keyformat=hex -o keylocation=file:///root/.keys/rpool_vault.key rpool/vault

  • create new dataset for ISOs/snippets/templates: zfs create -o compression=zstd rpool/vault/data
  • add new Directory storage for ISOs/snippets/templates: pvesm add dir data --is_mountpoint 1 --path /rpool/vault/data --content vztmpl,snippets,iso --shared 0
  • create a new dataset for VMs: zfs create rpool/vault/VM8K
  • add new ZFSpool storage for VMs: pvesm add zfspool VM8K --blocksize 8K --content images --pool rpool/vault/VM8K --sparse 1 --mountpoint /rpool/vault/VM8K
  • create a new dataset for LXCs: zfs create -o recordsize=128K rpool/vault/LXC128K
  • add new ZFSpool storage for LXCs: pvesm add zfspool LXC128K --blocksize 8K --content rootdir --pool rpool/vault/LXC128K --sparse 1 --mountpoint /rpool/vault/LXC128K
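
Afterwards the new datasets and storages can be checked like this (dataset and storage names match the examples above):

Code:
zfs get encryption,keyformat,keylocation rpool/vault   # should show aes-256-gcm / hex / the keyfile path
zfs list -r rpool/vault
pvesm status                                           # the new data, VM8K and LXC128K storages should show up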

Create a service to auto-unlock keyfile-encrypted ZFS pools after boot

  • create service: nano /etc/systemd/system/zfs-load-key.service
    Add there:
    Code:
    [Unit]
    Description=Load encryption keys
    DefaultDependencies=no
    After=zfs-import.target
    Before=zfs-mount.service
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/zfs load-key -a
    StandardInput=tty-force
    
    [Install]
    WantedBy=zfs-mount.service
  • enable service: systemctl enable zfs-load-key.service
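
After the next boot you can check that the service ran and that all keys got loaded (keystatus should read "available" for every encrypted dataset):

Code:
systemctl status zfs-load-key.service
zfs get -r keystatus rpool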

Configure ZFS root unlocking through SSH

  • install packages: apt update && apt install dropbear-initramfs busybox
  • add pub key to dropbear: nano /etc/dropbear-initramfs/authorized_keys
    Paste your pub key there in a single line.
  • edit initramfs-dropbear config: nano /etc/dropbear-initramfs/config
    Change
    #DROPBEAR_OPTIONS=
    to
    DROPBEAR_OPTIONS="-p 10022 -j -k -c zfsunlock"
  • run: nano /etc/initramfs-tools/initramfs.conf
    Add at the bottom:
    IP=192.168.43.22::192.168.43.1:255.255.255.0:PVEUnlock:eno1:off:192.168.43.1
    In this case my NIC is eno1, the IP for unlocking is 192.168.43.22, the gateway and DNS are 192.168.43.1, the subnet mask is 255.255.255.0 and the host is called PVEUnlock.
  • rebuild initramfs: update-initramfs -u
  • The root filesystem can then be unlocked over SSH at 192.168.43.22, port 10022, by logging in as root with the private keyfile.
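
From another machine the unlock then looks roughly like this (a sketch; the key path is just an example):

Code:
ssh -p 10022 -i ~/.ssh/id_ed25519 root@192.168.43.22
# with "-c zfsunlock" set above, dropbear runs the zfsunlock helper automatically,
# prompts for the rpool passphrase and lets the boot continue once the key is loaded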
 
Your solution worked perfectly. I'm prompted for the password twice at boot, but honestly I'm fine with that. I created the .service file and enabled it, which worked a treat.
The quota command is also really helpful as I'm running on SSDs and would like to leave a certain % of space free on the disk.

For me running zfs get relatime showed relatime was already set to on.

I haven't tried it with dropbear yet. For ISOs I'm just going to use an additional SSD encrypted with LUKS and unlocked during boot using a keyfile stored on the root filesystem. I'm not too concerned about losing all the data on that disk, as it's easily replaceable, but I want the disk encrypted so the ISO images can't be modified if someone plugs the drive into another machine.
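
That kind of setup usually boils down to a keyfile plus a crypttab entry along these lines (only a sketch, not something tested in this thread; /dev/sdb and the keyfile path are just examples):

Code:
cryptsetup luksFormat /dev/sdb                                # encrypt the extra ISO disk
dd if=/dev/urandom of=/root/.keys/isodisk.key bs=64 count=1   # keyfile lives on the encrypted root fs
chmod 600 /root/.keys/isodisk.key
cryptsetup luksAddKey /dev/sdb /root/.keys/isodisk.key
# /etc/crypttab entry so the disk is unlocked at boot via the keyfile:
#   isodisk  /dev/disk/by-uuid/<uuid-of-sdb>  /root/.keys/isodisk.key  luks
mkfs.ext4 /dev/mapper/isodisk
# plus a matching /etc/fstab entry for wherever the ISO storage should be mounted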

Thank you so much for all of your help, it's been a really easy process to get things running and much simpler than installing encrypted Debian with Proxmox installed manually on top.
 
Thanks for the instructions!
I followed the pve-1 dataset encryption (root filesystem) and did the same for rpool/data. Then I set up ZFS root unlocking through SSH/dropbear, which also works.

1.) I don't understand why, after doing "zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/data", I don't get asked for another key entry when PVE starts up, in addition to the root unlock key entry at boot. Is it because I gave the same key value for testing reasons?

2.) Why am I getting a warning and an error when running update-initramfs -u?

Code:
root@pve:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-6.2.16-3-pve
cryptsetup: ERROR: Couldn't resolve device rpool/ROOT/pve-1
cryptsetup: WARNING: Couldn't determine root device
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/XXXX-XXXX
        Copying kernel and creating boot-entry for 6.2.16-3-pve
Copying and configuring kernels on /dev/disk/by-uuid/XXXX-XXXX
        Copying kernel and creating boot-entry for 6.2.16-3-pve
 
I haven't tried this yet with dropbear, so I'm not sure if this will help you with the first problem you're having. When I set up the encrypted rpool/data I had to create the following systemd service before Proxmox would prompt me to unlock rpool/data.

Create the file /etc/systemd/system/zfs-load-key.service and add the following:

Code:
[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

Then run the following command and reboot.

Code:
systemctl enable zfs-load-key

You should now be prompted to unlock rpool/data during the boot process. For me this works fine at the moment for testing purposes but let me know if this works with dropbear on your setup.
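
If the prompt still doesn't show up, it can help to check whether rpool/data really is its own encryption root and whether its key is loaded (just a read of the ZFS key properties):

Code:
zfs get encryptionroot,keyformat,keystatus rpool/data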

I hope this helps :)
 
I decided to go with new encrypted datasets, the hidden key folder and the zfs-load-key.service.
That results in a single passphrase prompt on PVE boot (root fs), after which the new datasets get mounted.
As a last step, following "Configure ZFS root unlocking through SSH" makes all of this work flawlessly with key entry via SSH.
Some paths changed in recent dropbear versions though, so the tutorial above needs to be adapted accordingly.
I'll try to find time in the next few days to document all steps from PVE installation all the way to the Dropbear setup, so there is one coherent guide for this whole approach.
 
Glad you got it all working :) Documentation would be great, as I'd like to learn more about dropbear and how you set everything up.

I also thought about putting the key for the rpool/data dataset on the root pool so I didn't need to enter two passwords, but in my case I didn't want any keys for decrypting drives stored anywhere on the filesystem; otherwise that's a great solution.
 
1.) I don't understand why, after doing "zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/data", I don't get asked for another key entry when PVE starts up, in addition to the root unlock key entry at boot. Is it because I gave the same key value for testing reasons?
Because the root filesystem (the "pve-1" dataset) is encrypted with a passphrase you type in over SSH/console when booting the node. This root filesystem will then be decrypted. The keyfile needed to unlock the "data" dataset is stored on this now-decrypted root filesystem. The systemd service will try to unlock all encrypted datasets while booting up. If a passphrase is needed it will ask you to type it in; if a keyfile is given it won't ask and will just load the key from that keyfile. Without typing in the root filesystem's passphrase the keyfile is inaccessible too, so both "data" and "pve-1" remain encrypted.
I've chosen this so you don't have to type in two passphrases when booting up the server.
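
You can see which mechanism applies to which dataset by reading the key properties; keyformat "passphrase" means a prompt, while a file:// keylocation gets loaded silently:

Code:
zfs get -r keyformat,keylocation,keystatus rpool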

2.) Why am I getting a warning and an error when running update-initramfs -u?
That is normal and can be ignored. The OpenZFS documentation tells you to do the following if you don't want to see these warnings, but then unlocking LUKS through dropbear won't work:
From https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#step-4-system-configuration :
"You may wish to uninstall the cryptsetup-initramfs package to avoid warnings."

You should now be prompted to unlock rpool/data during the boot process. For me this works fine at the moment for testing purposes but let me know if this works with dropbear on your setup.
Yes, but only when using a passphrase for "data". When using a keyfile it will just load the key from that file without needing to ask you to type it in.
As long as you store the keyfile in an encrypted location (like the encrypted root filesystem) it's fine to use a keyfile for "data".
 
Shout out to @Dunuin, I am a newbie trying to follow in your footsteps in building a PVE with ZFS FDE, but I find it very hard to piece together the information from multiple users, across multiple threads, on multiple sites... You mentioned in another post that you were writing a detailed tutorial; I am wondering if you have posted it somewhere or is it still in development?
 
The tutorial got very long (40 pages or so o_O) because I explained a lot of ZFS basics. Then I stopped writing because I had other things to do, and meanwhile I reinstalled my PVE servers and did the encryption a bit differently, so some things would have to be updated. I still need to find some time to edit the existing chapters of the tutorial and write the last few missing chapters. There were also some problems no one found a good solution for, but that I still think are important, such as that there seems to be no option to have a mirrored swap without hardware RAID, and I would really like to cover an encrypted mirrored swap partition.
 
I followed your posts and got my encrypted zpool and dropbear configured. Looking forward to your comprehensive guide!

I ran into a problem when adding a 2nd zpool with an encrypted dataset: the GUI recognises the 2nd dataset, but cannot create VM disks on it.
TASK ERROR: unable to create VM 103 - zfs error: cannot create 'rpool_sn640/data/vm-103-disk-0': encryption root's key is not loaded or provided

The encrypted dataset on the 2nd pool uses the same keyfile as the 1st pool. The history for the second pool:
Code:
History for 'rpool_sn640':
2024-01-27.13:26:01 zpool create rpool_sn640 nvme0n1
2024-01-27.13:32:50 zfs set compression=lz4 rpool_sn640
2024-01-27.13:34:08 zfs create -o encryption=aes-256-gcm -o keyformat=hex -o keylocation=file:///root/.keys/rpool_data.key rpool_sn640/data
2024-01-27.13:34:50 zfs set relatime=on rpool_sn640
2024-01-27.13:35:06 zpool set autotrim=on rpool_sn640
2024-01-27.13:39:38 zfs set quota=6.2T rpool_sn640
2024-01-27.13:40:48 zfs set quota=6.3T rpool_sn640
2024-01-27.13:55:48 zpool import -c /etc/zfs/zpool.cache -aN
2024-01-27.13:55:48 zfs load-key -a
2024-01-28.13:28:16 zpool import -c /etc/zfs/zpool.cache -aN
2024-01-28.13:28:16 zfs load-key -a
2024-02-15.03:25:44 zpool import -d /dev/disk/by-id/ -o cachefile=none rpool_sn640
2024-02-15.09:41:23 zpool import -d /dev/disk/by-id/ -o cachefile=none rpool_sn640
2024-02-15.12:51:59 zpool import -d /dev/disk/by-id/ -o cachefile=none rpool_sn640
2024-02-15.13:00:21 zpool import -d /dev/disk/by-id/ -o cachefile=none rpool_sn640
Any idea how to fix this?
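
Not a confirmed fix, but a first diagnostic step would be to check after a reboot whether the second pool is imported at all and whether its key ever gets loaded, since the history above shows it was last imported with cachefile=none:

Code:
zpool status rpool_sn640                                        # is the pool imported after boot?
zfs get keystatus,encryptionroot,keylocation rpool_sn640/data
zfs load-key -r rpool_sn640/data                                # does loading the key manually work?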
 
