Encrypt ZFS pool

Thanks! What about migration? That should also not be possible, right, when moving a VM from an encrypted pool to a different node with another encrypted pool?
If you are trying to move from one ZFS encrypted pool to another, be it the boot pool or otherwise, I believe it is not possible. However, I have never used ZFS migration or replication in any of my setups: I only use ZFS for the boot pool "rpool" and store nothing on it other than the Proxmox VE installation itself. You might have to build some trial systems and see what works and what does not for replication.
 

Thanks! Will the pool be auto-encrypted on boot or is it possible to encrypt it manually?
 
Thanks! What about migration? That should also not be possible, right, when moving a VM from an encrypted pool to a different node with another encrypted pool?
I did that for a while. To make it work, I changed the following file:

Code:
nano /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm
 
- my $cmd = ['zfs', 'send', '-Rpv'];
+ my $cmd = ['zfs', 'send', '-Rpvw'];
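For context, the added `-w` flag is `zfs send --raw`: the dataset is sent as ciphertext, without being decrypted on the source, so the receiving side stores the same encrypted blocks and unlocks them with the same key. A sketch of what such a raw send looks like outside of the patched plugin (pool, dataset, and node names here are placeholders, not from the thread):

```shell
# Snapshot the encrypted VM dataset (hypothetical names)
zfs snapshot rpool/data/vm-100-disk-0@migrate

# -R: replicate with children/snapshots, -p: include properties,
# -v: verbose, -w: raw send, i.e. ship the ciphertext as-is so the
# target dataset keeps the source's encryption settings and key
zfs send -Rpvw rpool/data/vm-100-disk-0@migrate | \
    ssh other-node zfs receive otherpool/data/vm-100-disk-0
```

Note that this is exactly the code path the poster reports as eventually producing damaged volumes when combined with replication, so treat it as an experiment, not a recommendation.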

But after a while, there were damaged volumes. So I disabled it again. An alternative would be to use LUKS under ZFS. Many people do this, and it is technically legitimate. However, I have not been able to get used to it yet, so I have never looked into it in detail.
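For readers unfamiliar with the LUKS-under-ZFS approach mentioned above: the encryption happens below ZFS, so the pool itself is a normal, unencrypted pool from ZFS's point of view, and `zfs send`/replication behave as usual. A minimal sketch, assuming a hypothetical spare disk `/dev/sdb` (destructive, wipes the disk):

```shell
# Create a LUKS container on the raw disk and open it
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sdb crypt_sdb      # exposes /dev/mapper/crypt_sdb

# Build the pool on the decrypted mapper device; ZFS sees an
# ordinary block device and needs no native encryption at all
zpool create tank /dev/mapper/crypt_sdb
```

The trade-off is that unlocking at boot must be handled outside ZFS (e.g. via /etc/crypttab or a manual `cryptsetup open` before `zpool import`).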
 
I have adapted the instructions a little to fit my needs, but the previous instructions should work, as my adaptations were based on them and a few other resources I found online. As for replication, it should only affect encrypted pools, but I have not tested adding another non-encrypted pool and using replication. I use Ceph, which works with the built-in Ceph encryption in Proxmox, so my boot drives are solely responsible for booting and running Proxmox; all other storage is either from another device or on Ceph.

Here are the instructions I have created and use: (just built a new system a few days ago)
  1. Once the installation summary screen is shown, remove the check-mark from the box at the bottom labelled "Automatically reboot after successful installation" and then click the button in the bottom right corner labelled "Install".
  2. After the installation has completed, press the following key combination "Ctrl + Alt + F3" to be taken to a command prompt to complete the next steps.
  3. Using the following commands, you will encrypt the node's root dataset:
    1. zpool import -f rpool;
    2. zfs snapshot -r rpool/ROOT@copy;
    3. zfs send -R rpool/ROOT@copy | zfs receive rpool/copyroot;
    4. zfs destroy -r rpool/ROOT;
    5. zpool set autoexpand=on rpool;
    6. zpool set autotrim=on rpool;
    7. zfs create -o compression=lz4 -o checksum=on -o encryption=on -o keyformat=passphrase rpool/ROOT;
    8. zfs send -R rpool/copyroot/pve-1@copy | zfs receive -o encryption=on rpool/ROOT/pve-1;
    9. zfs destroy -r rpool/copyroot;
    10. zfs destroy rpool/ROOT/pve-1@copy;
    11. zfs set mountpoint=/ rpool/ROOT/pve-1;
    12. zpool export rpool;
  4. Once the commands have been completed, press the following key combination "Ctrl + Alt + F4" to be taken back to the GUI installer.
  5. You can now reboot the node by clicking the button labelled "Reboot" in the bottom right corner.
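After the reboot (and entering the passphrase from step 3.7), a couple of read-only checks can confirm the steps above took effect; these are my additions, not part of the original instructions, and use the same dataset names as the steps:

```shell
# Should report encryption=on (e.g. aes-256-gcm), keyformat=passphrase,
# and keystatus=available once the pool is unlocked
zfs get encryption,keyformat,keystatus rpool/ROOT

# Child datasets such as rpool/ROOT/pve-1 inherit encryption from rpool/ROOT
zfs get -r encryption rpool
```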

Thank you, I tried this today and it just worked! :cool: Anything I should take care of? Replacing a disk is the same as without encryption, correct? And updates/upgrades do not need any additional configuration or cause issues?
 
But after a while, there were damaged volumes.
I ran into the same problem. It just popped up one day and after disabling replication and repairing the datasets manually, it went away:

Code:
root@proxmox ~ > zpool status -v
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:40:49 with 0 errors on Mon Feb 10 07:52:53 2025
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb3    ONLINE       0     0     0
            sda3    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

[...]
/rpool/encryption/data/subvol-1099-disk-0/bin/bash
[...]

Still nothing you want to use in production. The bug itself is only triggered if you replicate the dataset, but as far as I went down the rabbit hole, it had not been solved when I last checked at the beginning of 2025. Using LUKS works, and NOT using zfs send also works, however.
 
Thank you, I tried this today and it just worked! :cool: Anything I should take care of? Replacing a disk is the same as without encryption, correct? And updates/upgrades do not need any additional configuration or cause issues?
I have not yet had to replace a disk in either of my machines using encrypted boot disks. Updates have also worked without any issues so far.
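That said, since the encryption lives at the dataset layer rather than the vdev layer, replacing a mirrored boot disk should follow the standard Proxmox VE procedure. A hedged sketch, assuming a healthy disk `/dev/sda`, a new disk `/dev/sdb`, and the usual PVE partition layout (partition 2 for boot, partition 3 for ZFS); verify device names on your own system before running anything:

```shell
# Copy the partition table from the healthy disk to the replacement,
# then randomize the new disk's GUIDs so they do not collide
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Replace the failed member of the mirror with the new ZFS partition
# (find the old device's name/GUID in `zpool status rpool`)
zpool replace -f rpool <old-device> /dev/sdb3

# Make the new disk bootable again
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
```

The `<old-device>` placeholder is deliberate; take it from `zpool status` output rather than guessing.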


@LnxBil - I don't use replication on my systems, as I either use a network-based share or Ceph for my installation. The system disk(s) are only for the Proxmox install and occasionally a few small VMs that are backed up to another server running PBS.