ZFS pool and encryption

macamba
Well-Known Member · Mar 8, 2011
A few questions on ZFS pool creation and encryption. I read the instructions on encryption, but I am not sure about the tank/encrypted_data part in the wiki, so I'd better ask.
1) To create the pool I issued 'zpool create -f -o ashift=12 -m /mnt/zroot zroot mirror sda sdb'. I added -m /mnt/zroot since I want the pool mounted under /mnt and not in / (root)

2) For encrypting the pool I plan to issue: 'zfs create -o encryption=on -o keyformat=passphrase zroot/'.
The wiki says 'tank/encrypted_data'. I assume tank is the pool name used in the wiki?
Actually I want to encrypt everything, but it doesn't accept 'zroot/'. Is that possible, or do I have to create something like 'zroot/encrypted_data'?

3) Issue: 'pvesm add zfspool encrypted_zfs -pool zroot/encrypted_data'

4) Load encrypted pool: 'zfs load-key zroot/encrypted_data'

Are these steps correct? The pool is, by the way, used for storing data and possibly VMs and containers. Proxmox runs from an M.2 SSD and I plan to add another SSD for caching and logging later on.

UPDATE:
I tried the above, so with 'zroot/encrypted_data' instead of 'zroot/' for now, and put a test file in the 'encrypted_data' folder. The strange thing is that I can still read the file after reboot without reloading the encrypted pool by entering the password. Is that normal? How can I then test whether the encryption is working without putting the disks in a different computer?
 
Hi,
To create the pool I issued 'zpool create -f -o ashift=12 -m /mnt/zroot zroot mirror sda sdb'. I added -m /mnt/zroot since I want the pool mounted under /mnt and not in / (root)
The default mount point is not '/' (root); it is the name of the pool.
So in your case it is '/zroot'.

The wiki says 'tank/encrypted_data'. I assume tank is the used pool name in the wiki?
Yes, this is correct.

Actually I want to encrypt everthing, but it doesn't accept 'zroot/' . Is that possible or do I have to create a folder like 'zroot/encrypted_data'?
This is not a folder; it is a dataset.
You can only enable encryption on a dataset at creation time; it cannot be changed later.
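As an illustration, a minimal sketch (using the pool name 'zroot' from this thread and the dataset name from the wiki; requires root and an existing pool):

```shell
# Encryption can only be set when the dataset is created:
zfs create -o encryption=on -o keyformat=passphrase zroot/encrypted_data

# Verify afterwards; 'encryption' is read-only on an existing dataset,
# so an unencrypted dataset cannot be encrypted in place
zfs get encryption,keyformat,keystatus zroot/encrypted_data
```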

The strange thing is I can still read the file after reboot without reloading the encrypted pool by entering the password ?
What can you read?
 
Hi,

The default mount point is not '/' (root); it is the name of the pool.
So in your case it is '/zroot'.


Yes, this is correct.


This is not a folder; it is a dataset.
You can only enable encryption on a dataset at creation time; it cannot be changed later.


What can you read?
I can read the text file in the encrypted_data dataset.

But let me ask the question differently.

1) Do I have to create datasets on the pool? Or is the pool 'zroot' in my case directly suitable for storing files?

2) Can I encrypt the pool entirely?
 
Yes, you have to create a dataset on this pool and encrypt this.


No
Thanks, I think I am getting closer to understanding. So also for unencrypted data I need to create datasets. The hierarchy is as follows:
- Disks
- Pool
- Dataset
- Files and folders

Correct?

Additionally, how come I don't have to enter a password when rebooting the system? The files in encrypted_data are straight away readable, unencrypted?
 
- Disks
- Pool
- Dataset
- Files and folders

Correct?
Correct, but in the case of KVM images the datasets are zvols (block devices),
and the files and folders are inside the emulated block device.

Additionally, how come I don't have to enter a password when rebooting the system? The files in encrypted_data are straight away readable, unencrypted?
This is not normally the case.
Check with the zfs command whether the dataset is encrypted.
 
Correct, but in the case of KVM images the datasets are zvols (block devices),
and the files and folders are inside the emulated block device.


This is not normally the case.
Check with the zfs command whether the dataset is encrypted.

Can zvol blockdevices as well as emulated blockdevices both be encrypted file systems?
 
Sorry, but I don't understand where the difference lies between a 'zvol blockdevice' and an 'emulated block device' when creating datasets/file systems. Also I don't understand at which point the ZFS storage is ready for actually storing data on it.

Please help me check whether the steps below are complete:

1) Create a RAID-1 pool:
# zpool create -f -o ashift=12 -m /mnt/zroot zroot mirror sda sdb. I added -m /mnt/zroot since I want the pool mounted under /mnt and not in /zroot

2) Enable the encryption feature for the pool:
# zpool set feature@encryption=enabled zroot

3) Create an encrypted file system for storing VMs/LXCs:
# zfs create -o encryption=on -o keyformat=passphrase zroot/proxmox-data

# pvesm add zfspool encrypted_zfs -pool zroot/proxmox-data

# zfs load-key zroot/proxmox-data

Is zroot/proxmox-data at this point ready for storing VMs and LXCs?

4) Create an encrypted file system for storing data (docs, pictures, videos, etc.):
# zfs create -o encryption=on -o keyformat=passphrase zroot/data

# zfs load-key zroot/data

Is zroot/data at this point ready for storing data?

Is this part "# pvesm add zfspool encrypted_zfs -pool zroot/proxmox-data" then what differentiates a zvol blockdevice from an emulated block device?

Finally, can the 'dataset' level be shared across the network via CIFS, NFS, and AFP?
 
Hi,
Sorry, but I don't understand where the difference lies between a 'zvol blockdevice' and an 'emulated block device' when creating datasets/file systems. Also I don't understand at which point the ZFS storage is ready for actually storing data on it.
In ZFS you can create filesystems and volumes (and snapshots), the word 'dataset' refers to all of those. ZFS filesystems have a regular filesystem interface with directories, files and attributes, while volumes (zvols) are virtual blockdevices. A 'zvol blockdevice' is an 'emulated block device'. A VM can then create a filesystem (has nothing to do with the ZFS of the host anymore) on the virtual blockdevice and store its files and directories in that filesystem.
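The difference can be seen in the two create forms (a sketch; 'mydata', 'myvol' and the size are illustrative):

```shell
# A ZFS filesystem: mounted by ZFS, with files and directories
# managed directly by the host
zfs create zroot/mydata

# A ZFS volume (zvol): a fixed-size virtual block device exposed at
# /dev/zvol/zroot/myvol; a VM formats it with its own filesystem
zfs create -V 32G zroot/myvol
```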

Please help me check whether the steps below are complete:

1) Create a RAID-1 pool:
# zpool create -f -o ashift=12 -m /mnt/zroot zroot mirror sda sdb. I added -m /mnt/zroot since I want the pool mounted under /mnt and not in /zroot

2) Enable the encryption feature for the pool:
# zpool set feature@encryption=enabled zroot

3) Create an encrypted file system for storing VMs/LXCs:
# zfs create -o encryption=on -o keyformat=passphrase zroot/proxmox-data

# pvesm add zfspool encrypted_zfs -pool zroot/proxmox-data

# zfs load-key zroot/proxmox-data

Is zroot/proxmox-data at this point ready for storing VMs and LXCs?

Yes, you should be able to select the storage on VM/LXC creation when you get to the 'Hard Disk' or 'Root Disk' tab, respectively. The virtual disks for VMs will be created as ZFS volumes, and for LXC as ZFS filesystems. The encryption is inherited by all datasets from the parent filesystem, i.e. zroot/proxmox-data.
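That inheritance can be checked recursively, e.g. (a sketch):

```shell
# Every child dataset (VM zvols, LXC filesystems) reports the parent
# filesystem as its encryption root
zfs get -r encryption,encryptionroot zroot/proxmox-data
```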

4) Create an encrypted file system for storing data (docs, pictures, videos, etc.):
# zfs create -o encryption=on -o keyformat=passphrase zroot/data

# zfs load-key zroot/data

Is zroot/data at this point ready for storing data?

Yes. It has to be mounted, but that should happen automatically.

Is this part "# pvesm add zfspool encrypted_zfs -pool zroot/proxmox-data" then what differentiates a zvol blockdevice from an emulated block device?

Finally, can the 'dataset' level be shared across the network via CIFS, NFS, and AFP?

While the keys are loaded, you can share directories on your ZFS filesystem just as you would share directories normally.
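For NFS and SMB, ZFS itself has share properties (a sketch; this assumes the NFS server and Samba packages are installed, and that the dataset is mounted, i.e. its key is loaded). AFP has no native ZFS property; you would export the mounted path with netatalk instead.

```shell
# Export a dataset over NFS (uses the system NFS server)
zfs set sharenfs=on zroot/data

# Export over SMB/CIFS (uses Samba)
zfs set sharesmb=on zroot/data
```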
 
Okay, but doesn't it also work when I create a dataset 'data' for all my data, incl. virtual machines, where I put the virtual machines in a separate folder in the 'data' dataset? Why is it necessary to make a dedicated dataset for virtual machines?
 
How is it possible that the content (I created some directories in it for testing) of my encrypted dataset 'zpool/edata' is immediately viewable after reboot (also after shutdown) with reloading the key? As you suggested, I checked the encryption status with 'zfs get all zpool/edata | more' and the encryption properties seem correct, i.e.:
encryption aes-256-ccm
keylocation prompt
keyformat passphrase
encryptionroot zpool/edata
keystatus unavailable

The test directories I created in the dataset are viewable without loading the keys?
 
How is it possible that the content (I created some directories in it for testing) of my encrypted dataset 'zpool/edata' is immediately viewable after reboot (also after shutdown) with reloading the key? As you suggested, I checked the encryption status with 'zfs get all zpool/edata | more' and the encryption properties seem correct, i.e.:
encryption aes-256-ccm
keylocation prompt
keyformat passphrase
encryptionroot zpool/edata
keystatus unavailable

The test directories I created in the dataset are viewable without loading the keys?


Do you mean '... viewable without reloading the key ...'?

What were the exact commands you used to create the directories? Was zpool/edata mounted at that time? Otherwise you might've created subdirectories below the mount point, but not inside the dataset. The relevant properties are keystatus, mountpoint and mounted.
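Those properties can be queried together, e.g.:

```shell
# If 'mounted' is no, anything created under the mount point path went
# into the parent filesystem, not into the encrypted dataset
zfs get keystatus,mountpoint,mounted zpool/edata
```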
 
Do you mean '... viewable without reloading the key ...'?

What were the exact commands you used to create the directories? Was zpool/edata mounted at that time? Otherwise you might've created subdirectories below the mount point, but not inside the dataset. The relevant properties are keystatus, mountpoint and mounted.
mkdir Documents. Correct command?
You make me wonder whether I actually loaded the key for this dataset before creating the folder. I will test by removing the directories and creating them again after making sure the key is loaded.
 
mkdir Documents. Correct command?
You make me wonder whether I actually loaded the key for this dataset before creating the folder. I will test by removing the directories and creating them again after making sure the key is loaded.

You also need to make sure that the dataset is mounted. Otherwise you are not modifying the contents of the dataset when you create folders below (what's supposed to be) the mount point.
 
Thanks Fabian. I just rebooted and checked the properties: keystatus (unavailable), mountpoint (/zpool/edata), mounted (no). So after reboot I issued the following commands to load the key and mount the dataset:
1) Load key: zfs load-key zpool/edata
2) Mount: zfs mount zpool/edata

After this the properties are filled in correctly. I assume I can just store data (files/folders) in this dataset at this point?
Is there a possibility to put these commands in a script which I can run after boot?
 
Thanks Fabian. I just rebooted and checked the properties: keystatus (unavailable), mountpoint (/zpool/edata), mounted (no). So after reboot I issued the following commands to load the key and mount the dataset:
1) Load key: zfs load-key zpool/edata
2) Mount: zfs mount zpool/edata

After this the properties are filled in correctly. I assume I can just store data (files/folders) in this dataset at this point?

Yes.

Is there a possibility to put these commands in a script which I can run after boot?

Here is an example of how to do it at boot. If you use this approach, it should also get mounted by the zfs-mount.service afterwards (test it out to make sure).

Otherwise you can just put the two commands in a shell script and run that whenever you need it.
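Such a script could look like this (a sketch; run as root, with the dataset name from this thread):

```shell
#!/bin/sh
set -e

DATASET=zpool/edata

# Prompts for the passphrase (keylocation=prompt), then mounts the dataset
zfs load-key "$DATASET"
zfs mount "$DATASET"
```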
 
Here is an example of how to do it at boot. If you use this approach, it should also get mounted by the zfs-mount.service afterwards (test it out to make sure).
Sorry if I am hijacking this thread, but I want to achieve just that (unlocking ZFS while booting). I tested the method you linked to in a VM, but it is not working. The first error was that zfs is located at "/usr/sbin/" and not "/usr/bin". That was easy to fix, but now I am receiving a cryptic error. journalctl shows the following:

Code:
Feb 24 20:41:17 pve systemd[1]: Starting Load encryption keys...
-- Subject: A start job for unit zfs-load-key.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zfs-load-key.service has begun execution.
--
-- The job identifier is 1647.
Feb 24 20:41:17 pve bash[4354]: Key load error: encryption failure
Feb 24 20:41:17 pve bash[4354]: 0 / 1 key(s) successfully loaded
Feb 24 20:41:17 pve systemd[1]: zfs-load-key.service: Main process exited, code=exited, status=255/EXCEPTION
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit zfs-load-key.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 255.
Feb 24 20:41:17 pve systemd[1]: zfs-load-key.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit zfs-load-key.service has entered the 'failed' state with result 'exit-code'.
Feb 24 20:41:17 pve systemd[1]: Failed to start Load encryption keys.
-- Subject: A start job for unit zfs-load-key.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit zfs-load-key.service has finished with a failure.
--
-- The job identifier is 1647 and the job result is failed.

Loading the key from terminal after booting works fine however. Any idea what might cause this error?
 
Sorry if I am hijacking this thread, but I want to achieve just that (unlocking ZFS while booting). I tested the method you linked to in a VM, but it is not working. [...] Loading the key from the terminal after booting works fine however. Any idea what might cause this error?

Hi,
I tested it and got the same behavior. The problem is that you'd need to use systemd-ask-password or an interactive terminal. Using the other method mentioned in the Arch wiki seems to work, and if you don't have too many encryption roots it's not a lot of effort.

There is also the ZFS mount generator to create the desired systemd services automatically. See man 8 zfs-mount-generator and follow the instructions there (especially the example).
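For reference, a unit along those lines (an untested sketch adapted from the Arch wiki approach; the unit name, paths and the dataset 'zpool/edata' are assumptions) might look like:

```ini
# /etc/systemd/system/zfs-load-key-edata.service
[Unit]
Description=Load ZFS encryption key for zpool/edata
DefaultDependencies=no
Before=zfs-mount.service
After=zfs-import.target

[Service]
Type=oneshot
RemainAfterExit=yes
# systemd-ask-password provides the interactive prompt that a plain
# pipeline in ExecStart lacks during boot
ExecStart=/bin/sh -c 'systemd-ask-password "Passphrase for zpool/edata:" | /usr/sbin/zfs load-key zpool/edata'

[Install]
WantedBy=zfs-mount.service
```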
 
