I really tried to find an existing thread, but everything I found was either related to the main/boot pool or to adding an encrypted dataset only, so I figured I'd write down my steps for my own documentation; maybe it'll help someone else out too.
This is for when you already have a PVE set up and simply want to add new disks and storage. In my case, I am adding four SSDs which will be used in a ZFS striped mirror (RAID10) for VM and LXC storage. I'll be using a keyfile stored in /root to auto-unlock the dataset on boot. Obviously you want the location of your keyfile to be secure. "tub" is the name of the pool in my case (yes, it's a bad word play).
1. Find the unique IDs of the disks you want to use. I prefer these over /dev/sdX names because /dev/sdX assignments can change between reboots, while the by-id paths stay stable.
ls -l /dev/disk/by-id/
will get you what you need.
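The entries look something like this (model and serial here are illustrative placeholders; use the full ata-.../nvme-... names, not the ones with a -partN suffix):

Code:
lrwxrwxrwx 1 root root 9 Jan 1 00:00 ata-ExampleSSD_1TB_SERIAL0001 -> ../../sda
lrwxrwxrwx 1 root root 9 Jan 1 00:00 ata-ExampleSSD_1TB_SERIAL0002 -> ../../sdb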
2. Create your new keyfile and set perms (Note: My /root/ is encrypted. Make sure your keyfile is in a secure location and you have a backup.)

dd if=/dev/random of=/root/.zfs-tub.key bs=32 count=1 && chmod 0400 /root/.zfs-tub.key
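keyformat=raw expects the key to be exactly 32 bytes (a 256-bit key for aes-256-gcm), so it's worth a quick size check:

wc -c /root/.zfs-tub.key

This should report 32.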
3. Create the new pool (using RAID10 here, adjust if needed):
zpool create -o ashift=12 -O encryption=on -O keylocation=file:///root/.zfs-tub.key -O keyformat=raw -O compression=lz4 tub mirror <diskId1> <diskId2> mirror <diskId3> <diskId4>
(See zpoolprops and zfsprops for details on -o and -O options respectively)
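If the create succeeded, the two mirrors should show up striped together; zpool status tub will print something roughly like this (with your actual device IDs in place of the placeholders):

Code:
  pool: tub
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        tub            ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            <diskId1>  ONLINE       0     0     0
            <diskId2>  ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            <diskId3>  ONLINE       0     0     0
            <diskId4>  ONLINE       0     0     0

errors: No known data errors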
4. Create a dataset within the pool for our VM storage
zfs create tub/data
(datasets inherit zfs properties (-O ...) from the pool unless specified otherwise, so since we enabled encryption at the pool level, there's no need to set it again at the dataset level)
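To confirm the child dataset picked up the encryption settings:

zfs get encryption,keystatus tub/data

"encryption" should read "aes-256-gcm" and "keystatus" should be "available".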
5. Quick sanity checks
zpool list -v
(Make sure disks and vdevs look right)

zfs get all tub/data

(Make sure properties are what you expect. "encryption" should be "aes-256-gcm")

6. Have a systemd service unlock the pool on boot for us
nano /etc/systemd/system/zfs-unlock-tub.service
- Put the following content:
Code:
[Unit]
Description=Load tub encryption key
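# Run after the pool has been imported, but before ZFS mounts datasets,
# so the key is already loaded when tub and its children get mounted.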
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
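# load-key reads the key from the keylocation property we set on the pool
# (file:///root/.zfs-tub.key in this setup).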
ExecStart=/usr/sbin/zfs load-key tub

[Install]
WantedBy=zfs-mount.service
Enable at boot:

systemctl daemon-reload && systemctl enable zfs-unlock-tub
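Optional: to check that the service actually works without rebooting, you can export and re-import the pool (only do this while nothing is using it yet):

Code:
zpool export tub
zpool import tub          # a plain import does not load keys
zfs get keystatus tub     # expect "unavailable"
systemctl start zfs-unlock-tub.service
zfs get keystatus tub     # expect "available"
zfs mount -a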
7. Add the storage to PVE for VM and LXC use - using the GUI for this (a shell alternative follows below)
Navigate to Datacenter -> Storage -> Add -> ZFS. Give it a name ("ID"), select tub/data for "ZFS Pool", set "Content" to "Disk image, Container", and enable "Thin provision" if you want to use it. Hit Add.

Give it a couple of seconds, and the storage should appear in the left sidebar, ready for use.
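If you prefer the shell for this step too, pvesm can create the same storage entry ("tub-data" is just an example ID; --sparse 1 matches the "Thin provision" checkbox):

pvesm add zfspool tub-data --pool tub/data --content images,rootdir --sparse 1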
