ZFS replication and ZFS native encryption are a problematic combination. The replication used by PVE doesn't work with encrypted zvols/datasets, so features like migration aren't possible without patching, as migration also relies on replication. See here:
https://bugzilla.proxmox.com/show_bug.cgi?id=2350
... yada yada yada ...
Unlocking using passphrases:
To auto-unlock datasets/zvols using passphrases stored in files, you can use a script like the one below.
Create the script:
nano /root/zfs_unlock_passphrase.sh && chmod 700 /root/zfs_unlock_passphrase.sh && chown root:root /root/zfs_unlock_passphrase.sh
Add there:
Bash:
#!/bin/bash

# Dataset/passphrase-file pairs: the first element of DATASETS is unlocked
# with the first element of PWDFILES, and so on.
DATASETS=( "YourPool/SomeDataset" "YourPool/AnotherDataset" )
PWDFILES=( "/path/to/file/with/passphrase/of/SomeDataset.pwd" "/path/to/file/with/passphrase/of/AnotherDataset.pwd" )

unlockDataset () {
    local dataset=$1
    local pwdfile=$2
    local type keystatus
    # check if the dataset exists (accept both filesystems and zvols)
    type=$(zfs get -H -o value type "${dataset}" 2>/dev/null)
    if [ "$type" == "filesystem" ] || [ "$type" == "volume" ]; then
        # check if the dataset isn't already unlocked
        keystatus=$(zfs get -H -o value keystatus "${dataset}")
        if [ "$keystatus" == "unavailable" ]; then
            zfs load-key "${dataset}" < "${pwdfile}"
            # check if the dataset is now unlocked
            keystatus=$(zfs get -H -o value keystatus "${dataset}")
            if [ "$keystatus" != "available" ]; then
                echo "Error: Unlocking dataset '${dataset}' failed"
                return 1
            fi
        else
            echo "Info: Dataset '${dataset}' already unlocked"
            return 0
        fi
    else
        echo "Error: No valid dataset found: '${dataset}'"
        return 1
    fi
}

unlockAll () {
    local noerror=0
    # check that the number of datasets and pwdfiles is equal
    if [ ${#DATASETS[@]} -eq ${#PWDFILES[@]} ]; then
        # loop through each dataset/pwdfile pair
        for (( i=0; i<${#DATASETS[@]}; i++ )); do
            unlockDataset "${DATASETS[$i]}" "${PWDFILES[$i]}"
            if [ $? -ne 0 ]; then
                noerror=1
                break
            fi
        done
    else
        echo "Error: Wrong number of datasets/pwdfiles"
        noerror=1
    fi
    # mount all datasets
    if [ $noerror -eq 0 ]; then
        zfs mount -a
    fi
    return $noerror
}

unlockAll
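Before wiring the script into systemd, it's worth running it once by hand to check that the passphrase files and dataset names are right (the dataset name below is just the placeholder from the arrays above):
Bash:
bash /root/zfs_unlock_passphrase.sh
zfs get keystatus,mounted YourPool/SomeDataset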
You can then create a systemd service that runs the above script on boot:
nano /etc/systemd/system/zfs_unlock_passphrase.service
Add there:
Code:
[Unit]
Description=Unlocks ZFS datasets using passphrases stored in files
After=pve-guests.service

[Service]
Type=oneshot
ExecStart=/bin/bash /root/scripts/zfs_unlock_passphrase.sh
User=root

[Install]
WantedBy=multi-user.target
Enable service:
systemctl enable zfs_unlock_passphrase.service
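To check that everything works before the next reboot, you can start the service once by hand and look at the key status (assuming the path in ExecStart matches where you actually saved the script; YourPool is the placeholder pool name from the script):
Bash:
systemctl start zfs_unlock_passphrase.service
systemctl status zfs_unlock_passphrase.service
zfs get -r keystatus YourPool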
You also need to create the files that contain your passphrases, then edit the script so it knows which file unlocks which dataset. The DATASETS and PWDFILES arrays store these pairs: the first element of DATASETS is unlocked with the first element of PWDFILES, and so on. A minimal example of creating such a passphrase file is sketched below.
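As a minimal sketch (path and passphrase value are just placeholders matching the script above), creating one of these passphrase files could look like this:
Bash:
# write the passphrase (placeholder value) into the file and restrict access to root
printf '%s' 'MySecretPassphrase' > /path/to/file/with/passphrase/of/SomeDataset.pwd
chmod 600 /path/to/file/with/passphrase/of/SomeDataset.pwd
chown root:root /path/to/file/with/passphrase/of/SomeDataset.pwd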
Both ways worked fine here, but with passphrases you might want to add a global start delay so PVE won't try to start the guests before the datasets are actually unlocked. With key files this isn't a problem. One way to set such a delay is sketched below.
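If I remember right, newer PVE versions expose such a delay as a node option ("Start on boot delay" under the node's Options in the GUI); on the CLI it should be something like the following, but treat the exact option name as an assumption and check pvenode config set --help first:
Bash:
# delay the automatic "start all guests on boot" by 60 seconds
# (option name assumed here - verify with: pvenode config set --help)
pvenode config set --startall-onboot-delay 60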
Hi - I've just registered on the Proxmox forum just to say thanks ... this really helped me!
Also, though ... I noticed that your suggested zfs_unlock_passphrase.service uses a different path to zfs_unlock_passphrase.sh (it has "/scripts" in there). It's only an issue if you follow along verbatim and copy/paste without actually noticing it, of course. (I'm presuming there's no convention that validates its apparent inconsistency with the earlier example paths.)
In my case I've just built a new desktop system for myself - the first "desktop" (rather than mobile-chipset based system) I've had in years! I wanted a decent graphics card for AI stuff ... and that was fine with a cheap eGPU from AliExpress ... but I also discovered how much fun Microsoft Flight Simulator is in VR, and my eGPU and mobile-chipset NUC weren't the best companions for a video card trying to perform that function. Anyway - I wanted a "desktop PC", but I love the flexibility of Proxmox and had passed through the eGPU on my NUCs just fine for "desktop" OSes etc. rather than just RDP'ing into them, and figured I could do the same on the bigger machine. On my NUCs the TPM and SSD "self-encrypting" capability seemed the best/easiest option for encryption, and on my triple-boot laptop I use LUKS under Debian 11 (with Proxmox installed) ... but for this machine ZFS seemed like the way to go, with fast NVMe drives and plenty of RAM and CPU horsepower. I wanted a Windows VM to start up with graphics etc. passed through, after the ROOT dataset was decrypted and after PVE got going.
Like Astraea, I'm not looking for replication or clustering with this Proxmox instance. I love Proxmox for making my home-lab interests easier to organise and manage, and it lets me make the most of the hardware I have - but I'm a dev rather than an IT person and this stuff is just for fun or to run things at home ... and I don't really need a "cluster" running 24/7. I prefer to turn off some machines when I'm not using them, and still be able to manage my Proxmox instances.
I never quite know what I'm doing; I spend most of my time in Windows - mostly because of work - and I'm completely new to ZFS, so I rather blindly tried following these instructions. I tried adding a couple of seconds' delay for the PVE instance and a couple of seconds' boot delay for the VM I set to start up after PVE. But in my experience the ZFS dataset that I'd set in the script didn't appear to mount, and I had to run the script in the GUI's shell window before my chosen VM continued starting up. I knew the script worked, but it didn't seem to run at the "right time" by itself despite my clumsy delays. I could stop the VM start process, and then, if I manually restarted it, the script would run.
I'm sure it's me not understanding lol. I'm quite inept with Linux unfortunately though I hope to redress that.
Maybe the intent of your script was to run after a manual VM start, with no consideration of VMs set to start automatically. Or maybe I implemented it incorrectly.
What made my set-up work for me was to not use "After=pve-guests.service" for the zfs_unlock_passphrase.service, but After=pveproxy.service instead, and that seemed to give the behaviour I was looking for. Maybe that's hacky and I've overlooked the real reason for my problems ... but anyway - that's what worked for me in my situation; the adjusted [Unit] section is shown below.
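For reference, the [Unit] section of my service file ended up roughly like this (same file as above, only the After= line changed):
Code:
[Unit]
Description=Unlocks ZFS datasets using passphrases stored in files
After=pveproxy.service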
But it might have taken me a long time to get that far with this desired behaviour, what with my ignorance and time constraints, if it weren't for your script posted here ... so ... thanks again.