[TUTORIAL] Automatically unlocking encrypted PVE server over LAN

Dunuin

Edit: for a solution see this post, or possibly a later post in this thread, since posts can't be edited anymore after 30 days.

Hi,

Not directly a PVE question, but maybe someone has an idea how to accomplish this.

I've got multiple non-clustered PVE servers with full-system encryption using ZFS native encryption. One of them runs 24/7 and the others are only started when needed (damn electricity prices...). To unlock the root datasets after boot, I connect via SSH to dropbear-initramfs. Dropbear on the server automatically runs "/usr/bin/zfsunlock", which asks for the passphrase to unlock the pool and then terminates the SSH session. So the only thing that session allows is typing in that passphrase.
This works fine when unlocking the servers manually using PuTTY, where I can copy-paste the passphrase stored in a KeePass safe.
But I can't find a solution for unlocking these PVE servers automatically. For example, every Sunday at 00:00 all of these servers have to be up just for backup and maintenance tasks: the PVEs back each other up to virtualized PBSs, ZFS scrubs and PBS prune/GC/verify jobs run, and especially the weekly ZFS snapshots are taken. The snapshots are important because without them being created every Sunday at 00:00, the ZFS replication will fail once a server is booted again; incremental replication then fails with missing snapshots, and all those TBs of data have to be replicated from scratch. So I need to be at home to unlock the servers manually, or boot and unlock them before leaving when I know I won't be back before 00:00. The servers then run all day without being used, wasting money.

What I would like to do is create a script on the 24/7 PVE server that unlocks the other servers for me. It wouldn't be a problem that the passphrase is stored on that server: the server itself is fully encrypted, so the stored passphrase isn't accessible once it gets shut down (= stolen).

But I just can't find a way to open an SSH session from a bash script and auto-type the passphrase. Piping stdin from a file into the ssh command doesn't seem to work.
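For example, naive attempts like these (with a hypothetical passphrase file) don't get the passphrase into the remote prompt:
Code:
cat /root/passphrase.txt | ssh -p 10022 root@192.168.43.50
ssh -p 10022 root@192.168.43.50 < /root/passphrase.txt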

Anyone got an idea how I could solve this?
 
I think this is the wrong forum for such a question... you need bash support. I can advise you to try the Stack Overflow forum.
 
I don't know if zfsunlock has a feature to do that, but on the shell you can accomplish this with "expect".

regards
Peter
As far as I understand it, zfsunlock runs this to ask for the password and unlock the dataset with the answer:
Code:
systemd-ask-password "Encrypted ZFS password for ${zfs_fs_name}:" | /sbin/zfs load-key "$zfs_fs_name"

I finally got this working with expect. In case someone else faces the same problem (or my future self, who forgot how to do it and finds this thread via Google... like so often...), here is how to do it:

My PVE nodes use an encrypted "rpool/ROOT/pve-1" dataset that I unlock via zfsunlock, with dropbear-initramfs listening on port 10022 and authenticating with an RSA key pair. The "master node" unlocks the "target node" with IP 192.168.43.50.

1.) You need the expect program, which doesn't come with PVE by default, so we first need to install its package on the master node:
apt update && apt install expect

2.) We need an RSA key pair for the SSH session. If you already have one you can use it, but I prefer to create a new one just for unlocking the PVE nodes. To create such a key pair, run this on the master node:
ssh-keygen -t rsa -b 4096 -C "unlock PVE nodes using SSH"
I store my key at "/root/.keys/unlock_id_rsa" and don't set a password. This will also create "/root/.keys/unlock_id_rsa.pub" containing the public key.
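If you want this step non-interactive as well, the key pair can be created in one go (a sketch; -N "" sets an empty password):
Code:
mkdir -p /root/.keys
ssh-keygen -t rsa -b 4096 -C "unlock PVE nodes using SSH" -f /root/.keys/unlock_id_rsa -N ""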

3.) To be able to connect to the target node using the private key stored in "/root/.keys/unlock_id_rsa", we first need to add the public key to the dropbear-initramfs of the target node. For that, log in to your target node and copy the content of the public key file "/root/.keys/unlock_id_rsa.pub" into /etc/dropbear-initramfs/authorized_keys. Make sure to add it as a single line. Then rebuild your initramfs by running update-initramfs -u -k all.
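A sketch of that step, assuming the target node's regular sshd on port 22 is still reachable:
Code:
# on the master node: copy the public key over
scp /root/.keys/unlock_id_rsa.pub root@192.168.43.50:/tmp/
# on the target node: append it as a single line and rebuild the initramfs
cat /tmp/unlock_id_rsa.pub >> /etc/dropbear-initramfs/authorized_keys
update-initramfs -u -k all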

4.) To automatically unlock the root dataset of the target node, we need to store the dataset's passphrase on the master node. For that I create a new file "/root/.keys/rpool_root.pwd" and put my passphrase there as a single line. Then make it accessible only to root:
Code:
chown root:root /root/.keys/rpool_root.pwd
chmod 600 /root/.keys/rpool_root.pwd
Keep in mind that anyone with access to this file will be able to unlock your target node, so make sure to store it on an encrypted filesystem.
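One way to create that file without the passphrase landing in your shell history (a sketch):
Code:
umask 077    # new files readable and writable by root only
read -r -s -p "Passphrase: " pass && printf '%s' "$pass" > /root/.keys/rpool_root.pwd
unset pass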

5.) To start the SSH session and type in the passphrase, we make use of an expect script. I store my script at "/root/scripts/unlock_pve_node.expect" on the master node.
Here is the script:
Code:
#!/usr/bin/expect

# Opens an SSH session using an RSA key pair and then types in the ZFS root dataset passphrase stored in a file.
# You will need the expect package that doesn't come with PVE, so you need to install it first:
# apt update && apt install expect
#
# Usage: /path/to/unlock_pve_node.expect <host> <ssh user> <sshport> <sshprivkeyfile> <zfspass>
# Example: /root/scripts/unlock_pve_node.expect "192.168.43.50" "root" "10022" "/root/.keys/unlock_id_rsa" "/root/.keys/rpool_root.pwd"
#
# v1.0 from 2023.04.04 16:25

# reads the content of a file and returns it
proc slurp {file} {
    set fh [open $file r]
    set ret [read $fh]
    close $fh
    # strip the trailing newline (if any) so it isn't sent as part of the passphrase
    return [string trimright $ret "\r\n"]
}

# increase the timeout a bit in case the host hasn't finished booting yet
set timeout 180

# open SSH session using private key
spawn ssh -p [lindex $argv 2] -i [lindex $argv 3] -o ConnectTimeout=180 -o StrictHostKeyChecking=no [lindex $argv 1]@[lindex $argv 0]

# unlock the root dataset
expect "*?Encrypted ZFS password for*?" {
    send "[slurp [lindex $argv 4]]\r"
}

interact
Make the script executable:
Code:
chown root:root /root/scripts/unlock_pve_node.expect
chmod 750 /root/scripts/unlock_pve_node.expect

6.) You should now be able to unlock the target node by running the expect script as root on your master node with arguments like this:
/path/to/unlock_pve_node.expect <host> <ssh user> <sshport> <sshprivkeyfile> <zfspass>
Example:
/root/scripts/unlock_pve_node.expect "192.168.43.50" "root" "10022" "/root/.keys/unlock_id_rsa" "/root/.keys/rpool_root.pwd"
You should then see something like this:
Code:
root@j3710:~/.keys# /root/scripts/unlock_pve_node.expect "192.168.43.50" "root" "10022" "/root/.keys/unlock_id_rsa" "/root/.keys/rpool_root.pwd"
spawn ssh -p 10022 -i /root/.keys/unlock_id_rsa -o ConnectTimeout=180 -o StrictHostKeyChecking=no root@192.168.43.50

Unlocking encrypted ZFS filesystems...
Enter the password or press Ctrl-C to exit.

 Encrypted ZFS password for rpool/ROOT: *****************************************************
Password for rpool/ROOT accepted.
Unlocking complete.  Resuming boot sequence...
Please reconnect in a while.
Connection to 192.168.43.50 closed.
 
Hi, how do I add this to cron?

/root/scripts/unlock_pve_node.expect "192.168.43.50" "root" "10022" "/root/.keys/unlock_id_rsa" "/root/.keys/rpool_root.pwd"
 
You could add */3 * * * * root /root/scripts/unlock_pve_node.expect "192.168.43.50" "root" "10022" "/root/.keys/unlock_id_rsa" "/root/.keys/rpool_root.pwd" > /dev/null 2>&1 to /etc/crontab and do a systemctl restart cron to try the unlock every 3 minutes.
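Since dropbear-initramfs only listens during early boot (the port is closed once the node is fully up or still powered off), a small wrapper could check the port before spawning an expect session. A sketch, assuming nc is installed and using the paths from above:
Code:
#!/bin/bash
# hypothetical wrapper: only attempt the unlock if dropbear-initramfs is listening
HOST="192.168.43.50"
PORT="10022"
if nc -z -w 5 "$HOST" "$PORT"; then
    /root/scripts/unlock_pve_node.expect "$HOST" "root" "$PORT" \
        "/root/.keys/unlock_id_rsa" "/root/.keys/rpool_root.pwd"
fi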
 
And if you run the expect script manually in a shell, it works?
Did you reboot the node after editing the crontab, so cron picks up the edited crontab?

You could remove the "> /dev/null 2>&1" so output and error messages aren't suppressed.
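One more difference between your shell and cron (an assumption, not verified on your setup): cron provides no terminal, and the interact at the end of the script fails when stdin isn't a tty. For non-interactive runs you could replace the final interact with:
Code:
# wait until dropbear closes the session instead of attaching a (non-existent) terminal
expect eof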
 
Yes, it works in the terminal. The cron host was rebooted.


spawn ssh -p 10022 -i /root/.keys/unlock_id_rsa -o ConnectTimeout=180 -o StrictHostKeyChecking=no root@10.10.10.44

Unlocking encrypted ZFS filesystems...
Enter the password or press Ctrl-C to exit.

Encrypted ZFS password for rpool/ROOT: ****************
Password for rpool/ROOT accepted.
Unlocking complete. Resuming boot sequence...
Please reconnect in a while.
Connection to 10.10.10.44 closed.


remove "> /dev/null 2>&1" unfortunately it doesn't work



I made a workaround: a Win11 VM + PuTTY + Automatic Mouse and Keyboard.
 
