[SOLVED] Can't set a symlink in /etc/pve for ZFS encryption

fireon

Hello all,

I would like to add a symlink in /etc/pve/priv/, but it does not let me do that. For example:
Code:
root@virtu01 /etc/pve/priv/storage # ln -s /root/bla .
ln: failed to create symbolic link './bla': Function not implemented
But I see some symlinks under /etc/pve, so what do I have to do to make this work for me?

Many thanks
 
OK, I understand. Is it possible to move the files from /etc/pve/priv/storage to a ZFS encrypted dataset and change the path for the storage auth info? Because if PVE and PBS sit in the same server room and both devices are stolen, it is easy to also access the PBS encrypted backups.
 
The backing sqlite DB and related files from pmxcfs are located in /var/lib/pve-cluster/, so if that one is encrypted the data at rest is as secure as other sensitive files on that system. /etc/pve is just a FUSE mount that stores data solely in the backing DB after all.
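If you want to verify that on your own node, a quick read-only check could look like this (the last command assumes the sqlite3 package is installed and that the pmxcfs schema still uses a table called tree, which may differ between versions):
Code:
# /etc/pve is a FUSE mount provided by pmxcfs
findmnt /etc/pve
# the persistent data lives in the backing sqlite DB next to a few support files
ls -l /var/lib/pve-cluster/
# optional peek into the DB (needs sqlite3; table name may change between versions)
sqlite3 /var/lib/pve-cluster/config.db 'SELECT name FROM tree LIMIT 10;'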
 
Now I have tested two things. First I created a new encrypted dataset under /rpool/pmxcfs and also created a symlink:

Code:
/var/lib/pve-cluster -> /rpool/pmxcfs/pve-cluster/

After rebooting and entering the password for the dataset, I ran this command:
Code:
systemctl start pve-cluster.service pve-firewall.service pve-guests.service pve-ha-crm.service pve-ha-lrm.service pvesr.service pvestatd.service

The pve-guests.service failed to start:
Code:
Sep 30 00:39:58 pvetest systemd[1]: Starting PVE guests...
Sep 30 00:39:59 pvetest pvesh[1495]: ipcc_send_rec[1] failed: Connection refused
Sep 30 00:39:59 pvetest pvesh[1495]: ipcc_send_rec[2] failed: Connection refused
Sep 30 00:39:59 pvetest pvesh[1495]: ipcc_send_rec[3] failed: Connection refused
Sep 30 00:39:59 pvetest pvesh[1495]: Unable to load access control list: Connection refused
Sep 30 00:39:59 pvetest systemd[1]: pve-guests.service: Main process exited, code=exited, status=111/n/a
Sep 30 00:39:59 pvetest systemd[1]: pve-guests.service: Failed with result 'exit-code'.
Sep 30 00:39:59 pvetest systemd[1]: Failed to start PVE guests.

I get the same error if I mount the dataset directly on /var/lib/pve-cluster:
Code:
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool                         2.30G  27.7G      104K  /rpool
rpool/ROOT                    1.95G  27.7G       96K  /rpool/ROOT
rpool/ROOT/pve-1              1.95G  27.7G     1.95G  /
rpool/data                     363M  27.7G       96K  /rpool/data
rpool/data/subvol-100-disk-0   363M  7.65G      363M  /rpool/data/subvol-100-disk-0
rpool/pve-cluster              240K  27.7G      240K  /var/lib/pve-cluster
I also set a dependency on the mountpoint in pve-cluster.service:

EDITOR=nano systemctl edit --full pve-cluster

The lines I added are the third and fourth entries in the [Unit] section (After= and RequiresMountsFor=).

Code:
[Unit]
Description=The Proxmox VE cluster filesystem
ConditionFileIsExecutable=/usr/bin/pmxcfs
After=var-lib-pve\x2dcluster.mount
RequiresMountsFor=/var/lib/pve-cluster
Wants=corosync.service
Wants=rrdcached.service
Before=corosync.service
Before=cron.service
After=network.target
After=sys-fs-fuse-connections.mount
After=time-sync.target
After=rrdcached.service
DefaultDependencies=no
Before=shutdown.target
Conflicts=shutdown.target

[Service]
ExecStart=/usr/bin/pmxcfs
KillMode=mixed
Restart=on-failure
TimeoutStopSec=10
Type=forking
PIDFile=/run/pve-cluster.pid

[Install]
WantedBy=multi-user.target
If I do this, the server starts normally but ignores the encrypted dataset completely. The only way is to stop pve-cluster.service, remove the whole /var/lib/pve-cluster, mount the dataset, and start pve-cluster.service again.
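Spelled out as commands, that manual switch-over looks roughly like this (a sketch only; it assumes the encrypted dataset rpool/pve-cluster already contains the real copy of the cluster data, and it moves the stale directory aside instead of deleting it outright):
Code:
# stop pmxcfs so the backing DB is not in use
systemctl stop pve-cluster.service
# move the stale, freshly created data aside (safer than removing it)
mv /var/lib/pve-cluster /var/lib/pve-cluster.old
# load the key and mount the encrypted dataset at /var/lib/pve-cluster
zfs load-key rpool/pve-cluster
zfs mount rpool/pve-cluster
# start the cluster filesystem and the dependent services again
systemctl start pve-cluster.service pve-firewall.service pvestatd.service \
    pve-ha-crm.service pve-ha-lrm.service pvesr.service pve-guests.service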
 
The pve-guests.service failed to start:
That one is only a follow-up failure, so let's look at it later. Rather more interesting: what's the status of pve-cluster.service?

I also set an dependency for the mountpoint in the pve-cluster.service:
Hmm, but is there a var-lib-pve\x2dcluster.mount unit that depends on the "password ask unit"?

And/or how does the volume get unlocked? Do you have a separate unit for that, something like the Arch Linux wiki proposes?
https://wiki.archlinux.org/title/ZFS#Unlock_at_boot_time:_systemd


You effectively need to encode the following dependencies:
  • pve-cluster.service: add After=var-lib-pve\x2dcluster.mount and Requires=var-lib-pve\x2dcluster.mount
  • var-lib-pve\x2dcluster.mount (or whatever dataset you're using): add After=/Requires= for the service that loads your keys
  • key service/unit: add After=zfs-import.target and Before=zfs-mount.service. You can probably narrow that down so that only the /var/lib/pve-cluster mount is affected if you encrypt only that dataset and do not want to delay boot for the rest; in that case you could drop the Before= on zfs-mount.service and call zfs mount -a again in an ExecStartPost= of the key unlock service (see the sketch after this list).
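As a rough sketch of that key service (the unit name and the way the passphrase is asked for are only examples, modelled on the Arch wiki unit linked above):
Code:
# /etc/systemd/system/zfs-load-key@.service  -- example name
[Unit]
Description=Load ZFS encryption key for %I
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
# ask for the passphrase on the console and load the key for the dataset
ExecStart=/bin/sh -c 'systemd-ask-password "Passphrase for ZFS dataset %I:" | zfs load-key %I'

[Install]
WantedBy=zfs-mount.service
It would be enabled per dataset with the escaped dataset name as instance, e.g. systemctl enable 'zfs-load-key@rpool-pve\x2dcluster.service'; the drop-in below then handles ordering pmxcfs after the mount.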
EDITOR=nano systemctl edit --full pve-cluster

A bit off-topic, but I'd rather drop the --full: that way you keep separate "modification" and original snippets. --full copies the whole unit, and you may run into more trouble if we have to update the original unit, IIRC.

So for the existing PVE-managed units rather do EDITOR=nano systemctl edit pve-cluster and add:
Code:
[Unit]
After=var-lib-pve\x2dcluster.mount
Requires=var-lib-pve\x2dcluster.mount
 
Hello @t.lamprecht, and many thanks for your reply. The dataset is mounted manually with a passphrase. The problem is that I can't set the dependency on the mountpoint, because the systemd unit var-lib-pve\x2dcluster.mount doesn't exist after a reboot. This unit is only generated once the encrypted dataset is mounted.

So if I create the dataset for the cluster filesystem, every service starts successfully, but with the wrong data, because no dataset is mounted. Unfortunately the dependencies do not help, because the mount unit does not exist yet. Really annoying.

Before mount:
Code:
systemctl status "var-lib-pve\x2dcluster.mount"
Unit var-lib-pve\x2dcluster.mount could not be found.
After mount:
Code:
systemctl status "var-lib-pve\x2dcluster.mount"
● var-lib-pve\x2dcluster.mount
     Loaded: loaded
     Active: active (mounted) since Sun 2021-10-10 01:50:30 CEST; 2s ago
      Where: /var/lib/pve-cluster
       What: rpool/pve-cluster

That is why there are no errors when booting the system: everything starts normally, just with the wrong cluster filesystem.
Code:
NAME                           USED  AVAIL     REFER  MOUNTPOINT
rpool                         2.31G  27.7G      104K  /rpool
rpool/ROOT                    1.95G  27.7G       96K  /rpool/ROOT
rpool/ROOT/pve-1              1.95G  27.7G     1.95G  /
rpool/data                     363M  27.7G       96K  /rpool/data
rpool/data/subvol-100-disk-0   363M  7.65G      363M  /rpool/data/subvol-100-disk-0
rpool/pve-cluster              240K  27.7G      240K  /var/lib/pve-cluster
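Purely as an untested idea (not something tried in this thread): the mount unit would already exist at boot if the dataset used a legacy mountpoint and were listed in /etc/fstab with noauto, because systemd-fstab-generator then creates var-lib-pve\x2dcluster.mount even while nothing is mounted yet. A sketch:
Code:
# switch the dataset to a legacy (fstab-managed) mountpoint
zfs set mountpoint=legacy rpool/pve-cluster

# /etc/fstab entry -- noauto, because the key has to be loaded by hand anyway
# rpool/pve-cluster  /var/lib/pve-cluster  zfs  defaults,noauto  0  0

# after each boot: load the key, mount, then start the services
zfs load-key rpool/pve-cluster
systemctl start 'var-lib-pve\x2dcluster.mount' pve-cluster.service
With the Requires=/After= drop-in in place, pve-cluster should then fail at boot instead of silently starting with a fresh, wrong cluster filesystem.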
If I don't mount the dataset directly at the correct path but put a symlink there instead, at least nothing wrong is started. Then, of course, there is also an error message, which is expected.
 
If I map this with a symlink (so pve-cluster can't start), I get this error:

Code:
systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/etc/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2021-10-12 23:09:38 CEST; 345ms ago
    Process: 3327 ExecStart=/usr/bin/pmxcfs (code=exited, status=255/EXCEPTION)
        CPU: 11ms

Oct 12 23:09:38 pvetest systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 5.
Oct 12 23:09:38 pvetest systemd[1]: Stopped The Proxmox VE cluster filesystem.
Oct 12 23:09:38 pvetest systemd[1]: pve-cluster.service: Start request repeated too quickly.
Oct 12 23:09:38 pvetest systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Oct 12 23:09:38 pvetest systemd[1]: Failed to start The Proxmox VE cluster filesystem.
In this case the encrypted filesystem is not mounted yet, so these services fail to start:
Code:
systemctl --failed
  UNIT                 LOAD   ACTIVE SUB    DESCRIPTION
● pve-cluster.service  loaded failed failed The Proxmox VE cluster filesystem
● pve-firewall.service loaded failed failed Proxmox VE firewall
● pve-guests.service   loaded failed failed PVE guests
● pve-ha-crm.service   loaded failed failed PVE Cluster HA Resource Manager Daemon
● pve-ha-lrm.service   loaded failed failed PVE Local HA Resource Manager Daemon
● pvesr.service        loaded failed failed Proxmox VE replication runner
● pvestatd.service     loaded failed failed PVE Status Daemon
CONCLUSION: It is not easy to move /var/lib/pve-cluster to a ZFS encrypted filesystem. The only other (not so good) solution is to create an encrypted dataset for the backups too and not use the encryption of PBS. I do not yet know what disadvantages this entails.
 
I have now solved it differently: I simply encrypted the PBS ZFS dataset additionally.
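For reference, the rough shape of that approach (dataset and datastore names here are only examples) is to create a passphrase-encrypted dataset on the PBS host and put the datastore on top of it:
Code:
# on the PBS host: create a passphrase-encrypted dataset
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/backup-enc
# create a datastore on top of it
proxmox-backup-manager datastore create backup-enc /rpool/backup-enc
# after every reboot the dataset has to be unlocked before backups can run
zfs load-key rpool/backup-enc && zfs mount rpool/backup-enc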
 
I know this thread is old, but for anyone on the Internet who was looking for this like me: you can do it with a bind mount instead of a symlink. I have around 14 TB of backups, so recreating the PBS volume wasn't an option. If you have an empty PBS volume, definitely encrypt it before doing any backups and avoid this hacky process. But if you are in my situation, this works:

Please remember to back up those keys before doing this! This could be potentially catastrophic!
Do not do this if you have a PVE cluster! This was tested only on a single-node system!


Code:
cp -rp /etc/pve/priv/storage/* /data/data-encrypted/backup-keys/image-backups/
rm -rf /etc/pve/priv/storage/*
mount -o rw,bind /data/data-encrypted/backup-keys/image-backups /etc/pve/priv/storage

Of course you have to delete the keys from /etc/pve/priv/storage before bind mounting them from the encrypted ZFS volume, otherwise it's a bit pointless.

Here is my startup script for anyone interested. My VMs carry the tag "encrypted" so the script can find them and start them up automatically after the volume is unlocked:

Code:
#!/bin/bash
# Unlock the encrypted ZFS volume, bind mount the PVE storage keys from it,
# re-enable the encrypted storage and start all VMs tagged with "encrypted".
ENCRYPTED_VOL="data/data-encrypted"
ENCRYPTED_PVE_POOL="local-zfs-encrypted"
ENCRYPTED_PVE_KEYS_SRC="/data/data-encrypted/backup-keys/image-backups"
ENCRYPTED_PVE_KEYS_DST="/etc/pve/priv/storage"
# VMIDs of all VMs on this node that carry the tag "encrypted"
ENCRYPTED_VMIDS=$(pvesh get /nodes/$(hostname -s)/qemu --output-format=json | jq -r '.[] | select (.tags != null) | select(.tags|contains("encrypted")).vmid')

die()
{
   echo "[ERROR] $@" >&2
   exit 1
}

# Disable the storage while the volume is still locked
pvesm set $ENCRYPTED_PVE_POOL --disable 1

# Load the key (prompts for the passphrase) and mount the dataset
zfs mount -l $ENCRYPTED_VOL
[[ $? -ne 0 ]] && die "Failed to unlock ZFS vol: $ENCRYPTED_VOL, try again"
echo "[INFO] Successfully unlocked ZFS vol: $ENCRYPTED_VOL"

# Make sure any remaining datasets get mounted as well
while ! zfs mount -a
do
    echo "[INFO] Waiting for ZFS volume $ENCRYPTED_VOL to become online"
    sleep 10
done

# Bind mount the storage keys from the encrypted dataset into /etc/pve
mount -o rw,bind $ENCRYPTED_PVE_KEYS_SRC $ENCRYPTED_PVE_KEYS_DST
[[ $? -ne 0 ]] && die "Failed to bind mount the PVE storage keys: $ENCRYPTED_PVE_KEYS_SRC on $ENCRYPTED_PVE_KEYS_DST"
echo "[INFO] Successfully bind mounted the PVE storage keys: $ENCRYPTED_PVE_KEYS_DST"

# Re-enable the storage and wait until PVE reports it as active
pvesm set $ENCRYPTED_PVE_POOL --disable 0

while ! pvesm status --storage $ENCRYPTED_PVE_POOL 2>&1 | grep "$ENCRYPTED_PVE_POOL" | grep -qw 'active'
do
    echo "[INFO] Waiting for PVE Pool $ENCRYPTED_PVE_POOL to become online"
    sleep 10
done

# Finally start all VMs tagged as "encrypted"
for VM in $ENCRYPTED_VMIDS
do
    echo "[INFO] Starting VM: $VM"
    qm start $VM
    [[ $? -ne 0 ]] && die "Failed to start VM: $VM"
done

exit 0

Enjoy!
 
I know this thread is old, but for anyone on the Internet who was looking for this like me - you can do it with bind mount instead of a symlink. [...]

that's not guaranteed to work at all (our code doesn't always read things from /etc/pve via the file system abstraction!), and is also quite dangerous since you break the assumption of PVE that there's a consistent view of its content across the cluster. please don't do such hacks - if you want the data stored in /etc/pve to be encrypted at rest, encrypt the backing disk.
 
that's not guaranteed to work at all (our code doesn't always read things from /etc/pve via the file system abstraction!), and is also quite dangerous [...]
Yeah, after some thought I figured that pve-cluster would be very confused by another filesystem being bind mounted onto /etc/pve. I ended up using dm-crypt for my backup destination, since encrypting /var/lib/pve-cluster would mean losing the ability to have encrypted and unencrypted VMs on the same system.

Btw, it would be very useful if we could specify the path to the encryption key in /etc/pve/storage.cfg. That way one could place it outside /etc/pve and still avoid driving the PVE processes mad with a hacky solution like mine. It would also provide more security for environments where that is important (STIG, SOX): right now, if you get hold of a PVE host, you can steal the backup keys and decrypt the backups, rendering the encryption useless. If we could place the backup keys on an encrypted volume, we could control how and when those keys are available for access. One could even schedule them to be mounted/unmounted only during backups and restores, or store them in a vault and control the access using timed tokens, with the vault open only during the backup window. Sort of an end-to-end encryption solution. Of course that would be very inconvenient as a default, but the option to do it would be great.

Of course the best option would be to only store the public key on the PVE host and keep the private key off-system, so it is only needed during restores. That would avoid those gymnastics altogether and provide a much better security model, but I don't know how easy it would be to implement - my guess is that it would be hard.

Thanks @fabian for the clarification!
 
Btw it would be very useful if we could specify the path to the encryption key in /etc/pve/storage.cfg [...] one can even schedule them to be mounted/unmounted only during backups and restores [...]

you can already do something like that with a vzdump hookscript (remove/restore the backup key so it's only "there" during the backup). we intentionally don't want to support storage locations outside of /etc/pve, because then we lose the property of having them always consistent across the cluster - e.g., if you forgot to update the key on one host in your cluster, it won't be able to access backups made by the other nodes and vice-versa. if you take it out of service before realizing, you might have destroyed your only copy of the key and lost the backups.
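A minimal sketch of such a hookscript (the storage name, key paths and script location are made up here; it just copies the key into place for the duration of the backup job and removes it afterwards):
Code:
#!/bin/bash
# hypothetical /usr/local/bin/vzdump-key-hook.sh, enabled via
# "script: /usr/local/bin/vzdump-key-hook.sh" in /etc/vzdump.conf
PHASE="$1"
KEY_SAFE="/path/to/encrypted-dataset/pbs-main.enc"   # key kept on an encrypted volume
KEY_LIVE="/etc/pve/priv/storage/pbs-main.enc"        # where PVE expects the key of a PBS storage named "pbs-main"

case "$PHASE" in
    job-start)
        # make the key available for the duration of the backup job
        cp "$KEY_SAFE" "$KEY_LIVE"
        ;;
    job-end|job-abort)
        # and remove it again as soon as the job finished or was aborted
        rm -f "$KEY_LIVE"
        ;;
esac
exit 0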

sensitive files (like keys) are root-only anyway - and if you are root on the hypervisor, you can already replace any of the software on it with malicious copies (and thus access all backups, steal passwords when entered, ..).

technically we could of course create a new (ephemeral) AES key for each backup and store that encrypted with the "master" key (which is RSA at the moment, so you'd only need the public key part on the PVE side). there are three downsides that mean we won't implement that:
  • understanding key handling gets a lot more complicated for the user
  • restores require handing the private part of the master key to the PVE side
  • you basically lose all deduplication
 
