Proxmox Encryption Configuration Question

Whitterquick

Active Member
Aug 1, 2020
Hey all, this may be a silly or obvious question but I’m fairly new around here (relatively speaking) so here goes…

I am looking to have my HDDs encrypted so that all data cannot be easily accessed if the drives are pulled out of my server.

I have the following configuration:

|Server
|-HDD1 (Proxmox)
|-HDD2 (VMs)
|-HDD3 (Data available to VMs)

If I encrypt the 3x HDDs (LUKS/EXT4) in full, and unlock the first to access Proxmox, followed by the other two once logged into Proxmox, would all VMs then be able to boot up without needing an individual unlock (seems like an obvious YES)? Would the extra Data drive (HDD3) also be accessible to VMs via passthrough without needing unlocking (this I’m not so sure about)? I’m thinking once the initial unlock has taken place Proxmox will be able to “see” and use everything on the drives. Correct?

My current setup has only HDD1 encrypted, with each VM on HDD2 setup with individual encryption (meaning a lot of individual unlocking).

|Server
|-HDD1 (Proxmox (encrypted))
|-HDD2 (VMs)
|—VM1 (encrypted)
|—VM2 (encrypted)
|—VM3 (encrypted)
…etc
|-HDD3

Which is the better method? Or should I ask what is the best way to encrypt everything with minimum fuss/issues? (Best practice) :)
 
If I encrypt the 3x HDDs (LUKS/EXT4) in full, and unlock the first to access Proxmox, followed by the other two once logged into Proxmox, would all VMs then be able to boot up without needing an individual unlock (seems like an obvious YES)?
Yes, if you store the passwords for HDD2+3 on the unlocked HDD1 and create a systemd config file that unlocks and mounts them at boot.
Would the extra Data drive (HDD3) also be accessible to VMs via passthrough without needing unlocking (this I’m not so sure about)? I’m thinking once the initial unlock has taken place Proxmox will be able to “see” and use everything on the drives. Correct?
You need to unlock them automatically while Proxmox is booting, or you won't be able to automatically start the VMs after Proxmox has finished loading.
My current setup has only HDD1 encrypted, with each VM on HDD2 setup with individual encryption (meaning a lot of individual unlocking).

|Server
|-HDD1 (Proxmox (encrypted))
|-HDD2 (VMs)
|—VM1 (encrypted)
|—VM2 (encrypted)
|—VM3 (encrypted)
…etc
|-HDD3

Which is the better method? Or should I ask what is the best way to encrypt everything with minimum fuss/issues? (Best practice) :)
My Proxmox disks are encrypted with LUKS, which I unlock using dropbear-initramfs while booting. On those unlocked drives I store the passphrases for the other disks (for VMs and so on). I use ZFS native encryption, but LUKS should work too. I disabled autostart of VMs and created a systemd service that runs a script once Proxmox has finished booting and is ready to go. That script then unlocks all the other drives and starts the VMs/LXCs in the order I want.
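Just as an illustration (paths and the passphrase itself are placeholders), storing such a passphrase file on the already-unlocked root disk can be as simple as:
Code:
mkdir -p /root/keys
touch /root/keys/hdd2.pwd
chmod 600 /root/keys/hdd2.pwd          # readable by root only
echo 'my-secret-passphrase' > /root/keys/hdd2.pwd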

Encrypting individual VMs only makes sense if you want them to be more secure while they are not in use.
 
Anywhere I can find a guide on how to do that script unlock process?
 
I scripted it like this:

cat /etc/systemd/system/pve_post_start.service
Code:
[Unit]
Description=PVE Start Script
After=pve-guests.service

[Service]
Type=oneshot
ExecStart=/bin/bash /root/scripts/pve_post_start.sh
User=root

[Install]
WantedBy=multi-user.target
That service basically waits for PVE to reach the last boot step (so all storages and the network should be ready) and then runs the "pve_post_start.sh" script once.
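To hook that unit into the boot process, something like this should do (assuming the file is saved at the path shown above):
Code:
systemctl daemon-reload
systemctl enable pve_post_start.service
# optional: run it once right away instead of waiting for the next reboot
systemctl start pve_post_start.service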

cat /root/scripts/pve_post_start.sh
Code:
#!/bin/bash

echo "Starting up..."

# unlock encrypted datasets
bash /root/scripts/unlock_zfs.sh
sleep 10

# mount SMB shares listed in fstab
mount -a
sleep 20

######### Priority 1 #########

# start OPNsense VM to make DMZ/Intranet routing work
qm start 102
sleep 30

# start PiHole VM to make DynDNS/DNS work
qm start 116
sleep 20

######### Priority 2 #########

# start Zabbix VM to be able to monitor systems
qm start 105
sleep 20

# start GraylogLXC to be able to collect logs
pct start 121
sleep 30

# start Nextcloud VM to be able to use cloud
qm start 110
sleep 30

# ...

echo "Startup finished"
exit 0
This script runs the "unlock_zfs.sh" script that unlocks the other encrypted storages, runs "mount -a" so every storage defined in fstab gets mounted (I needed that because somehow my SMB shares don't get automounted at boot), and then starts several VMs and LXCs.
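For illustration, one of those fstab SMB entries could look roughly like this (server address, share name, mountpoint and credentials file are made-up placeholders):
Code:
# /etc/fstab
//192.168.1.10/backup  /mnt/backup  cifs  credentials=/root/.smbcredentials,_netdev,vers=3.0  0  0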

cat /root/scripts/unlock_zfs.sh
Code:
#!/bin/bash

# first entry of DATASETS corresponds to first entry of PWDFILES, 2nd to second and so on
# use multiple pairs like this to unlock several datasets:
# DATASETS=( "pool/dataset1" "pool/dataset2" )
# PWDFILES=( "/path/to/dataset1passphrase.file" "/path/to/datset2passphrase.file" )
# passphrase file should only contain the passphrase in one line
DATASETS=( "pool/encrypteddataset" )
PWDFILES=( "/path/to/file/containing/passphrase.pwd" )

unlockDataset () {
        local dataset=$1
        local pwdfile=$2
        # check if dataset exists
        type=$(zfs get -H -o value type ${dataset})
        if [ "$type" == "filesystem" ]; then
                # check if dataset isn't already unlocked
                keystatus=$(zfs get -H -o value keystatus ${dataset})
                if [ "$keystatus" == "unavailable" ]; then
                        zfs load-key ${dataset} < ${pwdfile}
                        # check if dataset is now unlocked
                        keystatus=$(zfs get -H -o value keystatus ${dataset})
                        if [ "$keystatus" != "available" ]; then
                                echo "Error: Unlocking dataset '${dataset}' failed"
                                return 1
                        fi
                else
                        echo "Info: Dataset already unlocked"
                        return 0
                fi
        else
                echo "Error: No valid dataset found"
                return 1
        fi
}

unlockAll () {
        local noerror=0
        # check if number of datasets and pwdfiles are equal
        if [ ${#DATASETS[@]} -eq ${#PWDFILES[@]} ]; then
                # loop though each dataset pwdfile pair
                for (( i=0; i<${#DATASETS[@]}; i++ )); do
                        unlockDataset "${DATASETS[$i]}" "${PWDFILES[$i]}"
                        if [ $? -ne 0 ]; then
                                noerror=1
                                break
                        fi
                done
        else
                echo "Error: Wrong number of datasets/pwdfiles"
                noerror=1
        fi
        # mount all datasets
        if [ $noerror -eq 0 ]; then
                zfs mount -a
        fi
        return $noerror
}

unlockAll
This script goes through all encrypted datasets defined in DATASETS, unlocks each one with the passphrase stored in the corresponding PWDFILES entry if it is still locked, and runs a "zfs mount -a" at the end so all newly unlocked datasets get mounted.
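If you don't have an encrypted dataset yet, creating one that this script can unlock could look like this (pool/dataset name and passphrase path are just examples):
Code:
# create a natively encrypted dataset; zfs will prompt for the passphrase
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase pool/encrypteddataset
# store the same passphrase (single line) in the file referenced by PWDFILES, readable by root only
touch /path/to/file/containing/passphrase.pwd
chmod 600 /path/to/file/containing/passphrase.pwd
echo 'my-secret-passphrase' > /path/to/file/containing/passphrase.pwd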
 
What would be the best way to encrypt the drives with LUKS? Would a live distro be ok or would it be best done from the command line? I don’t think there is any option in Proxmox to do this other than from the command line, but would there be any difference from a live distro GUI?

Also, I seem to remember the drive used for storing VMs cannot be formatted beforehand or Proxmox won’t see it as a free available drive. Is that correct?
 
Best way would be to partition, format and mount the drives yourself using the CLI. That can all be done directly on the Proxmox host, but you need to do it yourself.
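For a plain data disk (like your HDD3) a rough sketch could look like this; the device name and mountpoint are assumptions, so double-check them before running anything destructive:
Code:
fdisk /dev/sdc                           # create a GPT table and one partition -> /dev/sdc1
cryptsetup luksFormat /dev/sdc1          # choose a passphrase
cryptsetup luksOpen /dev/sdc1 cryptdata  # unlock -> /dev/mapper/cryptdata
mkfs.ext4 /dev/mapper/cryptdata
mkdir -p /mnt/data
mount /dev/mapper/cryptdata /mnt/data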
 
Best way would be to partition, format and mount the drives yourself using the CLI. That can all be done directly on the Proxmox host, but you need to do it yourself.
That is what I was afraid of. The command line is something I am trying to get better with, but I am always more comfortable with a GUI!
 
Best way would be to partition, format and mount the drives yourself using the CLI. That can all be done directly on the Proxmox host, but you need to do it yourself.
Coming back to this, what format does the drive storing VMs need to be? When I unlock the drive via the shell and then check it via the GUI (Disks > LVM), it shows Usage 100% and Free 0 B :oops:
 
Yesterday I created a LUKS-encrypted LVM-thin storage. It looked like this:

Code:
# Find out the ID of the SSD (used as "myUUID" below):
ls -l /dev/disk/by-id/

# Create Partition:
fdisk /dev/disk/by-id/myUUID
g        # create a new empty GPT partition table
n        # add a new partition
Enter    # accept the default partition number
Enter    # accept the default first sector
Enter    # accept the default last sector (use the whole disk)
w        # write the changes and exit

# LUKS encrypt partition:
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 /dev/disk/by-id/myUUID-part1

# Unlock partition:
cryptsetup luksOpen --allow-discards /dev/disk/by-id/myUUID-part1 lukslvm

# Create LVM:
pvcreate /dev/mapper/lukslvm
vgcreate vgluks /dev/mapper/lukslvm

# Create LVM thin:
lvcreate -l99%FREE -n lvthin vgluks
lvconvert --type thin-pool vgluks/lvthin

# Manually add it to PVE as LVM thin storage using WebUI with name "LVMthin".

Now you should be able to use that encrypted LVM-thin pool as a storage for VMs/LXCs. But you need to manually unlock the LUKS container each time you reboot.
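If you prefer the CLI over the WebUI for that last step, the storage can probably also be added with pvesm (the storage name "LVMthin" is just an example):
Code:
pvesm add lvmthin LVMthin --vgname vgluks --thinpool lvthin --content images,rootdir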

I edited my ZFS unlock script to work with LUKS, but that will only make sense if you have a safe place to store the passphrase file:

unlock_lvmthin.sh
Code:
#!/bin/bash

DEVS=( "/dev/disk/by-id/myUUID-part1" )
NAMES=( "lukslvm" )
PWDFILES=( "/path/to/passphrasefile.pwd" )

unlockDev () {
    local dev=$1
    local name=$2
    local pwdfile=$3
  
    # check if device exists
    type=$(cryptsetup isLuks -v ${dev})
    if [ "$type" == "Command successful." ]; then
        # check if LUKS container isn't already unlocked
        keystatus=$(cryptsetup status /dev/mapper/${name} | head -1)
        if [ "$keystatus" == "/dev/mapper/${name} is inactive." ]; then
            cryptsetup luksOpen --allow-discards ${dev} ${name} < ${pwdfile}
            # check if LUKS container is now unlocked
            keystatus=$(cryptsetup status /dev/mapper/${name} | head -1)
            if [ "$keystatus" == "/dev/mapper/${name} is active." ] || [ "$keystatus" == "/dev/mapper/${name} is active and is in use." ]; then
                echo "Info: LUKS container '${name}' unlocked"
                return 0
            else
                echo "Error: Unlocking LUKS container '${name}' failed"
                return 1
            fi
        else
            echo "Info: LUKS container '${name}' already unlocked"
            return 0
        fi
    else
        echo "Error: No valid LUKS container found"
        return 1
    fi
}

unlockAll () {
    local noerror=0
    # check if number of devs, names and pwdfiles are equal
    if [ ${#DEVS[@]} -eq ${#NAMES[@]} ] && [ ${#NAMES[@]} -eq ${#PWDFILES[@]} ]; then
        # loop though each dev name pwdfile triplet
        for (( i=0; i<${#DEVS[@]}; i++ )); do
            unlockDev "${DEVS[$i]}" "${NAMES[$i]}" "${PWDFILES[$i]}"
            if [ $? -ne 0 ]; then
                noerror=1
                break
            fi
        done
    else
        echo "Error: Wrong number of devs/names/pwdfiles"
        noerror=1
    fi
    return $noerror
}

unlockAll

Maybe there are better ways, like using the keyring when dropbear-initramfs is used to unlock the root disk's LUKS over SSH, or just crypttab with keyfiles.
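A crypttab-based variant could look roughly like this; the keyfile path is an assumption, and it only makes sense if the filesystem holding the keyfile is itself encrypted:
Code:
# create a random keyfile on the encrypted root and add it as an additional LUKS key
dd if=/dev/urandom of=/root/keys/lukslvm.key bs=512 count=8
chmod 600 /root/keys/lukslvm.key
cryptsetup luksAddKey /dev/disk/by-id/myUUID-part1 /root/keys/lukslvm.key

# /etc/crypttab entry (name, device, keyfile, options) so systemd unlocks it at boot:
# lukslvm  /dev/disk/by-id/myUUID-part1  /root/keys/lukslvm.key  luks,discard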
 
# Create LVM thin:
lvcreate -l99%FREE -n lvthin vgluks
lvconvert --type thin-pool vgluks/lvthin

# Manually add it to PVE as LVM thin storage using WebUI with name "LVMthin".

Now you should be able to use that encrypted LVM-thin pool as a storage for VMs/LXCs. But you need to manually unlock the LUKS container each time you reboot.
This was the bit I needed for now. The lvconvert command, I think… I did the rest correctly but I can't use it.
I will worry about the automatic unlocking once I have the basics set up the way I want them :) thanks!
 
Any reason why 99% and not 100%?
I tried it with 100% first and then the convert failed because the VG had no free space left. Then I tried 99,9 and 99.9, but it looks like the command can only handle whole numbers. So I tried 99% and that worked.

And I don't really know how the LVM gets activated. I just run cryptsetup luksOpen to unlock the partition, then PVE automatically does some magic and the LVM-thin pool is activated and working.
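If the volume group ever does not show up by itself after unlocking, it can presumably be activated by hand (VG name from the commands above):
Code:
vgchange -ay vgluks   # activate all LVs in the vgluks volume group
lvs vgluks            # check that the thin pool is listed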

So if you just want to encrypt the LVM-thin VM storage, it is really easy to unlock it. Just log in using SSH, run that cryptsetup luksOpen command, type in your passphrase and that's all.
But it gets complicated if you want full system encryption. Here you should google for "dropbear-initramfs". With that, your server boots from an unencrypted boot partition into the initramfs, which runs a dropbear SSH instance in RAM. You then log in to that temporary RAM environment using SSH, unlock the LUKS-encrypted root partition and boot into Proxmox.
Not sure how to do it, but LUKS supports keyrings, so maybe it is possible to store that dropbear-initramfs passphrase in the keyring so you don't need to type it in later to unlock the VM storage.
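A very rough sketch of the dropbear-initramfs setup, assuming a Debian-based PVE install (the package name and config path can differ between versions):
Code:
apt install dropbear-initramfs
# add your SSH public key so you can reach the initramfs environment over SSH
cat ~/.ssh/id_rsa.pub >> /etc/dropbear-initramfs/authorized_keys
update-initramfs -u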
 
I tried it with 100% first and then the convert failed because the VG had no free space left. Then I tried 99,9 and 99.9, but it looks like the command can only handle whole numbers. So I tried 99% and that worked.

And I don't really know how the LVM gets activated. I just run cryptsetup luksOpen to unlock the partition, then PVE automatically does some magic and the LVM-thin pool is activated and working.
I had my partitions all set up correctly and I used 100%; it's just that when going into the GUI after unlocking, it showed as already used and I couldn't select it for LVM. It does show up, it's just not usable, so I will try that convert command and see what happens.
 
The lvconvert will just convert the normal LV into a thin pool so it can be used with thin provisioning to store other LVs. But if you want to use thin provisioning you also need to unlock the LUKS container using "--allow-discards" or discard won't work, and without discard thin provisioning won't really work either (freed space is never returned to the pool). But keep in mind that using discard will weaken your encryption. So it might be safer to just use a normal LVM without LVM-thin, but in that case you will probably waste a lot of capacity.
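To check whether discards actually get passed through the dm-crypt layer, something like this should do (device name from the commands above):
Code:
cryptsetup status lukslvm             # should list "discards" under flags when opened with --allow-discards
lsblk --discard /dev/mapper/lukslvm   # non-zero DISC-GRAN/DISC-MAX means discard is supported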
 
The lvconvert will just convert the normal LV into a thin pool so it can be used with thin provisioning to store other LVs. But if you want to use thin provisioning you also need to unlock the LUKS container using "--allow-discards" or discard won't work, and without discard thin provisioning won't really work either (freed space is never returned to the pool). But keep in mind that using discard will weaken your encryption. So it might be safer to just use a normal LVM without LVM-thin, but in that case you will probably waste a lot of capacity.
I think the issue is it’s not showing as “open”.

Why do you say using LVthin weakens encryption?
 
Why do you say using LVthin weakens encryption?

If you want good encryption you don't want people to be able to tell what is data and what is not. So the best case would be that 100% of your storage is always used and filled with random data. But SSDs have really crappy performance if they are full all the time, they have problems doing wear leveling, and they can't use SLC caching.
With discard, only real data is encrypted and empty space stays empty. So you know where the data is, how big it is and so on, and that makes it easier to attack your encryption.

Here is a simplified example:
Code:
"This is clear text" <- unencrypted data
"eieoihegsbgdsfsiuf" <- encrypted data without discard where you can't see what's data and what's empty
"awdd ad fgesf egsg" <- encrypted data where discard deleted the empty space...so you know "ad" is a word with 2 letters...not that hard to guess that it might be "is", "at" or "to"...
 
Still not able to add the volume as VM Storage. I am seeing “No Disks unused” in the GUI so I can’t add it.
 
How do you try to add it?

Its "Datacenter -> Storage -> Add -> LVM-Thin" for the commands above. If the LUKS container is unlocked with cryptsetup luksOpen --allow-discards /dev/disk/by-id/myUUID-part1 lukslvm (you can run cryptsetup status /dev/mapper/lukslvm to verify that it is unlocked) there should be "vgluks" as volume group and "lvthin" as Thin Pool selectable.
 
How do you try to add it?

Its "Datacenter -> Storage -> Add -> LVM-Thin" for the commands above. If the LUKS container is unlocked with cryptsetup luksOpen --allow-discards /dev/disk/by-id/myUUID-part1 lukslvm (you can run cryptsetup status /dev/mapper/lukslvm to verify that it is unlocked) there should be "vgluks" as volume group and "lvthin" as Thin Pool selectable.
Oh yes, this works! Thanks so much I was tearing my hair out!
Never added a volume through Datacenter before.
 
If you want good encryption you don't want people to be able to tell what is data and what is not. So the best case would be that 100% of your storage is always used and filled with random data. But SSDs have really crappy performance if they are full all the time, they have problems doing wear leveling, and they can't use SLC caching.
With discard, only real data is encrypted and empty space stays empty. So you know where the data is, how big it is and so on, and that makes it easier to attack your encryption.

Here is a simplified example:
Code:
"This is clear text" <- unencrypted data
"eieoihegsbgdsfsiuf" <- encrypted data without discard where you can't see what's data and what's empty
"awdd ad fgesf egsg" <- encrypted data where discard deleted the empty space...so you know "ad" is a word with 2 letters...not that hard to guess that it might be "is", "at" or "to"...
Could we not use lvthin without the allow discards flag? Wouldn’t that be the best of both worlds?

Is LVM-Thin the default now? All the VMs I have created in the past have been on LVM storage and I like that it allocates the full disk size at the start. I see that it does not support snapshots though, and I presume that’s the same if using PBS?

For the example you show above, I prefer to have the data more securely encrypted…
 
