[TUTORIAL] Adding Full Disk Encryption to Proxmox

Running Proxmox 8.4 / Debian 12.

SSH login was not working, but the more concerning behavior was...
[booting... ], it prints some output about loading dropbear, and after a carriage return it prompts for the encryption password on the local display, then displays (repeating over and over):
Code:
/scripts/init-premount/dropbear: line 339: sleep: not found
/scripts/init-premount/dropbear: line 149: cat: not found
... repeats over and over again so fast it's challenging to read... until you press ALT-F4. I found a few references via internet search to similar setups with this same cat/sleep output, but did not find a specific solution. These binaries should be provided by busybox, which is a dependency of initramfs-tools, so the errors make little sense. That said, I modified the following:

Update: those cat/sleep output lines don't appear after adding the ethernet device drivers to:
Code:
# cat /etc/initramfs-tools/modules
# List of modules that you want to include in your initramfs.
# Examples:
# raid1
# sd_mod
e1000e
r8169
bonding
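
After editing that file, the initramfs needs to be rebuilt so the extra modules are actually included, e.g.:
Code:
update-initramfs -u -k all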

I also modified this line in '/etc/default/grub' (it originally referenced vmbr0, but I realized that bridge probably isn't available at that stage of the boot, so I changed it to the underlying NIC name and removed 'quiet'; the line now looks like this):
Code:
GRUB_CMDLINE_LINUX_DEFAULT="ip=192.168.0.10:::::enp0s25:none iommu=pt nvme_core.default_ps_max_latency_us=0"
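
For anyone adapting that line: the ip= parameter uses the kernel's fixed field order, and unused fields can simply be left empty (as above):
Code:
ip=<client-ip>:<server-ip>:<gateway>:<netmask>:<hostname>:<device>:<autoconf>
Remember to run update-grub afterwards so the change lands in the generated grub.cfg.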

It looks like SSH is working at this point and the output lines no longer occur.

Thanks for a great TUTORIAL! A bit more verbosity in the last steps might be helpful for folks to avoid the issues I had above (or others).
 
Dropbear and mandos have stopped working for me after I copied my install to a new NVMe drive. I'm using software encryption now, after the previous OPAL hardware-encrypted NVMe failed.

When PVE was booting, it showed a message that dropbear was loaded, but then it just printed repeated messages saying something about the stack (unfortunately these pre-boot messages aren't logged) before dropping to initramfs, where I had to decrypt the partition manually before the boot would continue as normal.
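
That is, at the (initramfs) prompt I had to run:

cryptsetup open /dev/nvme0n1p3 cryptlvm
exit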

I checked the dropbear conf files and tested that mandos was working by running the mandos-client test command shown below, which returned the password. Then I ran update-initramfs -u. Now when it's booting it says that '/etc/mandos/plugin-runner.conf' is loaded (that file contains '--options-for=mandos-client:--connect=10.10.55.20:9601' so it can find the mandos server), but it doesn't say anything about dropbear or show any repeated error messages. It still eventually drops to initramfs and I have to manually decrypt the partition again.
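
The test command (with the key paths as set up on my system):

/usr/lib/x86_64-linux-gnu/mandos/plugins.d/mandos-client \
    --pubkey=/etc/keys/mandos/pubkey.txt \
    --seckey=/etc/keys/mandos/seckey.txt \
    --connect=10.10.55.20:9601; echo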

When looking at dmesg I saw this:
[ 0.079578] Kernel command line: BOOT_IMAGE=/vmlinuz-6.8.12-11-pve root=/dev/mapper/pve--AM-root ro debug libata.allow_tpm=1 intel_iommu=on i915.enable_gvt=1 ip=10.10.55.198::10.10.55.1:255.255.255.0::eno1:none
[ 0.079702] DMAR: IOMMU enabled
[ 0.079761] Unknown kernel command line parameters "BOOT_IMAGE=/vmlinuz-6.8.12-11-pve ip=10.10.55.198::10.10.55.1:255.255.255.0::eno1:none", will be passed to user space.

which seems a bit strange, as I've been using those parameters for ages and I'm pretty sure they're required to pass through the Intel iGPU for use with Plex and Frigate. libata.allow_tpm=1 is only needed for OPAL h/w encryption, so I could remove that, but it seems to be saying that most of those parameters are unknown and aren't being passed.

Anyway, maybe that error is a red herring. The important thing is that I'm not seeing a prompt for the password like I used to, and dropbear and mandos aren't working at boot. I've got this in my /etc/crypttab file

cryptlvm UUID=dc66aee7-8664-4283-91c6-7e553b6f07fd none luks,discard,keyscript=decrypt_keyctl

and that is the UUID for nvme0n1p3. Does anything look wrong with that line that could be causing the failure to prompt for the password?
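
(For anyone comparing against their own setup, crypttab(5) has four columns, target name, source device, key file and options, so that line parses as:)

# <target>   <source>                                    <key file>  <options>
cryptlvm     UUID=dc66aee7-8664-4283-91c6-7e553b6f07fd   none        luks,discard,keyscript=decrypt_keyctl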
 
OK, I'm not entirely sure how, but I've fixed it. There was an extraneous " at the end of a line in dropbear.conf; removing that got dropbear working. That shouldn't have affected mandos, but after regenerating the initramfs with "update-initramfs -u -k all", mandos is working again too.

Apparently initramfs has been made more strict recently: with keyscript=decrypt_keyctl on the cryptlvm line (which caches the passphrase from mandos so my USB data drive is decrypted automatically), it will no longer show the on-screen prompt for the passphrase, although you can still type it in locally or use dropbear to enter it.

ChatGPT suggested adding tries=1 after the keyscript parameter, to make it prompt for the passphrase if it can't retrieve it from the mandos server, but that didn't work, so it suggested using this script instead of decrypt_keyctl:

#!/bin/sh
exec /usr/lib/mandos/plugin-helpers/mandos-client || /lib/cryptsetup/askpass "Enter passphrase for $CRYPTTAB_SOURCE ($CRYPTTAB_NAME): "

but that didn't cause it to prompt for the passphrase either, entering it locally no longer worked, and it stopped dropbear from working. So I'll just put up with the missing prompt.
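
(In case anyone wants to experiment further: 'exec' replaces the shell, so the '||' fallback in that suggested script can never actually run. An untested sketch that keeps the fallback reachable, reusing the client paths from my test command above, would look something like the following; it would still need to be referenced from crypttab and copied into the initramfs by a hook.)

#!/bin/sh
# Try mandos first; if it fails, fall back to a local passphrase prompt.
# Paths and options are the ones from my test command above -- untested sketch.
/usr/lib/x86_64-linux-gnu/mandos/plugins.d/mandos-client \
    --pubkey=/etc/keys/mandos/pubkey.txt \
    --seckey=/etc/keys/mandos/seckey.txt \
    --connect=10.10.55.20:9601 \
|| /lib/cryptsetup/askpass "Enter passphrase for $CRYPTTAB_SOURCE ($CRYPTTAB_NAME): "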
 

How do I add encryption during Proxmox installation?

This tutorial deals with encryption of an existing installation. If you are starting fresh, my recommendation would be to install Debian with full disk encryption and then add Proxmox to it. This is also an advanced method, but is at least documented officially. You can also just install Proxmox unencrypted and then use this guide. It's a bit cumbersome, but should work.
Has anyone tried a fresh Debian install with full disk encryption and then Proxmox VE on top? I might choose that. Is there any reason why encryption would break anything in Proxmox? I can't see why Proxmox would interfere with it. It looks like the simplest and least hacky method to me.
 
Has anyone tried a fresh Debian install with full disk encryption and then Proxmox VE on top? I might choose that. Is there any reason why encryption would break anything in Proxmox? I can't see why Proxmox would interfere with it. It looks like the simplest and least hacky method to me.
My 1TB Crucial MX500 SSD died the other day (it now reports as a 1GB drive and says it doesn't support OPAL HW encryption!), so I need to reinstall, and I thought I'd try installing Proxmox and then doing 'cryptsetup reencrypt' to encrypt my LVM partition.

I tried booting a Xubuntu minimal live image, but that automounted the LVs and 'lvchange -an' didn't work to deactivate them, so I couldn't do the reencrypt from there. I then booted Debian Live in rescue mode; that still automounted the LVs, but I was able to deactivate them with 'lvchange -an', and the reencrypt worked. However, you have to reduce the size of the data area to make room for the LUKS header using '--reduce-device-size 32m', and that seems to have messed up the LVs: when I reactivated them with 'lvchange -ay' it reported that they'd changed, and when I tried to boot into Proxmox it just dumped me at a grub prompt.
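
For anyone attempting the same, the rough sequence from the live environment is something like the following (a sketch only, with my device names; the PV needs to be shrunk first so the last 32M of /dev/sda3 is actually unused, which is presumably what the size errors I mention in the EDIT below were about):

# deactivate the pve VG so nothing holds the LVs open
vgchange -an pve

# shrink the PV first so the data area can give up 32M to the LUKS header
# (<new-size> is a placeholder: at least 32M smaller than the partition)
pvresize --setphysicalvolumesize <new-size> /dev/sda3

# encrypt in place, shifting the data to make room for the header
cryptsetup reencrypt --encrypt --reduce-device-size 32M /dev/sda3

# unlock and reactivate
cryptsetup open /dev/sda3 cryptlvm
vgchange -ay pve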

That doesn't really make any sense, as the boot EFI partition is on /dev/sda2 and I didn't encrypt that, so I think it should at least show the boot menu even if it then fails to boot after that.

Anyway, I guess I'll have to try installing Debian with FDE and then Proxmox on top of it.

EDIT: I was able to fix the size errors with the LVMs after reading this https://superuser.com/questions/1296301/lvm-complaining-about-device-smaller-than-logical-volume

I also realised that I'd made a stupid mistake: I didn't have a separate boot partition, because the PVE install doesn't create one, so it's not surprising that it couldn't boot. My EFI partition was 1G, so I resized that to 100M and created a new boot partition as /dev/sda4, then rsynced all the files across to it from the /boot folder on the encrypted root LV. I chrooted into the install, edited fstab and crypttab to make sure they were correct, and ran grub-install.real, update-grub and update-initramfs, but it still doesn't boot. In UEFI mode it just says "No operating system installed", and in legacy mode it says error: disk 'lvmid/XXf0E1-......' not found and dumps me to grub.

I might have another go from scratch: reinstall PVE, create a boot partition, copy the files across and get that to boot, and then do cryptsetup reencrypt on the LVM partition.
 
OK, I've done a fresh install of Proxmox. Looking at /boot/efi/EFI/proxmox/grub.cfg it contains:

search.fs_uuid 5c66d0b6-af05-4c7f-89ca-57e60753528f root lvmid/5JusW2-wBa6-gtWO-vbgB-LVGp-WDSQ-HTqLRw/V0MPoL-iCD0-uzaI-8JJL-vEI7-Ryi7-pvvHSs
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg

and blkid shows:

/dev/mapper/pve-root: UUID="5c66d0b6-af05-4c7f-89ca-57e60753528f" BLOCK_SIZE="4096" TYPE="ext4"
/dev/mapper/pve-swap: UUID="ecfd5dc4-a570-4398-9f07-98d33abcdf98" TYPE="swap"
/dev/sda2: UUID="13C8-A6C9" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="94e0e554-f1b5-4e13-b22b-365757fcb162"
/dev/sda3: UUID="4ckU3Q-hToP-9KmX-VhfE-LXqq-SVB2-l18ilD" TYPE="LVM2_member" PARTUUID="037ef473-2d14-4e18-9222-89021aa48ac4"
/dev/sda1: PARTUUID="f666acd9-958c-4c47-b36a-d41f65816d99"

so currently that grub.cfg references the UUID of the pve-root LV and it also refers to an lvmid (I don't know how one can view the lvmids for the LVMs).

If I create a separate boot partition on /dev/sda4, I'm not sure what I'd need to change in that grub.cfg. I could change the UUID to refer to the /dev/sda4 UUID, but the fact that it refers to root (because currently /boot is on the root partition) and to the lvmid makes it more complicated, and I don't know if I can just edit the first line to say boot instead of root and delete the lvmid part.
 
root lvmid/5JusW2-wBa6-gtWO-vbgB-LVGp-WDSQ-HTqLRw/V0MPoL-iCD0-uzaI-8JJL-vEI7-Ryi7-pvvHSs
I don't know how one can view the lvmids for the LVMs

It appears the first half (5JusW2-wBa6-gtWO-vbgB-LVGp-WDSQ-HTqLRw) is the VG UUID of the pve VG that contains the LVs, which can be found with the vgdisplay command.

The second half (V0MPoL-iCD0-uzaI-8JJL-vEI7-Ryi7-pvvHSs) is the LV UUID of the root LV, which can be found with lvdisplay.
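
A quick way to list them, for example:

# VG UUIDs
vgs -o vg_name,vg_uuid
# LV UUIDs
lvs -o lv_name,lv_uuid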
 
It appears the first half (5JusW2-wBa6-gtWO-vbgB-LVGp-WDSQ-HTqLRw) is the VG UUID of the pve VG that contains the LVs, which can be found with the vgdisplay command.

The second half (V0MPoL-iCD0-uzaI-8JJL-vEI7-Ryi7-pvvHSs) is the LV UUID of the root LV, which can be found with lvdisplay.
Thanks, that's useful to know for future reference.

I created my boot partition on /dev/sda4 and rsynced everything across to it from /boot on /dev/sda3. Then I added the new /boot partition to fstab so it looks like this:

UUID=565aabce-3f50-4882-b796-8f3c6f3acbad /boot ext4 defaults 0 2
UUID=13C8-A6C9 /boot/efi vfat defaults 0 1
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

Then I ran update-grub and update-initramfs -c -k all. After rebooting, lsblk confirms that /dev/sda4 is now mounted as /boot, but the boot process is still using the boot folder on the pve/root LV. I confirmed this by modifying /boot/grub/grub.cfg and seeing that, after a reboot, it is still using the unmodified grub.cfg in the boot folder on pve/root.

I thought this might be because of this /boot/efi/EFI/proxmox/grub.cfg file, so I tried modifying that to point to the UUID for my new boot partition on /dev/sda4 and changing the reference from root to boot:

search.fs_uuid 565aabce-3f50-4882-b796-8f3c6f3acbad boot
#search.fs_uuid 5c66d0b6-af05-4c7f-89ca-57e60753528f root lvmid/5JusW2-wBa6-gtWO-vbgB-LVGp-WDSQ-HTqLRw/V0MPoL-iCD0-uzaI-8JJL-vEI7-Ryi7-pvvHSs
set prefix=($boot)'/boot/grub'
configfile $prefix/grub.cfg

but then it fails to boot and drops to GRUB until I change it back. That's when I boot in UEFI mode. If I boot in Legacy mode it still boots, because it's not using that file, but then something else is making it use the boot folder on pve/root, because if I rename that folder to bootold it fails to boot in Legacy mode.

So I need to know what I need to change to make it use /dev/sda4 for boot when booting in Legacy mode, and what to put in the /boot/efi/EFI/proxmox/grub.cfg file to make it use /dev/sda4 for boot when booting in UEFI mode.

Currently my efibootmgr looks like this:

BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0006,0000,0009,0007,0005,0008
Boot0000* proxmox HD(2,GPT,94e0e554-f1b5-4e13-b22b-365757fcb162,0x800,0x32000)/File(\EFI\PROXMOX\SHIMX64.EFI)
Boot0005 UEFI: USB, Partition 2 PciRoot(0x0)/Pci(0x14,0x0)/USB(17,0)/HD(2,MBR,0xe03c876a,0x9522000,0x10000)..BO
Boot0006* KINGSTON SA400S37240G BBS(HD,,0x0)..BO
Boot0007* USB BBS(HD,,0x0)..BO
Boot0008 IBA CL Slot 00FE v0109 BBS(Network,,0x0)..BO
Boot0009* UEFI OS HD(2,GPT,94e0e554-f1b5-4e13-b22b-365757fcb162,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)..BO

Boot0006 is the Legacy option, which is the default at the moment. The proxmox and UEFI OS options both point to /dev/sda2 which is the boot/efi partition. I'm not sure what the difference is between the proxmox line that loads SHIMX64.EFI and the UEFI OS line that loads BOOTX64.EFI, but the latter doesn't boot and shows a "Boot restoration" screen before rebooting.
 
OK, I've fixed that problem.

After booting in Legacy mode with /dev/sda4 mounted as /boot, I had to run

grub-install --target=i386-pc /dev/sda
update-grub

to make the GRUB image installed in the MBR point to /dev/sda4 instead of pve-root, and after booting in UEFI mode I had to run

grub-install \
--target=x86_64-efi \
--efi-directory=/boot/efi \
--bootloader-id=proxmox \
--recheck

/boot/efi/EFI/proxmox/grub.cfg now contains

search.fs_uuid 565aabce-3f50-4882-b796-8f3c6f3acbad root hd0,gpt4
set prefix=($root)'/grub'
configfile $prefix/grub.cfg

so reinstalling grub has updated that file to point to /dev/sda4, although it still uses the root variable, which is a bit confusing; but so does /boot/grub/grub.cfg, as the entries contain:

set root='hd0,gpt4'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt4 --hint-efi=hd0,gpt4 --hint-baremetal=ahci0,gpt4 565aabce-3f50-4882-b796-8f3c6f3acbad
else
search --no-floppy --fs-uuid --set=root 565aabce-3f50-4882-b796-8f3c6f3acbad

even though further down the linux line sets the kernel's root to the correct device:

linux /vmlinuz-6.8.12-12-pve root=/dev/mapper/pve-root ro

grub.cfg still has some lines which refer to the root LV by its lvmid:

insmod part_gpt
insmod lvm
insmod ext2
set root='lvmid/5JusW2-wBa6-gtWO-vbgB-LVGp-WDSQ-HTqLRw/V0MPoL-iCD0-uzaI-8JJL-vEI7-Ryi7-pvvHSs'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='lvmid/5JusW2-wBa6-gtWO-vbgB-LVGp-WDSQ-HTqLRw/V0MPoL-iCD0-uzaI-8JJL-vEI7-Ryi7-pvvHSs' 5c66d0b6-af05-4c7f-89ca-57e60753528f
else
search --no-floppy --fs-uuid --set=root 5c66d0b6-af05-4c7f-89ca-57e60753528f
fi

but they're at the start of the file, in the ### BEGIN /etc/grub.d/00_header ### section, so I guess they're ignored; otherwise I imagine they would have been amended when installing or updating grub.

Once I've made a backup image of the whole drive so I can quickly restore it if I screw it up again, I'll try cryptsetup reencrypt on the LVM partition on /dev/sda3 again.