[TUTORIAL] Adding Full Disk Encryption to Proxmox

I'm just doing this again on another server, and after the 'preparing to copy' stage I noticed that 'df -h' shows a few GB missing in several places.

Code:
Filesystem                 Size  Used Avail Use% Mounted on
tmpfs                      3.2G  2.4M  3.2G   1% /run
efivarfs                   320K   74K  242K  24% /sys/firmware/efi/efivars
/dev/sda1                   29G  5.8G   23G  21% /cdrom
/cow                        16G  561M   16G   4% /
tmpfs                       16G  8.0K   16G   1% /dev/shm
tmpfs                      5.0M  8.0K  5.0M   1% /run/lock
tmpfs                       16G     0   16G   0% /tmp
tmpfs                      3.2G  168K  3.2G   1% /run/user/1000
tmpfs                      3.2G   84K  3.2G   1% /run/user/0
/dev/mapper/pve--new-root   30G   28K   28G   1% /mnt/new
/dev/nvme0n1p2             672M   28K  623M   1% /mnt/new/boot
/dev/nvme0n1p1             197M   512  197M   1% /mnt/new/boot/efi
/dev/mapper/pve-root        59G   16G   40G  28% /mnt/old
/dev/nvme1n1p2            1022M   12M 1011M   2% /mnt/old/boot/efi

As you can see, /dev/mapper/pve--new-root has only 28K used, yet only 28G of its 30G is available, so about 2GB is unaccounted for there. On /dev/nvme0n1p2 it's 49MB, and on /dev/mapper/pve-root it's 3GB.
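The gap is most likely ext4's reserved blocks: mkfs.ext4 sets aside 5% of the filesystem for root by default (the -m option), and that space is subtracted from Avail without ever showing up in Used; journal and metadata overhead account for the rest. A quick sanity check of that assumption:

```python
# Estimate how much of an ext4 filesystem's capacity the default
# reserved-blocks setting (-m 5, i.e. 5%) hides from "Avail" in df.
def reserved_gib(fs_size_gib: float, reserved_pct: float = 5.0) -> float:
    """GiB reserved for the superuser by ext4's reserved-blocks percentage."""
    return fs_size_gib * reserved_pct / 100.0

# For the ~30G pve--new-root LV above:
print(f"{reserved_gib(30):.1f} GiB reserved")  # 1.5 GiB
```

If you don't need the reservation on a pure data filesystem, `tune2fs -m 0 <device>` drops it.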
 
I found the solution to the problem with dropbear not working.

If dropbear.conf has

DROPBEAR_OPTIONS="-s -c cryptroot-unlock"

it prompts for the password but then gives a timeout error without sending any input to cryptsetup; when I close the session it then sends something, but it's obviously not what I typed, because cryptsetup gives a "wrong password" error.

If I remove "-c cryptroot-unlock" from dropbear.conf, so it doesn't run that command automatically, and run update-initramfs, I can type cryptroot-unlock manually after I connect, and entering the password then works fine.
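In other words, the working setup keeps dropbear running without the forced command. A sketch of the resulting config, assuming the current Debian dropbear-initramfs path (older releases use /etc/dropbear-initramfs/config instead):

```shell
# /etc/dropbear/initramfs/dropbear.conf
# -s disables password logins (key-only auth); the "-c cryptroot-unlock"
# forced command is dropped, so you run cryptroot-unlock manually
# after connecting to the initramfs SSH session.
DROPBEAR_OPTIONS="-s"
```

Rebuild the initramfs afterwards (update-initramfs -u -k all) and the change takes effect on the next boot.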
 
Just used this brilliant guide again to set up my second proxmox server this way and I just wanted to say thank you again.
Such a well-written and easy to follow guide. Much appreciated and I wish you a fantastic life!

P.S.: If you are unsure of your network interface name (mine likes to change between reboots), simply don't specify it.
Use the following format in /etc/default/grub: ip=<ipaddr>::<gatewayaddr>:<subnetmask>:<hostname> - That works like a charm for me!
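Concretely, that kernel argument might look like the line below; the addresses and hostname are placeholders, and the interface field is simply left off so the kernel picks a device itself:

```shell
# /etc/default/grub
# Kernel ip= syntax: <client>:<server>:<gateway>:<netmask>:<hostname>,
# with the (unused) NFS-server field left empty. Substitute your own
# addresses, then run update-grub to apply.
GRUB_CMDLINE_LINUX="ip=192.168.1.50::192.168.1.1:255.255.255.0:pve"
```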
 
How does editing /etc/crypttab and /etc/fstab allow for the root partition to be decrypted when booting, when those files are on the encrypted root partition?

Does 'update-initramfs -c -k all' copy them to the unencrypted boot partition, and if so, where are they located?
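As a hedged illustration of the question above: update-initramfs bakes the information from /etc/crypttab (plus the crypto modules and scripts it needs) into the initramfs image, which lives on the unencrypted /boot partition, so nothing has to be read from the locked root at boot. You can inspect the generated image yourself; the path below assumes a stock Debian/PVE layout:

```shell
# List the contents of the current initramfs and look for the embedded
# crypttab data (initramfs-tools conventionally stores its processed
# copy under cryptroot/ inside the image).
lsinitramfs /boot/initrd.img-"$(uname -r)" | grep -i crypt
```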
 
How can this thread be a year old with still no support from the default installer? This should be a top priority: many hosts, enterprise or community, will be in locations where you can't trust the people with physical access to the hardware, e.g. datacenters. As for typing passwords, there's either a separate SSH server on a separate port on a bridge, or of course good old IPMI access.
 
How can this thread be a year old with still no support from the default installer? This should be a top priority: many hosts, enterprise or community, will be in locations where you can't trust the people with physical access to the hardware, e.g. datacenters. As for typing passwords, there's either a separate SSH server on a separate port on a bridge, or of course good old IPMI access.
Maybe the actually paying subscribers have other priorities?
 
where you can't trust the people with physical access to the hardware
Not sure FDE is actually going to prevent that. Once your system is up and running, it's going to be "unlocked", so with physical access to the running server, I imagine shenanigans are still possible.
 
Not sure FDE is actually going to prevent that. Once your system is up and running, it's going to be "unlocked", so with physical access to the running server, I imagine shenanigans are still possible.
Usually you can bypass to a shell through GRUB, but not if the boot drive is encrypted; you just wouldn't have access to the filesystem.

Once Proxmox is running, all you have is a login prompt; you'd have to reboot the hypervisor, which would immediately alert the sysadmin (if they're monitoring), and then you'd still be stuck at the prompt asking for a password to boot.

ZFS does support native encryption; instead of using Debian and LVM, you'd still take full advantage of ZFS features: bitrot protection, ARC/L2ARC, etc.
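For reference, ZFS native encryption is enabled per dataset at creation time. A minimal sketch, assuming an existing pool; the pool and dataset names here are placeholders:

```shell
# Create an encrypted dataset; the key is derived from a passphrase
# prompted for interactively. On recent OpenZFS, encryption=on means
# AES-256-GCM.
zfs create -o encryption=on \
           -o keyformat=passphrase \
           -o keylocation=prompt \
           rpool/secure

# After a reboot, load the key and mount the dataset again:
# zfs load-key rpool/secure && zfs mount rpool/secure
```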

XCP-ng also doesn't allow full encryption from its installer. No one should be expected to drop to a shell to manually make something work on a standard production deployment, ever; it should be natively supported.

TrueNAS SCALE also has excellent support for ZFS once the system is booted, much better than Proxmox, but being primarily a NAS it's terrible as a hypervisor and certainly no match for Proxmox: you can't even set up a Q35-type VM there without going to a shell and doing it manually, and no GUI support = no support.

Proxmox also has excellent Ceph support, so I can see deployments where it might also make sense to network-boot from SAN/Ceph while retaining the same level of encryption at boot. RAM and boot should be fully encrypted so the server is not susceptible to breaches, at least not easily (unless you can break AES-256 or exploit other vulnerabilities).
 
Like violating government regulations and exposing customer data to untrusted personnel?
Somebody is stealing disks out of the data center? Or whole machines? Because once the server is up and running the disk is decrypted. FDE does not protect against inside jobs by data center employees who have access to the hypervisor.

It does protect against your homelab machine being stolen. But your homelab is not what Proxmox makes their money on.

ETA: While it doesn't support ZFS, you can configure Debian with FDE from the installer and then put PVE on top.
 
Somebody is stealing disks out of the data center? Or whole machines? Because once the server is up and running the disk is decrypted. FDE does not protect against inside jobs by data center employees who have access to the hypervisor.

It does protect against your homelab machine being stolen. But your homelab is not what Proxmox makes their money on.

ETA: While it doesn't support ZFS, you can configure Debian with FDE from the installer and then put PVE on top.
This would be FDE + RAM encryption (hardware support required ofc)
 
This would be FDE + RAM encryption (hardware support required ofc)
With all of that, there are still (nefarious) devices and ways of reading what is running on a system, what's in RAM, and what's on the disks while it is up and unlocked. One only needs to record the key used to unlock the disk (with a nefarious device installed at boot-up); then a datacenter technician powers the machine down and clones the complete disk, and by the time you contact the datacenter, he'll have it up and running again.

As a rule of thumb: if you don't trust a datacenter with your data, don't use the datacenter.
 
With all of that, there are still (nefarious) devices and ways of reading what is running on a system, what's in RAM, and what's on the disks while it is up and unlocked. One only needs to record the key used to unlock the disk (with a nefarious device installed at boot-up); then a datacenter technician powers the machine down and clones the complete disk, and by the time you contact the datacenter, he'll have it up and running again.

As a rule of thumb: if you don't trust a datacenter with your data, don't use the datacenter.
While that is true (IPMI keylogging etc.), if you implement a secure shell to remotely unlock the machine, and then once everything is set up you rotate all the keys/passwords to 'flush' the old ones, it would still be pretty solid. I'd be amazed to see my data in anyone's hands short of a CIA/NSA op with a lot of compute power to decrypt it, all without alerting me that anything is out of the usual.

One more thing: people shouldn't trust datacenters, or use that as an excuse not to implement full encryption. With that mindset, why not just leave the machine without any password? More importantly, all those encrypted VPN tunnels bridged with the hypervisor are now exposed, because they can access and reconfigure the bridge, send that traffic to a USB NIC or any unused port, and now they have access to your tunnel (not that mine would have anything unencrypted going on there, but it's still worth mentioning).
 
I had dropbear and mandos working with my hardware-encrypted NVMe, and the console prompted for the password so I could enter it locally as well, but something has gone fubar: it doesn't work anymore and reports I/O errors before dropping to BusyBox. I can run 'cryptsetup open' from there to unlock the drive, and when I exit, it continues to boot normally. smartctl and dd tests don't show any errors on the drive, so I'm a bit stumped.

I created a thread here with the logs.

Can anyone think of a fix? Is it possible that the boot software has become corrupted? Is there an easy way to reset it to defaults, taking dropbear and mandos out of the equation for now, to see if the normal unlock password prompt comes back, and then re-add dropbear and mandos?

EDIT: Not sure what the problem was, but after updating PVE to the latest version it's all working again, with no I/O errors.
 
@dmpm I've been working on a script to set up native ZFS on p2 of each drive (redundant or not) and use LUKS only for /boot. It's at around 1000 lines of code already and highly complex; I don't think a step-by-step guide is going to work, but if official support isn't there, we have to make it work ourselves. It is free and open source, after all.

I am using cryptsetup for /boot without any issues; it makes an md0, and I'm using systemd-boot as the bootloader, which works properly with ZFS, since / will be ZFS.
 
@dmpm I've been working on a script to set up native ZFS on p2 of each drive (redundant or not) and use LUKS only for /boot. It's at around 1000 lines of code already and highly complex; I don't think a step-by-step guide is going to work, but if official support isn't there, we have to make it work ourselves. It is free and open source, after all.

I am using cryptsetup for /boot without any issues; it makes an md0, and I'm using systemd-boot as the bootloader, which works properly with ZFS, since / will be ZFS.
I decided not to use ZFS on my boot/root drive and to use it only on my data HDD, as that's the only drive where I might benefit from compression and dedup. For my Proxmox containers, I'm using PBS on a separate machine to back them up, and PBS does dedup itself, so I don't need ZFS for that.

I bought a hardware-encrypted NVMe to avoid the CPU overhead of LUKS software encryption, but otherwise I would have used software LUKS rather than ZFS encryption, because I've read about some issues with the latter.
 
I decided not to use ZFS on my boot/root drive and to use it only on my data HDD, as that's the only drive where I might benefit from compression and dedup. For my Proxmox containers, I'm using PBS on a separate machine to back them up, and PBS does dedup itself, so I don't need ZFS for that.

I bought a hardware-encrypted NVMe to avoid the CPU overhead of LUKS software encryption, but otherwise I would have used software LUKS rather than ZFS encryption, because I've read about some issues with the latter.
CPUs do have hardware acceleration for encryption (AES-NI etc.); I'm not sure it can handle the data throughput of NVMe drives (especially in RAID), as so far I've only used it for VPNs. The problem is that drives can and will fail and technologies change over time, which is why I prefer to stick to software over hardware encryption, so you're not tied to a specific generation of hardware. Though if the CPU just can't handle it, there's nothing you can do, so I understand your approach; it is quite hard to keep performance as expected while retaining FDE. But I'm also a Ceph user, so I know what it's like to get 1.5GB/s out of 3x NVMe that would yield a good 15GB/s on Z1. I would also love to give up on Ceph (for small deployments) and rely more on SAN, as I could possibly get 3.6GB/s aggregate out of 3x 10Gbps.
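If you want to check whether the CPU side would actually bottleneck NVMe throughput before ruling out LUKS, cryptsetup ships a built-in benchmark that measures raw cipher speed in memory, with no disk I/O involved; a quick sketch:

```shell
# XTS mode is what LUKS2 uses by default for sector encryption; the
# reported MiB/s figures are the per-thread ceiling that the CPU's
# AES-NI can sustain, independent of any storage device.
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512
```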

ZFS also struggles with dedup. I haven't used it yet, but obviously if you're installing unencrypted VMs over and over, it can in theory save a lot of data; I actually had better results over 10 years ago with Windows Server 2012 and ReFS than with ZFS for this specific case.

I would like to use my TrueNAS machine for PBS; I might just be able to run a container there, or share the storage over the SAN in a VM. I don't want to 'reserve' storage for PBS right now; I might just build a machine for that later, with an LTO-8/9 drive that can export large backups to tape.

One more thing I forgot to mention: you can offload encryption to a GPU, something like a P620. I haven't managed to get that to work yet, but if you have plenty of PCIe slots/lanes to work with, I think that's a pretty reasonable solution.
 
