Full system encryption with network unlock?

Joa

I am considering moving from Hyper-V to Proxmox, but want to have my systems fully encrypted with network unlock capability. With Hyper-V I am using BitLocker and BitLocker Network Unlock. I am aware of Clevis & Tang as a network unlock solution in the Linux domain, but haven't seen any documentation, tutorial, or other information on how to leverage that with Proxmox.
Is this supposed to work, and if so, how does one install and configure it? Has anyone already succeeded in doing so?
Best Regards,
Joachim
 
You can run a fully encrypted PVE system using ZFS or LUKS encryption and then unlock it over SSH with the dropbear-initramfs package.
Works totally fine here. I started writing a tutorial but haven't finished it yet.

Here are some hints on how to set this up with ZFS native encryption: https://forum.proxmox.com/threads/encrypting-proxmox-ve-best-methods.88191/#post-387731
And here is how to unlock it automatically using SSH: https://forum.proxmox.com/threads/a...rypted-pve-server-over-lan.125067/post-546466
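In short, the dropbear side is only a few steps. A rough sketch (note the config path moved between Debian releases: /etc/dropbear-initramfs/ on Bullseye and older, /etc/dropbear/initramfs/ on newer; the key below is a placeholder):

apt install dropbear-initramfs
# allow your workstation's public key to unlock the machine (placeholder key):
echo "ssh-ed25519 AAAA... admin@workstation" >> /etc/dropbear-initramfs/authorized_keys
update-initramfs -u -k all

At boot you can then ssh root@<server-ip> into the initramfs and run zfsunlock (ZFS native encryption) or cryptroot-unlock (LUKS).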

But it's not great for a cluster, as migration of guests between nodes won't work with encrypted ZFS: https://bugzilla.proxmox.com/show_bug.cgi?id=2350
 
Your answer raises more questions than it answers:
  • any reason to prefer ZFS native encryption over LUKS? (never thought about that so far)
  • why use scripting (I assume you use scripting/cron to provide the passphrase via SSH) when there is an existing client/server solution targeted to solve the problem?
 
any reason to prefer ZFS native encryption over LUKS? (never thought about that so far)
Most operations like scrubbing, resilvering, replication and so on will still work while the datasets/zvols are locked, as only the data is encrypted but not the ZFS metadata. You can also decide which datasets and zvols should be encrypted and which not, since the whole partition isn't encrypted the way it would be with LUKS.
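For illustration, creating and later unlocking an encrypted dataset looks roughly like this (the dataset name is just an example):

zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/encrypted
# after a reboot the dataset stays locked until you load the key:
zfs load-key rpool/encrypted
zfs mount rpool/encrypted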
why use scripting (I assume you use scripting/cron to provide the passphrase via SSH) when there is an existing client/server solution targeted to solve the problem?
I have only seen examples of how to unlock LUKS with Clevis. I don't know if that will work with ZFS native encryption. dropbear-initramfs + zfsunlock is at least the official way this is described in the ZFS documentation: https://github.com/openzfs/zfs/tree/master/contrib/initramfs
 
As metadata can be sensitive as well, I'd definitely prefer LUKS encryption.
Essentially I can narrow it down to: has anyone tried any of the examples/tutorials on Clevis with Proxmox and succeeded in doing so? Any side effects?
 
Essentially I can narrow it down to: has anyone tried any of the examples/tutorials on Clevis with Proxmox and succeeded in doing so? Any side effects?
Probably not. It looks like 99.9% of the people here don't encrypt their PVE nodes.
As the PVE installer doesn't support any encryption, you would need to install a LUKS-encrypted Debian 11 using the Debian installer and then install the PVE packages on top: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
But this limits you to the not officially supported mdraid or btrfs for software RAID, or to hardware RAID, as the Debian installer doesn't come with ZFS support.
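Roughly, following that wiki page, the PVE-on-top part boils down to the following (sketch for Bullseye; check the wiki for the exact current repo lines):

echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi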
 
Any reason why close to nobody here considers encryption a standard security mechanism? I have been encrypting all my systems for more than a decade.
 
Probably too hacky for most people, as that is neither integrated nor officially supported.
 
Any reason why close to nobody here considers encryption a standard security mechanism? I have been encrypting all my systems for more than a decade.
The forum is for asking questions and getting help. Users with no questions, or with commercial support agreements, are not posting here or reading your posts.
 
I admit I am still on Windows, because full system encryption is too hacky with Linux. I'd definitely love to see distros show up that enable full system encryption with Clevis and Tang out of the box.

Tom,
Are you implying I should buy a commercial agreement in order to find out it is unsupported?
 
I've been using the LUKS + (dropbear) SSH-in-initramfs variant for quite a while, both with stock Debian and PVE. It works fine, but it requires a bit of manual setup (and thus, sysadmin knowledge). If you manage a bigger fleet of servers that all have this requirement and want to delegate the unlocking automatically via the network, I think Clevis is your best option (there is also Mandos, but from what I have heard from people that had to deploy both, Clevis is the lesser of two evils ;)). Of course, plain old IPMI is also an option to manually unlock (and strongly recommended if you don't have local access, else recovering if something breaks will be rather hard!)
 
I have just been playing around with this.

I have Proxmox running in ZFS RAID 1. Once installed, I then encrypted both disks with LUKS using this guide: https://herold.space/proxmox-zfs-full-disk-encryption-with-ssh-remote-unlock/
Note that the guide works for Proxmox 7, but I can't get it to work for Proxmox 8. You can, however, install 7, do the encryption, and then upgrade to 8. Reading around, it seems there might be an issue with the newer kernel, which is why it's not working in 8.

I unlock using a passphrase on the command line, which I access via IPMI. I did also get Dropbear SSH working, but given my motherboard has IPMI, I tend to just use that.

Tonight I installed a Tang server (https://docs.oracle.com/en/operatin...InstallandConfigureaTangServer.html#nbde-tang) on a Raspberry Pi and then installed Clevis (https://docs.oracle.com/en/operatin...ecryptionWithClevis.html#nbde-clevis-luksdump) in Proxmox. The initial test of creating and encrypting a file worked. I then moved on to the LUKS bit, but on reboot it's not unlocking the drives. It's just presenting me with the normal passphrase prompt.
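By "creating and encrypting a file" I mean a round trip along these lines (sketch; adjust the Tang URL to your server):

echo "hello" | clevis encrypt tang '{"url": "http://172.20.0.16"}' > secret.jwe
clevis decrypt < secret.jwe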

My brain is fried now though, so I will maybe investigate some other time.

I do find the lack of encryption in the Proxmox installer annoying. Doing it manually is ugly. People will argue that servers will be in locked rooms that only sysadmins can reach, and therefore disk encryption is not required. I feel that argument is getting a bit old-fashioned now, though. If you can, then why not add that extra layer of security by encrypting? But if you are going to have servers encrypted, you ideally want an automated way of unlocking them. So in summary I support the cause @Joa, and I'm 95% of the way there.
 
Got it working :D

Follow the guide I posted above for Tang (https://docs.oracle.com/en/operatin...InstallandConfigureaTangServer.html#nbde-tang)
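On a Debian-based system like Raspberry Pi OS, the Tang side is only a couple of commands. A sketch (the Oracle guide targets Oracle Linux, so package and service names may differ slightly; the default socket listens on port 80):

sudo apt install -y tang
sudo systemctl enable --now tangd.socket
# quick check that it serves its key advertisement:
curl http://localhost/adv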

For Clevis I needed to change what was installed.

Install with this:
sudo apt install -y clevis clevis-luks clevis-initramfs

Disk1 sda3:
sudo clevis luks bind -d /dev/sda3 tang '{"url": "http://172.20.0.16:80"}'

Disk2 sdb3:
sudo clevis luks bind -d /dev/sdb3 tang '{"url": "http://172.20.0.16:80"}'

On my Proxmox setup with LUKS encryption it's the 3rd partition that's encrypted, hence sda3 and sdb3.
Change 172.20.0.16:80 to match the IP and port you are running your Tang server on.
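You can verify the binding took with something like this (sketch); it should show a tang pin in one of the LUKS key slots:

sudo clevis luks list -d /dev/sda3
sudo cryptsetup luksDump /dev/sda3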

On reboot it contacts Tang and then decrypts. If I turn the Tang server off, it fails to decrypt and then you are left with the normal passphrase entry point.

This is really nice. My only reservation is stability. With it all being a bit hacky, I feel something could easily break at some point, and then you are in a bad place.
 
I've hit a problem with this. My testing above was done by virtualising Proxmox just so I could easily clone and restore the VM when I broke things.

I have now tried this on my physical Proxmox server, which has two network interfaces. When it boots, it now complains about having multiple NICs and does not auto-decrypt the disks:
[screenshot: boot warning about multiple network interfaces]

I found this post (https://github.com/latchset/clevis/issues/243) which details some steps for fixing this by adding a line to GRUB. The problem I have found is that Proxmox seems to be using something different, as described in this post (https://forum.proxmox.com/threads/grub-not-updating-with-update-grub.68699/). I tried the method in that post, but it's still not working.

:(
 
I've tried adding ip=:::::eno1:dhcp: or ip=172.20.0.143::172.20.0.129:255.255.255.192::eno1:off nameserver=172.20.0.129 to /etc/kernel/cmdline and then running pve-efiboot-tool refresh, but it does not work. For reference (the general ip= syntax is sketched after this list):

172.20.0.143 = Proxmox IP for eno1
172.20.0.129 = Gateway
255.255.255.192 = Network mask
eno1 = Network adaptor
172.20.0.129 = DNS server (also router)
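The kernel's ip= parameter follows this general format (from the kernel's nfsroot documentation; trailing fields can be left empty):

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>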
 
I've also tried adding the lines to /etc/default/grub and then running update-grub, and also /etc/kernel/postinst.d/zz-update-grub. Still no luck.

I think my system is using GRUB rather than systemd-boot, because I get the blue GRUB boot loader screen at boot.
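Depending on which one is actually in use, the command to apply cmdline changes differs. A quick sketch of both paths (proxmox-boot-tool is the newer name for pve-efiboot-tool):

update-grub                # GRUB: reads /etc/default/grub
proxmox-boot-tool refresh  # proxmox-boot-tool/systemd-boot setups: read /etc/kernel/cmdline
proxmox-boot-tool status   # shows how the ESPs are configured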
 
I've made a little progress: I managed to get rid of the warning message about multiple network interfaces by adding GRUB_CMDLINE_LINUX="ip=:::::eno1:dhcp" to /etc/default/grub (edited via sudo vi) and then updating with pve-efiboot-tool refresh.

It's still not auto-decrypting though, annoyingly:
[screenshot: boot still stops at the passphrase prompt]
Note the above is on a different machine, so the IP address differs from my earlier post (172.20.0.140 rather than 172.20.0.143).

I also tried setting a static IP with GRUB_CMDLINE_LINUX="ip=172.20.0.140::172.20.0.129:255.255.255.192::eno1:off nameserver=172.20.0.129", but that also does not work.
 
Just got it working, but I don't know what I changed o_O

I think I need to install from scratch to find out how I did it. Imagine a desk full of random papers. That's what my mind feels like right now, only with config files. Yes, I did spend all weekend doing this :eek:
 
OK, it seems all the stuff about the multiple NIC warning at boot was a false alarm. Yes, I did manage to clear the warning by editing GRUB, but I've since undone those changes and it still works.

The actual fix was binding LUKS using the Tang server's IP address rather than its hostname. When I tested using my virtual machine, I was using the IP address of the Tang server, whereas on my physical Proxmox machines I was using the Tang server's hostname. It must be that early in the boot process the machine is unable to resolve hostnames.
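If anyone else bound against a hostname, rebinding against the IP is roughly this (sketch; the slot number is an assumption, check the list output first):

sudo clevis luks list -d /dev/sda3
sudo clevis luks unbind -d /dev/sda3 -s 1   # slot 1 is an assumption
sudo clevis luks bind -d /dev/sda3 tang '{"url": "http://172.20.0.16:80"}'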

So in summary the fix is easy.
 
