[SOLVED] Recommended way for encryption

kla

New Member
Jun 10, 2024
I am going to install PVE at my office. Due to my office's security policy, I have to encrypt all disks.
I have experience installing PVE in my homelab without encryption, FYI.

At first I thought I could use ZFS encryption, but I am hesitant after reading this: https://pve.proxmox.com/wiki/ZFS_on_Linux#zfs_encryption

> Native ZFS encryption in Proxmox VE is experimental. Known limitations and issues include Replication with encrypted datasets [3], as well as checksum errors when using Snapshots or ZVOLs. [4]

Is this still the case?
Should I avoid ZFS native encryption for Proxmox and stick to LUKS with LVM instead?
And what about btrfs?

By the way, I will also use PBS, and it should be encrypted too.
What is the officially or generally recommended way to encrypt a PVE/PBS system?
 
There is no official way to encrypt PVE/PBS. You could make use of hardware encryption (enabling SED in case your disks support it), so the missing software support isn't a problem, as the encryption is done transparently in hardware on the disks.
I've had no problems here so far with ZFS native encryption, but it is a pain to set up in case you want to encrypt your root filesystem and be able to unlock it remotely. And yes, replication won't work, so it's not an option in case you want to run a ZFS-based cluster. For that, you would need to use LUKS with ZFS (or whatever filesystem you want) on top.

Do you need full-system encryption for the PBS? PBS backups are already zero-trust: once you enable encrypted backups, they are encrypted/decrypted client-side on the PVE host.
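For reference, the client-side encrypted backup workflow looks roughly like this (a sketch; the repository address, datastore name, and key path are placeholders, not values from this thread):

```shell
# Create an encryption key on the PVE host (the client side);
# the PBS server never sees the plaintext key.
proxmox-backup-client key create /root/pbs-encryption-key.json --kdf scrypt

# Run an encrypted backup; chunks are encrypted locally
# before being uploaded to the datastore.
proxmox-backup-client backup etc.pxar:/etc \
    --repository backup@pbs@192.0.2.10:datastore1 \
    --keyfile /root/pbs-encryption-key.json
```

When the key is lost, the backups are unrecoverable, so the key file should be backed up somewhere safe outside the host.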
 
Thank you for your reply.

> There is no official way to encrypt PVE/PBS. You could make use of hardware encryption (enabling SED in case your disks support it), so the missing software support isn't a problem, as the encryption is done transparently in hardware on the disks.
> I've had no problems here so far with ZFS native encryption, but it is a pain to set up in case you want to encrypt your root filesystem and be able to unlock it remotely. And yes, replication won't work, so it's not an option in case you want to run a ZFS-based cluster. For that, you would need to use LUKS with ZFS (or whatever filesystem you want) on top.

I don't need remote boot. In fact, a disk that is not bootable without a password is exactly what I want. Also, I'm not considering a cluster for now.
And... I don't expect any money from my office for this, so buying new hardware is not an option :sigh:

So, apart from remote unlocking and clustering, are there other pitfalls for encryption with PVE?
I have experience with LVM/LUKS encryption on Ubuntu, and it worked for me, FYI.

> Do you need full-system encryption for the PBS? PBS backups are already zero-trust: once you enable encrypted backups, they are encrypted/decrypted client-side on the PVE host.

I am not sure... I have to consult my boss. I think he would definitely prefer full-system encryption, but maybe I can convince him to go with data-only encryption.
 
> I don't need remote boot. In fact, a disk that is not bootable without a password is exactly what I want.
One common option for remotely unlocking the root filesystem is to type in your password via SSH using dropbear-initramfs. This is especially useful when no webKVM is available.
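On Debian-based systems like PVE, that setup looks roughly like this (a sketch; the key file, IP addresses, and hostname are placeholders, and the dropbear config path varies between Debian releases):

```shell
# Add a small SSH server to the initramfs so the LUKS passphrase
# can be entered remotely during boot.
apt install dropbear-initramfs

# Allow your admin key to log in to the initramfs environment
# (on older releases the path is /etc/dropbear-initramfs/).
cat ~/.ssh/id_ed25519.pub >> /etc/dropbear/initramfs/authorized_keys

# Optionally give the initramfs a static IP (example values).
echo 'IP=192.0.2.20::192.0.2.1:255.255.255.0:pve1' >> /etc/initramfs-tools/initramfs.conf

update-initramfs -u
```

After a reboot you can then `ssh root@192.0.2.20` into the initramfs and run `cryptroot-unlock` to enter the passphrase.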

> So, apart from remote unlocking and clustering, are there other pitfalls for encryption with PVE?
It doesn't encrypt all metadata. The benefit is that replication, snapshots, scrubs, and so on will still work while the data is encrypted, but it also means that things like dataset names will always be in plaintext.
 
> It doesn't encrypt all metadata. The benefit is that replication, snapshots, scrubs, and so on will still work while the data is encrypted, but it also means that things like dataset names will always be in plaintext.

You mean the data-disk-only encryption case here, right? If I apply full-system encryption, the metadata will be encrypted anyway.

I have just chatted with my boss, and it seems okay to encrypt the data partition only, so I think I don't have to encrypt the entire PBS system.
On the other hand, for PVE, I am still weighing full-disk encryption against data-disk-only encryption.
Is there any merit to encrypting only the data disk rather than the full system?
I have no experience with ZFS, but from my experience with LVM/LUKS, full-system encryption was convenient because once I unlocked the root partition/disk during boot, all other partitions/disks were unlocked automatically as well.

Do you encrypt the whole system or the VM/data disks only? If you encrypt disks separately, how do you unlock those encrypted disks on boot?
 
> You mean the data-disk-only encryption case here, right? If I apply full-system encryption, the metadata will be encrypted anyway.
I'm talking about ZFS native encryption. If you encrypt the whole pool (full-system encryption), the data is encrypted (as well as metadata like the filenames on that pool), but other metadata like dataset names is not. If you have a dataset "rpool/TotallyLegalLinuxIsoDownloads" with some downloaded Linux ISOs on it, no one will be able to see the data or metadata of those ISOs, but everyone will still be able to see the name of that dataset and guess what you store on it, how much data you stored there, whether it is compressed, and so on. ;)
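You can see this for yourself with ZFS native encryption (a sketch; the dataset name is a placeholder):

```shell
# Create a passphrase-encrypted dataset (prompts for the key).
zfs create -o encryption=on -o keyformat=passphrase rpool/secret-data

# Even with the key unloaded, the dataset name and space usage
# remain visible to anyone with access to the pool:
zfs unmount rpool/secret-data
zfs unload-key rpool/secret-data
zfs list rpool/secret-data            # name and USED column still shown
zfs get keystatus rpool/secret-data   # shows "unavailable", but the name is there
```

Only the file contents, file names, and similar in-dataset metadata are protected; pool-level structure stays readable.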
 
> I'm talking about ZFS native encryption. If you encrypt the whole pool (full-system encryption), the data is encrypted (as well as metadata like the filenames on that pool), but other metadata like dataset names is not. If you have a dataset "rpool/TotallyLegalLinuxIsoDownloads" with some downloaded Linux ISOs on it, no one will be able to see the data or metadata of those ISOs, but everyone will still be able to see the name of that dataset and guess what you store on it ;)
I see. That is totally okay, because the dataset name will just be "dataset" or "storage" or "disk" or some other generic name anyway. Thank you for your advice!
 
As a data point, in our new Proxmox-based systems we have the Proxmox OS itself running on mirrored (unencrypted) ZFS disks, with the VMs also running on (unencrypted) ZFS volumes.

Each VM hosted by Proxmox has two virtual disks:
  1. Unencrypted OS disk
  2. LUKS encrypted application and data disk
The two-disk VM approach here is for reliability and simplicity. When a VM starts up (booting from the unencrypted OS disk), it's a fully self-contained working Linux system. If something goes wrong, it has all of the standard tools available for debugging and fixing, plus full network access.

If we'd instead used dropbear-initramfs, then if there's ever a problem with the bootup we have to attempt debugging from inside a super limited initramfs system, which is a huge pain in the arse and can be super complicated/time consuming.

Anyway, when a VM has finished booting, it pulls the LUKS passphrase for its second disk from a remote server (in a remote data center) using an SSH key specific to that VM (without letting the passphrase hit disk at any point).

If a server is somehow removed/stolen/etc., our processes remove the public keys of its VMs from the key server, so the VMs aren't able to unlock any encrypted volumes.

Seems to work pretty reliably, though only time will tell longer term.
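A boot-time unlock along those lines might look roughly like this (a minimal sketch; the host name, user, key path, device, and the `get-passphrase` command on the key server are all hypothetical placeholders, not details from the post above):

```shell
#!/bin/sh
# Hypothetical unlock script, run from a systemd unit after the
# network is up inside the VM.
set -eu

KEYSERVER=keyserver.example.com   # remote key server (placeholder)
DEV=/dev/vdb                      # the LUKS-encrypted data disk

# Fetch the passphrase over SSH with this VM's dedicated key and
# pipe it straight into cryptsetup, so it never touches local disk.
ssh -i /etc/luks/vm-unlock-key unlock@"$KEYSERVER" get-passphrase \
  | cryptsetup open "$DEV" data --key-file=-

mount /dev/mapper/data /srv/data
```

Revoking the VM's public key on the key server is then enough to stop a stolen machine from ever unlocking its data disk again.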
 
> Do you encrypt the whole system or the VM/data disks only?
The whole system. In case you want to use swap, make sure to also encrypt it with LUKS so you don't leak sensitive data.
By default, when using ZFS, PVE won't create a swap partition, and you shouldn't put swap on a zvol or in a swap file on a dataset. So if you want swap, it's best to tell the installer to keep some disk space unallocated so you can later partition it manually and create your LUKS-encrypted swap partition.
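One common way to do that is swap re-keyed from `/dev/urandom` on every boot, so there is no passphrase to manage at all (a sketch; `/dev/sda4` is a placeholder for the unallocated partition):

```text
# /etc/crypttab — fresh random key for swap on every boot
cryptswap  /dev/sda4  /dev/urandom  swap,cipher=aes-xts-plain64,size=512

# /etc/fstab — use the mapped device as swap
/dev/mapper/cryptswap  none  swap  sw  0  0
```

The downside of a random key is that hibernation is impossible, since the swap contents cannot be decrypted after a power cycle; for hibernation you would need a fixed-key LUKS swap instead.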

> If you encrypt disks separately, how do you unlock those encrypted disks on boot?
A systemd service. See for example here: https://wiki.archlinux.org/title/ZFS#Unlock/Mount_at_boot_time:_systemd
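In the spirit of that wiki approach, such a unit could look roughly like this (a sketch, assuming the keys are stored as files referenced by each dataset's `keylocation` property; passphrase-prompt datasets would need a different setup):

```text
# /etc/systemd/system/zfs-load-key.service
[Unit]
Description=Load ZFS encryption keys at boot
DefaultDependencies=no
Before=zfs-mount.service
After=zfs-import.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Load keys for all datasets with an available keylocation.
ExecStart=/usr/sbin/zfs load-key -a

[Install]
WantedBy=zfs-mount.service
```

Enabled with `systemctl enable zfs-load-key.service`, this runs after pool import and before the datasets are mounted.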

> If we'd instead used dropbear-initramfs, then if there's ever a problem with the bootup we have to attempt debugging from inside a super limited initramfs system, which is a huge pain in the arse and can be super complicated/time consuming.
Yes, that's true. I've already broken that initramfs twice (it needs patches to add VLAN/bond capabilities so unlocking works when using tagged VLANs or LACP bonds), and it is very annoying to boot a rescue disk, chroot into PVE to fix the initramfs configs, and then rebuild the initramfs.
 
> By default, when using ZFS, PVE won't create a swap partition, and you shouldn't put swap on a zvol or in a swap file on a dataset.

Can't I use a zvol in an encrypted pool? As far as I know, each VM's disk is a zvol. Does this mean that I can't create a VM in an encrypted ZFS pool?
 
Yeah, at the moment it's manual. The general monitoring system (Zabbix) will go batshit if we lose a server and all its VMs, though.

There's no chance we wouldn't notice, and probably super quickly. :cool:

That being said, it's definitely something we could automate down the track.
 
I run one server with full encryption, except for the boot partition.

Layout:
1. The needed partitions are encrypted with LUKS.
2. ZFS uses those partitions as if they were regular partitions.

I use Mandos ( https://www.recompile.se/mandos ) as a tool to automatically load the password from a remote server at boot time, before ZFS is loaded.

If someone steals the server, they will not find any password on it, and I can sleep at night if the server gets an unexpected reboot. It has worked this way for more than 7 years.
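For anyone curious, the basic Mandos setup on Debian-based systems goes roughly like this (a sketch based on the Mandos documentation, not the poster's exact configuration):

```shell
# On the machine that will hand out the secrets:
apt install mandos

# On the encrypted machine (the client):
apt install mandos-client

# Generate the client's key pair and print a ready-made section
# for the server's /etc/mandos/clients.conf (prompts for the
# disk passphrase to embed, encrypted, in that section).
mandos-keygen --password

# Rebuild the initramfs so the Mandos client plugin runs at boot.
update-initramfs -u
```

At boot, the initramfs plugin contacts the Mandos server over the network and receives the passphrase, as long as the server still considers the client alive and legitimate.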
 
> If someone steals the server, they will not find any password
This is the first time I've come across Mandos - it looks interesting. How do you stop a stolen server from still receiving the password from the remote server? Location/IP filtering, etc.?
 
It depends on your situation. I limit by IP address right now. The Mandos server and Mandos client are in different cities.

You can read the introduction here ( https://www.recompile.se/mandos/man/intro.8mandos )

As they say:

> Now, of course the initial RAM disk image is not on the encrypted root file system, so anyone who had physical access could take the Mandos client computer offline and read the disk with their own tools to get the authentication keys used by a client. But, by then the Mandos server should notice that the original server has been offline for too long, and will no longer give out the encrypted key. The timing here is the only real weak point, and the method, frequency and timeout of the server's checking can be adjusted to any desired level of paranoia.
 
> That being said, it's definitely something we could automate down the track.
Idly thinking about this a bit more: I wonder if a QDevice (on the key server) could be told to execute some action whenever it notices a cluster member is out of contact (even briefly)?

For example "rename the key phrase files for host ABC and flag it for someone to manually follow up".

It might or might not be an existing capability of the QDevice software, but as it's open source, it could probably be made to work.
 
