Proxmox Full Disk Encryption with ZFS Raid1 on LUKS | A couple last questions

From my limited knowledge of the topic and a bit of ignorant digging, SED is generally considered insecure. It is not standardised between manufacturers and is often not implemented to all security standards' requirements, so it is easily broken. Basically, it's something to avoid.

If you need "certified" security, some SED drives have offered that for a while too:
https://www.seagate.com/files/www-c...on/en-us/docs/faq-fips-sed-mb605-2-1302us.pdf

SED is (in every case I have encountered) an implementation of AES, which is not unlike what your typical LUKS deployment would provide. If you need it to comply with e.g. FIPS, the manufacturer will tell you exactly what you are procuring, especially for that "datacentre" feel:
https://apac.kioxia.com/en-apac/business/ssd/solution/security.html

If there have been any instances of a major manufacturer's non-FIPS SED being compromised, feel free to link them here.
 
This was the most compelling argument against them that I found:

https://www.reddit.com/r/sysadmin/comments/zg2wza/sed_drives/

Those are probably valid points, but some come from a misunderstanding of the SED implementation. For example, the data is encrypted, and that provides a cryptographic erase capability, so one can run nvme format /dev/xxx --ses=2 before discarding the drives or repurposing them (e.g. for a different customer).
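
To illustrate the cryptographic erase with nvme-cli (the device name here is just a placeholder; check the man page before running this against a real drive):

# cryptographic erase: the drive discards its internal data encryption key
nvme format /dev/nvme0n1 --ses=2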

Of course, if you want it to be secure against e.g. (offline) theft, you would need to involve a boot-time password or, better yet, a TPM. I agree on the points about AES-NI, etc., but if you have lots of drives, you do not really want your CPU to be assisting with that data encryption at all.

Referencing reddit warriors, you might want to have a look at this thread as well:
https://www.reddit.com/r/Proxmox/comments/16q4ctk/selfencrypting_drives_auto_unlock_and_tpm/

With ZFS dataset encryption, are you unlocking it manually over the network every time after boot (on each node)? What happens (not only) in an HA scenario when such a node wants to e.g. watchdog-reboot?
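
For clarity, by unlocking manually I mean something along these lines after every boot (host name is just an example):

ssh root@node1 'zfs load-key -a && zfs mount -a'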
 
How does TPM protect against offline theft? If you steal the whole box, the TPM comes with the box, doesn’t it?
 
How does TPM protect against offline theft? If you steal the whole box, the TPM comes with the box, doesn’t it?

I took the liberty of assuming that offline theft means theft of the drive (or drives) itself. If you steal the whole box (imagine walking out of a datacentre with a 2U like that), then you have an entirely different problem. Does everyone concerned about this have dropbear on their boxes today?
 
Does everyone concerned about this have dropbear on their boxes today?
Here, yes. All you have to do to steal my server is to break a window. No tools or ladder required. You don't even have to enter the building; you could grab it from outside and pull it through the window. You just need some muscle, because those servers are fully populated 4U chassis. ;)
Luckily they have never broken into my apartment yet, but they have broken into the neighbour's next door 7 times.

And for auto unlocking, you could hide a Raspberry Pi Zero somewhere and let that unlock all the PVE nodes. Autounlocking via dropbear works great via a simple expect script: https://forum.proxmox.com/threads/a...ypted-pve-server-over-lan.125067/#post-546466
Ideally the machine doing the autounlocking is encrypted too and requires a passphrase for unlocking it. But not a big problem in case of theft. If you unplug it from the UPS to steal it, it will be locked again. Just make sure to set up NUT to shut down all nodes after running some seconds on battery, so they will also be shut down in case someone tries to steal the whole rack including the UPS ;)
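
Not the exact script from the linked post, but the general shape is roughly this (host, key path, prompt pattern and the inlined passphrase are all assumptions you would adapt to your setup):

#!/usr/bin/expect -f
# ssh into the dropbear initramfs of the node and feed it the passphrase
set timeout 60
spawn ssh -i /root/.ssh/unlock_key root@192.168.1.10 cryptroot-unlock
expect "*nlock disk*"
send -- "my-secret-passphrase\r"
expect eof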
 
Like Dunuin said, not all servers are locked away in secure data centres.
Yes, I've heard of the Raspberry Pi Zero trick.
 
If there have been any instances of a major manufacturer's non-FIPS SED being compromised, feel free to link them here.
From rough memory, there have been a few instances of security researchers looking into self-encrypting drives and finding critical flaws in their implementations.

Some quick searching turns up these:
The stuff I can easily find right now seems to be dated 2015, 2018, and 2019. I'm not seeing anything obvious from 2022, 2023, or 2024.



The general thought pattern I'm seeing through much of the discussion about this stuff (and I suspect it is true) is that the manufacturers issued patches for the specific implementation flaws found above, but probably didn't change anything about their development practices to ensure future flaws don't happen.



Oh wow. Samsung's consumer website is super ironic in this regard for their self-encrypting drives:

Consumer Notice regarding Samsung SSDs

https://semiconductor.samsung.com/consumer-storage/support/notice/
In light of recent reporting for potential breach of self-encrypting SSDs ... [We] recommend installing encryption software (freeware available online) that is compatible with your system.
That quote only applies to "non-portable SSDs", but those are pretty common in entry-level homelabs.
 
All you have to do to steal my server is to break a window

This would be true for most home use, but physical security and data-at-rest encryption do not exactly offset each other.

Autounlocking via dropbear works great via a simple expect script: https://forum.proxmox.com/threads/a...ypted-pve-server-over-lan.125067/#post-546466

I appreciate the ingenuity. I remember that e.g. some cryptsetup versions allowed a shell script to be referenced from a line of crypttab, while others only allowed a keyfile (which could perhaps be fetched into a ramdisk beforehand), so that would allow for an even less hacky approach. I would discount the locally "hidden" SBC, because if it is cable-connected it is not really secure against an attacker who targets the data, and if it is e.g. WiFi, it is just a bit more obscure and unreliable. I can still imagine the keys being located off-site, reachable over e.g. a VPN. Even in that case (as in all the previous ones), you are inherently open to attacks over the network (which are more likely to target the data than the resale value of the equipment) that have all the keys at their disposal.
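
To make the crypttab idea concrete, a minimal sketch of a Debian-style entry (the UUID and the keyscript path are placeholders; keyscript= is a Debian cryptsetup extension and the referenced script just has to print the key on stdout):

# /etc/crypttab
cryptdata  UUID=<uuid-of-luks-partition>  none  luks,initramfs,keyscript=/usr/local/sbin/fetch-key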

Ideally the machine doing the autounlocking is encrypted too and requires a passphrase for unlocking it.

Or, if it is remote, it can simply deny key access at any later point, as it is more likely to remain under your control.

But not a big problem in case of theft.

The whole issue I have with the elaborate setup above is that it fails to address the real threat scenario. Namely, if you have the kind of material stored on the nodes that would require this, then it is most certainly insufficient, for the very reason that the data is accessible in plaintext even when that is not absolutely necessary. That is the actual issue.

If you unplug it from the UPS to steal it, it will be locked again. Just make sure to set up NUT to shut down all nodes after running some seconds on battery, so they will also be shut down in case someone tries to steal the whole rack including the UPS ;)

And that issue will not be solved by adding this kind of complexity, which comes, let's admit it, at the expense of availability; e.g. the whole point of a UPS setup is to be able to run for hours off-grid.
 
From my limited knowledge of the topic and a bit of ignorant digging, SED is generally considered insecure. It is not standardised between manufacturers and is often not implemented to all security standards' requirements, so it is easily broken. Basically, it's something to avoid.
Why do you think it is insecure, and do you have references?
It is not important whether it is standardised; it only means you may have to adapt to the specific model. With sedutil-cli, most drives can be used with SED encryption in a uniform way. I boot up a LUKS-encrypted Proxmox and then unlock the SEDs with sedutil-cli.
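
The unlock step is roughly this (password and device are placeholders; check the sedutil wiki for the exact options for your drive):

sedutil-cli --setLockingRange 0 RW <password> /dev/sdb
sedutil-cli --setMBRDone on <password> /dev/sdb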

So where do you think is the insecurity in this setup?
 
After countless hours, it works finally!

Super happy :)

Huge thanks to everyone that helped.

What I ended up doing was just switching from systemd-boot to GRUB.
I reinstalled with Secure Boot enabled, so it automatically defaulted to GRUB.
I used the GRUB setup for years, and a new installation with EFI ran into the same problem. Where the "outdated" GRUB works out of the box, the "highly flexible, new, state-of-the-art" UEFI/systemd-boot process does not seem to be able to wait for LUKS decryption before mounting the ZFS.

Is there really no solution to this problem, Proxmox team?

It seems I have to redo the whole installation now with GRUB, or maybe invest some more hours to find out how to switch back from systemd/UEFI boot to GRUB boot. It would be nice if documentation for this were added to the wiki.
 
I disabled UEFI boot in the BIOS. I get a GRUB loader screen, but after booting, proxmox-boot-tool still says it was booted with UEFI. It seems to be this third mode.

I think lots of people run systems in weird CSM modes. If you installed it as EFI, it will keep booting that way, subject to what your firmware looks for. I suspect a lot of the confusion (for the booting system) comes from the existence of the BIOS boot partition; I know of at least some firmware that, when it sees GPT, expects UEFI, except when it also sees a BBP, in which case it falls back to BIOS boot (and ignores what was set by the user).
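
If in doubt, the running system itself will tell you how it was actually booted:

# the efi directory only exists when booted via UEFI
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"
proxmox-boot-tool status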

How can I switch back to plain GRUB, as this seems to be the only mode working with LUKS and ZFS on root?

I literally do not want to open this can of worms, sorry. :) I understand what you are after; you should be able to simply install GRUB and have it take over, but what that custom boot tool script is doing is beyond me, as it tries to cater for everything. I am not a fan of ZFS on root (and if I were, I would install it with ZFSBootMenu [1] rather than systemd-boot).

Note that it is entirely possible to install (without any gymnastics) plain Debian with e.g. LVM on LUKS for the system and then, after swapping kernels and all [2], just create ZFS pools for the guests, etc. I find that a much more reasonable approach (rough sketch after the links below).

And then, if you really want something custom and are not afraid of debootstrap, just go your own way [3].

[1] https://docs.zfsbootmenu.org/
[2] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
[3] https://www.dwarmstrong.org/debian-install-zfs/
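
The sketch, assuming you followed [2] to get the PVE kernel and packages onto the Debian install (pool name, disk IDs and storage ID are all examples; double-check the pvesm syntax):

# mirrored pool for the guests only; the system itself stays on LVM on LUKS
zpool create -o ashift=12 guests mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2
# register it as a PVE storage
pvesm add zfspool guests-zfs --pool guests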
 
I just tested a fresh install in VMware Workstation with a plain BIOS GRUB install, and the problem is still there, so something changed in the new Proxmox ISO.

Until now, I could just install with ZFS on root normally, set up a LUKS partition on a separate disk, add it as a mirror to the rpool with the initramfs option in crypttab (and then vice versa for the other disk), run update-initramfs and proxmox-boot-tool init, and it worked.
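
Roughly, the steps were along these lines (device names are examples from memory, not a tested recipe):

# LUKS on the second disk's ZFS partition, then attach it as a mirror
cryptsetup luksFormat /dev/sdb3
cryptsetup open /dev/sdb3 rpool_crypt1
echo 'rpool_crypt1 /dev/sdb3 none luks,initramfs' >> /etc/crypttab
zpool attach rpool /dev/sda3 /dev/mapper/rpool_crypt1
update-initramfs -u -k all
proxmox-boot-tool init /dev/sdb2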

Now the problem is that even though I am asked for a password at boot time, the pool does not get decrypted before ZFS tries to mount it. In the initramfs emergency BusyBox there is no cryptsetup included either. So where could the problem be with the new Proxmox version?

As the Proxmox installer already supports root on ZFS, I would prefer using the standard Proxmox ZFS installation instead of new boot loaders, which could lead to new problems. The only problem is that the previously working initramfs LUKS decryption broke.

How do I set up LUKS with ZFS on root on the new version?

I want to stay with ZFS because I have already had problems where the boot disk developed errors that were not detected by any other file system. With ZFS in a mirror there is no such problem, because data corruption is detected, you get a notice, and you can replace the drive and re-mirror without fearing that you might have to reinstall because of data corruption.
 
Until now, I could just install with ZFS on root normally, set up a LUKS partition on a separate disk, add it as a mirror to the rpool with the initramfs option in crypttab (and then vice versa for the other disk), run update-initramfs and proxmox-boot-tool init, and it worked.

Now the problem is that even though I am asked for a password at boot time, the pool does not get decrypted before ZFS tries to mount it. In the initramfs emergency BusyBox there is no cryptsetup included either. So where could the problem be with the new Proxmox version?

You are basically discovering that when one system (PVE) makes a feature custom (ZFS) and another system (Debian) supports another feature (LUKS) out of the box, they do not necessarily cater for each other's use case; you have to. And the part that would make me most uneasy is that there is custom scripting like proxmox-boot-tool doing its hocus-pocus.

As the Proxmox installer already supports root on ZFS, I would prefer using the standard Proxmox ZFS installation instead of new boot loaders, which could lead to new problems. The only problem is that the previously working initramfs LUKS decryption broke.

How do I set up LUKS with ZFS on root on the new version?

I think you should create a new thread with this in the title, as someone running that setup might be quicker to help.

I want to stay with ZFS because I have already had problems where the boot disk developed errors that were not detected by any other file system.

Any other filesystem can be used with dm-integrity [1], and the Debian installer also allows for BTRFS on LUKS.

[1] https://docs.kernel.org/admin-guide/device-mapper/dm-integrity.html
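
For the record, a minimal sketch of the dm-integrity route (the device is a placeholder; the LUKS2 authenticated-encryption mode was still flagged experimental by cryptsetup last time I checked):

# standalone dm-integrity: checksums only, no encryption
integritysetup format /dev/sdX
integritysetup open /dev/sdX idata
# or combine integrity with encryption in one LUKS2 step
cryptsetup luksFormat --type luks2 --integrity hmac-sha256 /dev/sdX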
 
