Proxmox 8 - LUKS encryption question

Hello everyone,

I have been thinking about fully encrypting my Proxmox 8 server, which is located in an external data center. I would like to use LUKS so that the encryption passphrase is requested at boot, which I can then enter via SSH.

Now I have found the following guide and am having some trouble understanding it.

https://blog.berrnd.de/proxmox-auf-verschluesseltem-software-raid-mdraid-lvm-luks

I would like to partition the 1 TB NVMe disks as follows:
20 GB for /, where the Debian (Proxmox) OS lives
980 GB for /var/lib/vz, the "local" datastore
2 GB swap

I would like to have both the system and the datastore in RAID 1. What do I have to do, and in what order? I can't quite follow the instructions.


I have also found another option here. Can I apply it to an existing installation, with the same parameters as above? Is that still possible?

https://dustri.org/b/hardening-proxmox-against-physical-attacks.html

What would you recommend?
 
Hello everyone,

Has anyone already completely encrypted an existing Proxmox installation?

My hoster installed Proxmox from a template as follows:

2 x 960 GB NVMe

Partition 1: 20 GB (Proxmox OS)
Partition 2: 950 GB (VMs /var/lib/vz)
Partition 3: 2 GB (SWAP)

Both disks are in RAID 1.

Would you recommend encrypting it after the fact, or would a complete reinstallation via the KVM console be better?

Thank you!
 

Hey, I also saw your post under the tutorial; I will leave that one for its OP to answer if he can. But I wonder why you want to encrypt it (seriously, what is the specific objective?) and how.

Side questions:

1) Do you have any idea of the speed of those NVMes? Did you run cryptsetup benchmark and fio to see how much you would potentially slow them down?

2) Is the reason for the RAID performance?

3) Is this just a single-node PVE deployment, and will it remain so?

4) Do you think you need swap at all, and if so, did you consider zram/compcache instead?

5) This is hosted for you, i.e. physically managed by someone else, correct?

As for your last question ... I would probably prefer a Debian install with whatever drive layout I like, then add PVE on top via apt install.
 
Hi and thanks for your quick feedback!

My idea was to encrypt the host because it is operated in an external data center; of course, only the hoster has physical access to the hard disks. The idea was simply to encrypt the system itself.

Is the guide from Javex also intended for encrypting Proxmox after the fact? My hoster installs the server from a template (two partitions: Proxmox base system and Proxmox data). The software RAID is there to cover a technical failure of one hard disk.

Now I would like to encrypt the system afterwards and wanted to ask how best to do this.

Thank you very much!
 

Hey!

He wrote it specifically for an existing install, but basically PVE is just extras on top of Debian with a custom kernel. It is all agnostic to the storage layout you have, however many layers there are. One person may want LUKS on the raw drive and then LVM; another may want LVM with LUKS on certain partitions only. The tutorial does full-disk encryption (FDE) of an already installed system. It also nicely includes putting dropbear into the initramfs (search for that term if you don't want to read it all in sequence :)), which means that even without out-of-band access you could still enter the passphrase at startup.
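For orientation, a minimal sketch of what the dropbear part typically involves on a Debian-based system (the authorized_keys path differs between releases, and the tutorial may do it slightly differently; the key and address below are placeholders):

Code:
apt install dropbear-initramfs
# put your SSH public key where the initramfs sshd expects it
# (newer Debian: /etc/dropbear/initramfs/, older releases: /etc/dropbear-initramfs/)
echo "ssh-ed25519 AAAA... admin@workstation" >> /etc/dropbear/initramfs/authorized_keys
update-initramfs -u -k all
# at boot: ssh root@<server-ip> into the initramfs and run cryptroot-unlock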

The thing is, if I could have 2 nodes in a cluster rather than 2 NVMes in a RAID on a single node, I would choose that, as it is the more resilient setup. If you do FDE like LUKS on NVMe, I would benchmark the NVMe and the machine's encryption speed first, as NVMe drives often support SED, which offloads the encryption from the system entirely. Especially once you also encrypt swap (which you should be doing if you are serious about encryption), depending on your overall setup it might become a real bottleneck.
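On the swap point: if swap does not end up inside an already-encrypted LVM, one common pattern is to re-encrypt it with a throwaway random key on every boot. A sketch, assuming a hypothetical /dev/md/swap device (note this makes hibernation impossible, since the key never survives a reboot):

Code:
# /etc/crypttab - swap gets a fresh random key at each boot
swap_crypt  /dev/md/swap  /dev/urandom  swap,cipher=aes-xts-plain64,size=512

# /etc/fstab
/dev/mapper/swap_crypt  none  swap  sw  0  0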

The thing is, upon reboot, unless you implement some tweaks in the initramfs (e.g. so that it fetches the encryption passphrase from outside the data centre), you will have to enter the passphrase manually. When you said your hoster installed it for you, I was wondering whether you have out-of-band access (iLO, iDRAC, if it's a dedicated server).

It's a good idea to have some threat profile in mind, e.g. "I want to prevent a cleartext data leak in case I need to RMA a hard drive", or "not even the data center should have any idea what's running here, even if they tap all the interfaces" (that's hard ;)). After that, you decide what you want to encrypt and how. The easiest (and most practical) option is of course to encrypt only the extra partitions (other than the system), but the tutorial lets you copy your system out and then back in after you have created LUKS underneath it. It's all up to you.

Cheeky note: if you are serious about security, you would not run something from a template you had not created yourself, let alone one someone else deployed for you; putting a black/brown box onto encrypted LUKS adds no benefit (not even for the healthy paranoia ;)).
 
Hey! And thanks again for the quick feedback! ;)

Okay, I have now booted my system via a Debian live ISO (yes, with KVM/iLO), and I would like to at least try to encrypt the system after the fact.

The system has two NVMe disks with three partitions each (system, data and swap). Both disks are in RAID 1. Following the guide, how and where do I copy the old system out, and how do I then put it back onto the encrypted disk? Unfortunately, I do not have an additional disk available.

How would I proceed in such a case to encrypt the existing system? I would like to encrypt both disks and all three partitions (system, data and swap).

Thank you!
 

It's not that simple. :) First of all, I assume that is a hardware RAID controller, so whatever you do on the array will happen to both drives.

Is this a live system with production workloads running on it? If not, I would just install it from scratch: prepare your partition layout from the live Debian (you can take this part from the tutorial), then install.

If you want to copy, you have to copy the system out somewhere first. If it's a fresh system it's not going to be large (maybe 3 GB), and you could even copy it out over the network and then back in.
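A rough sketch of that copy-out/copy-back over the network (host names and mount points are placeholders; rsync -aHAX would work just as well):

Code:
# from the live system, old root mounted read-only at /mnt/oldroot
tar -cpzf - -C /mnt/oldroot . | ssh user@backuphost 'cat > pve-root.tar.gz'

# later, after LUKS + filesystem are created and the new root is mounted at /mnt/newroot
ssh user@backuphost 'cat pve-root.tar.gz' | tar -xpzf - -C /mnt/newroot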

The tutorial basically covers your case. It LUKS-encrypts everything except for the boot partition.
 
Hello, in the meantime I have read up some more and done a test install of Proxmox with ZFS. ZFS also offers encryption. Unfortunately, the Proxmox installer does not offer encryption directly during installation, so I would have to set that up afterwards. Any experience with ZFS? (For reference: the system has 64 GB RAM, and no, there is no hardware RAID controller; only software RAID is possible!) I have found the following guide: https://privsec.dev/posts/linux/using-native-zfs-encryption-with-proxmox/

In the standard Proxmox installation with ZFS, the installer unfortunately only creates one large partition for me, but I would like to install the Proxmox operating system on a smaller "root" partition. Is this also possible via the Proxmox installer? I was able to select ZFS RAID1, but not how big the partitions should be, as with a normal RAID1 installation.


(no, the machine is not in production yet ;) )
 

Yes, but I do not like ZFS for the root itself (personal preference), so I do not usually go with the ISO installer (it also has no LUKS option). I suggested a fresh Debian install with drives pre-partitioned from the live system for a reason: it is by all means the simplest option possible. It is easy to mount a small Debian netinstall ISO and let it install a minimal system onto a partition table you pre-created, where you e.g. leave at least 10 GB for root (you can have that on LVM; then maybe give it 20-30 GB, as you may want to do things with it later, including putting your swap there or resizing it), plus 0.5-1 GB for /boot (this will not be encrypted) as per the tutorial (+ an EFI partition, unless it's a BIOS system). Once the partition layout is in place, you just point the Debian installer at where /boot and / are ... once in, you add ZFS support and make use of the remaining space (you can create that as an extra partition in fdisk, and from there it's all ZFS). Whether you want to use ZFS encryption without LUKS, or LUKS underneath, is really your call; I prefer LUKS (with ZFS not encrypting anything).

A ZFS install with root on ZFS and ZFS encryption ... it's certainly doable, but it has nothing to do with the tutorial and is definitely not a Debian out-of-the-box setup (Ubuntu tried that, without encryption, and gave up) ... I did not like it for maintainability's sake.

But I could definitely encourage:
1) Live boot, partition: 0.5 GB EFI, 1 GB boot, rest of the drive LUKS (roughly the first 30 GB inside it as LVM, or more if you want, the rest left for ZFS; or if you go with LVM only, then all of it LVM)
2) Inside LUKS, make LVM, e.g. 30 GB. I like to split / from /var and /tmp; some people like to split off /home, but I do not see the point on a hypervisor.
3) Still inside LUKS, make the rest an extra partition for ZFS (NOT LVM!)

I think you first have to really decide what you want to end up with; there are many variables, and I have had one thing in the back of my mind all along ... is this a good idea on NVMe when you have not even checked whether those are e.g. SED drives, and what your cryptsetup benchmark results look like?
 
In the standard Proxmox installation with ZFS, the installer unfortunately only creates one large partition for me, but I would like to install the Proxmox operating system on a smaller "root" partition. Is this also possible via the Proxmox installer?

This is not how ZFS works; ZFS is basically a partition manager + file system in one. You give it the whole chunk of space you want it to have (the pool), and the "partitions" you create (called datasets) just take up what they take up. So if you see the PVE root within ZFS, it is not taking up the entire drive; it is a dataset of the ZFS pool that probably only takes up about 2 GB (and can grow, up to the pool's size, if you don't set quotas).
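To illustrate with a quick sketch (rpool is the PVE installer's default pool name; the dataset name and sizes here are made up):

Code:
zfs list -o name,used,avail,refer,mountpoint   # datasets only use what they actually contain
zfs create -o quota=500G rpool/data            # new dataset with an upper cap
zfs set quota=20G rpool/ROOT                   # cap an existing dataset after the fact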

You have multiple things to explore at the same time. E.g. learn about ZFS (it's a very different way of doing things, great for VMs/CTs), learn about LUKS, learn about booting from LUKS and how to layer it. That's already enough for a start. :) I would however, if it were my system, want to know what cryptsetup benchmark shows me and what the fio results of my NVMes are, before putting it into any particular setup.

EDIT: Out of the two, mdadm and ZFS, I would go with ZFS. Btw, LVM can create mirrored volumes too.
 
Hey, okay, I get it. I will now install a Debian live system and try to prepare the partitions. They should look like you described:
0.5 GB EFI, 1 GB boot, 20 GB Proxmox system, the rest for VMs and data (/var/lib/vz)

And I would like to create a RAID 1 across both disks so that one disk can fail. Finally, I would like to encrypt the 20 GB Proxmox partition and the rest (VMs and data, /var/lib/vz) using LUKS. Would the following be the right way to proceed?


[Screenshot: planned partition layout]



Is that configuration right? Sorry for the stupid question :D.

On my old system it looks like this (the hoster's template installation):


[Screenshot: current partition layout from the hoster template]


I can do the installation via KVM. Will I still be able to change the LUKS key later without having to enter it via the KVM? Or can I only set up dropbear after the installation?

Yes, the disks are WD datacenter NVMes. I have checked that.

EDIT: At the end I got an error message, "No EFI partition was found", but I was able to skip it. On my old system I do find an EFI partition, but it is not in the RAID and is only on the first NVMe. I have now created the RAID 1 with mdadm. Can I also install ZFS from within the live Debian ISO, or do I do that afterwards?
 

I see you want a step-by-step. :) But I cannot do that. For one, how exactly (where) are you setting up the RAID? Is this software RAID, but done via iLO?

The moment you go with RAID like this, you should basically not be using ZFS; it makes little sense in that setup. That's fine, you can totally have it on LVM as a thin volume (the standard PVE install does that), but just let that be clear.
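For reference, a sketch of the LVM-thin route for the data space (the VG, pool and storage names here are made up):

Code:
lvcreate -L 800G --thinpool data vg_data
pvesm add lvmthin local-thin --vgname vg_data --thinpool data --content images,rootdir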

The reason I have been asking about the drives all along is ... if you just live-boot ... and run (literally on the command line) cryptsetup benchmark ... it will show you how fast the encryption will be on that system, well, for the different encryption algorithms anyway. You can then choose the one which makes the most sense performance- and security-wise (and use it as an option in the luksFormat command later). I never set up LUKS without looking at this first, definitely not with NVMe drives.

If you know your exact NVMe drive types (and believe the manufacturer specs), or run a couple of fio tests, especially with a higher queue depth, you can compare that with the LUKS benchmark results. Because what's the point of having fast NVMes if you slow them down with LUKS?
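A minimal sketch of that check from a live system (the device name is just an example; the fio job below is read-only, so it does not touch your data):

Code:
cryptsetup benchmark   # in-memory cipher throughput on this CPU, per algorithm
fio --name=seqread --filename=/dev/nvme0n1 --readonly --direct=1 \
    --rw=read --bs=1M --iodepth=32 --ioengine=libaio --runtime=30 --time_based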

Check https://www.trentonsystems.com/blog/self-encrypting-drives and see whether your NVMes support that instead.
 
Hey,

Yes, exactly. I booted from a Debian live CD via KVM, as you recommended above. Then I created partitions on both NVMes and put a RAID 1 over both disks.

All right, I'll check my NVMe disks right after the installation. Another question:

- An installation without encryption, with ZFS as the file system, would still bring the advantages of ZFS, correct? However, this is a single Proxmox server that will not be expanded for the time being. So I would only benefit from the compression and so on, correct?

- From this point of view, would it be better to use ZFS or LVM? ZFS needs some RAM, as far as I know.


Finally, one conceivable option would be
1 GB boot partition
0.5 GB EFI
20 GB Proxmox

Encrypt the 20 GB in LVM, possibly with LUKS.
Configure the remaining space after installation with ZFS and use it to store the CTs and VMs, correct? ;-) Encrypt that remaining space with LUKS as well, but do not add it to the LVM. Correct?



Finally, what would you recommend for a Proxmox system in an external data center? Does it make sense to encrypt the data (of course after checking the hard disks ;) )? Or in this case (single server) would it be better to use LVM with an mdadm RAID 1?
 

I really would like to help you, but it is not possible in a coherent way; there are too many moving variables. I am not sure what exactly you created the individual partitions on (if you press on with RAID, that should be there first).

Yes, I recommended you partition it yourself (easy to do from a live system) and recommended you install Debian and PVE on top; there is no harm doing that any time you want anything more or different than what the standard ISO provides.

You went ahead with the HP array and set up RAID 1 for the 2 disks; already there I cannot tell what impact it might have on the NVMe performance, so I would test before and after. If I were very stubborn about classical RAID (the HP does it in software anyway), I would do it via mdadm, as that makes the drives more portable if needed later.

I am afraid I lost you the moment you went with the HP array software RAID; from that point on, the truth is still:
- you can do and benefit from LUKS (but it will likely adversely affect your performance, and it is completely unnecessary, for reasonable security needs, if your NVMes have e.g. TCG Opal support)
- you can do LVM (and a thin pool anyway)
- it makes little sense to do anything ZFS
- none of the above has much to do with a single-node or cluster setup

ZFS-related questions, even without its built-in encryption ... the way we explored it, it would all have been on top of LUKS (but not on RAID), so that covers the security side. Advantages of ZFS would be snapshots, for example, data integrity (you may argue you get some of that with RAID too) and the features you get with ZFS datasets (thin provisioning by definition), but also tools like zfs send/receive to another location, etc. But I feel it is not worth exploring the moment you put HP RAID under it. (It is technically possible to have ZFS on a single GPT partition that happens to be on a RAID array, but if anyone here is doing that, please chip in.)

I think you have to start thinking of all this in layers (this is why I cannot answer your other questions directly). You have (if you use it) RAID on top of the bare metal, then you have the drives (or the array); there is also the possibility to do mdadm RAID there (lots of people prefer this if they cannot have hardware RAID); now you have a block device (or devices, if you did not put them in RAID). What you do on that block device, and in which order, is entirely up to you. Most people who want FULL DISK encryption put LUKS on the raw device, albeit probably inside a GPT partition at the least (so that something else does not overwrite it). Now the chicken-and-egg problem is how to boot off an encrypted disk. So unless you have a separate device for the boot partition where the initramfs lives, you need at least the boot partition unencrypted. That usually means keeping the EFI and boot partitions plain on the GPT table and making the rest one partition with LUKS over it. This is not a must, however; you can keep partitioning, e.g. with LVM, and simply put LUKS on those individual block devices. How you layer all of this is entirely up to you. ZFS is a bit special because datasets are not block devices (except zvols, but I do not want to go there now). So to keep it all simple, you typically want to put LUKS as low as possible.

Ideal scenario, totally feasible btw: put the EFI and boot partitions on something like a SATA DOM, an SD card or an internal USB stick to spring off. Then your whole main drives can be LUKS encrypted. But we would go full circle here, because then you could have those two drives really be ZFS (and yes, they would contain the root within).

You can go play around, see what your lsblk output looks like, then come back, post it and ask what might be a problem with it; that kind of question is easier to answer.

To your last question: given that those are NVMes, I would first really check whether they are not already SEDs, and if I could turn that on, I would rather use it (than LUKS). I would also not consider the hypervisor itself, i.e. the OS partition, critical enough to encrypt at all. In fact, you might simply consider whether it is not best to just use LUKS inside the individual VMs whose data you consider sensitive.

My concern in these cases would mostly be what happens to my data when those NVMes e.g. get discarded. But for that, SED really is more than enough.
 
Hello,

I have been reading for quite a while now and trying to gather more knowledge.

My current plan is as follows, and I would like to know whether it sounds plausible and logical.

I would like to install Proxmox via a Debian live CD and split the partitions as follows:

20 GB (Proxmox OS, RAID 1 via mdadm), /, LUKS-encrypted, ext4

1 GB /boot (RAID 1 via mdadm), unencrypted

2 GB swap (RAID 1 via mdadm), LUKS-encrypted

512 MB EFI partition

950 GB (Proxmox data), /var/lib/vz, LUKS-encrypted, but as a ZFS file system using ZFS's own RAID 1 (i.e. NOT in the mdadm RAID!)

Is this a viable solution? Or does it make no sense to combine LUKS and ZFS? Would it be better to use the encryption that ZFS itself provides?

---------------------------------------------
If the solution described above is not recommended, I would instead set up the system completely on ZFS, so that I have a root and a data ZFS dataset, and limit them accordingly so that they cannot occupy all the resources.
 
Alright, I would not call this a tutorial, but it's a starting point:

USING LIVE IMAGE ISO: https://cdimage.debian.org/debian-c...-hybrid/debian-live-12.2.0-amd64-standard.iso

THIS WILL BE DESTRUCTIVE TO YOUR DRIVES.

Got into the live system first:

Code:
$ sudo su
# cd
# apt install gdisk dosfstools cryptsetup

Now I have sda and sdb drives (you will have NVMes there); create partitions for EFI, boot, the OS LVM and anything future (ZFS for now):
Code:
# sgdisk -Z /dev/sda
# sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"efi" /dev/sda
# sgdisk -n 2:0:+1G -t 2:fd00 -c 2:"boot" /dev/sda
# sgdisk -n 3:0:+24G -t 3:fd00 -c 3:"crypt_lvm_os" /dev/sda
# sgdisk -n 4:0:0 -t 4:bf01 -c 4:"zpool_member" /dev/sda

If you want to look up the partition type codes and see the partition table now:
Code:
# sgdisk -L

# sgdisk -p /dev/sda

Or a nicer way:
Code:
# lsblk -o +PARTLABEL,SIZE,FSTYPE,LABEL

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS                                    PARTLABEL     SIZE FSTYPE   LABEL
loop0    7:0    0  981M  1 loop /usr/lib/live/mount/rootfs/filesystem.squashfs               981M squashfs
                                /run/live/rootfs/filesystem.squashfs                              
sda      8:0    0  100G  0 disk                                                              100G
├─sda1   8:1    0  512M  0 part                                                efi           512M
├─sda2   8:2    0    1G  0 part                                                boot            1G
├─sda3   8:3    0   24G  0 part                                                crypt_lvm_os   24G
└─sda4   8:4    0 74.5G  0 part                                                zpool_member 74.5G
sdb      8:16   0  100G  0 disk                                                              100G
├─sdb1   8:17   0  512M  0 part                                                efi           512M
├─sdb2   8:18   0    1G  0 part                                                boot            1G
├─sdb3   8:19   0   24G  0 part                                                crypt_lvm_os   24G
└─sdb4   8:20   0 74.5G  0 part                                                zpool_member 74.5G
sr0     11:0    1  1.4G  0 rom  /usr/lib/live/mount/medium                                   1.4G iso9660  d-live 12.2.0 st amd64
                                /run/live/medium

You have to do the same for the second drive as well (sdb in my case).

Now the RAID:
Code:
# mdadm --create /dev/md/boot --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2
# mdadm --create /dev/md/crypt_lvm_os --level=1 --raid-disks=2 /dev/sda3 /dev/sdb3

At any point, you may want to check what's happening with lsblk.

Now the LUKS:
Code:
# cryptsetup -y -v luksFormat /dev/md/crypt_lvm_os
# cryptsetup luksOpen /dev/md/crypt_lvm_os lvm_os

Now the LVM (will contain root and swap for now) inside the LUKS:
Code:
# pvcreate /dev/mapper/lvm_os
# vgcreate vg_os /dev/mapper/lvm_os
# lvcreate -Z y -L 8GB --name root vg_os
# lvcreate -Z y -L 4GB --name swap vg_os
The extra space is for an LVM snapshot if you were to make a live system backup, or simply in anticipation of needing more for swap or an extra /var or /tmp.

One last check:
Code:
sda                  8:0    0  100G  0 disk                                                               100G          
├─sda1               8:1    0  512M  0 part                                                 efi           512M vfat    
├─sda2               8:2    0    1G  0 part                                                 boot            1G linux_raid_member debian:boot
│ └─md127            9:127  0 1022M  0 raid1                                                             1022M          
├─sda3               8:3    0   24G  0 part                                                 crypt_lvm_os   24G linux_raid_member debian:crypt_lvm_os
│ └─md126            9:126  0   24G  0 raid1                                                               24G crypto_LUKS
│   └─lvm_os       253:0    0   24G  0 crypt                                                               24G LVM2_member
│     ├─vg_os-root 253:1    0    8G  0 lvm                                                                  8G          
│     └─vg_os-swap 253:2    0    4G  0 lvm                                                                  4G          
└─sda4               8:4    0 74.5G  0 part                                                 zpool_member 74.5G          
sdb                  8:16   0  100G  0 disk                                                               100G          
├─sdb1               8:17   0  512M  0 part                                                 efi           512M          
├─sdb2               8:18   0    1G  0 part                                                 boot            1G linux_raid_member debian:boot
│ └─md127            9:127  0 1022M  0 raid1                                                             1022M          
├─sdb3               8:19   0   24G  0 part                                                 crypt_lvm_os   24G linux_raid_member debian:crypt_lvm_os
│ └─md126            9:126  0   24G  0 raid1                                                               24G crypto_LUKS
│   └─lvm_os       253:0    0   24G  0 crypt                                                               24G LVM2_member
│     ├─vg_os-root 253:1    0    8G  0 lvm                                                                  8G          
│     └─vg_os-swap 253:2    0    4G  0 lvm                                                                  4G          
└─sdb4               8:20   0 74.5G  0 part                                                 zpool_member 74.5G

And (but do not remove the installation ISO):
Code:
# reboot

Get to the installer -> Advanced options -> (text mode for me) -> expert install ...
Under -> Installer components, select: crypto-dm, fdisk, rescue
FAST FORWARD to just before -> Detect disks, but run Execute a shell instead:

Code:
# mdadm --assemble --scan
# cryptsetup luksOpen /dev/md/crypt_lvm_os lvm_os
# exit

NOW BACK TO ->Detect disks and then ->Partition disks -> Manual
You should see everything already there, LUKS crypt open and LVM parts available.
You just need to set mountpoints and partition uses for the installer:
- Under LVM VG vg_os, LV root #1: select Use as: Ext4, Mountpoint: / - then Done
- Under LVM VG vg_os, LV swap #1: select Use as: swap area - then Done
- Under the RAID device (the 1 GB partition): Use as: Ext2, Mountpoint: /boot - then Done
- The ESP should already have been recognized and selected as Use as: UEFI
And -> Finish.
Enable contrib software during the rest of the install so that it is quicker to get ZFS on board later.

And ... bummer ... I got dropped to the initramfs shell ... anyhow, for now I manually had to:
Code:
# cryptsetup luksOpen /dev/md/crypt_lvm_os lvm_os
# vgchange -ay
# exit

And it booted in. I have to go now; this should be fixable later on (missing entry in /etc/crypttab), and the EFI partition would also be worth cloning.
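A sketch of those two fixes once booted into the new system (device names as in the example above; double-check the UUID and the GRUB loader path, grubx64.efi vs shimx64.efi, on your box):

Code:
# reference the LUKS container in /etc/crypttab so the initramfs opens it itself
blkid /dev/md/crypt_lvm_os
echo "lvm_os UUID=<uuid-from-blkid> none luks" >> /etc/crypttab
update-initramfs -u -k all

# clone the ESP to the second drive so it can still boot if the first one dies
dd if=/dev/sda1 of=/dev/sdb1 bs=1M
efibootmgr -c -d /dev/sdb -p 1 -L "debian (fallback)" -l '\EFI\debian\grubx64.efi'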

You can go on installing PVE on top (that's documented) and test. And search around for how to fix your initramfs and add ZFS. ;)
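The PVE-on-top part roughly follows the official "Install Proxmox VE on Debian 12 Bookworm" wiki article (check it for the current details; it also has you switch to the Proxmox kernel and remove os-prober):

Code:
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi chrony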
 
Hey!

First of all, thank you very much!

I have now carried out the installation accordingly; below is my partition layout.

Code:
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda               8:0    1     0B  0 disk
sr0              11:0    1  1024M  0 rom  
nvme1n1         259:0    0 953.9G  0 disk
├─nvme1n1p1     259:2    0   510M  0 part
├─nvme1n1p2     259:3    0     1G  0 part
│ └─md0           9:0    0     1G  0 raid1 /boot
├─nvme1n1p3     259:4    0  22.1G  0 part
│ └─md1           9:1    0  22.1G  0 raid1
│   └─md1_crypt 252:0    0    22G  0 crypt
│     ├─vg-swap 252:1    0     2G  0 lvm   [SWAP]
│     └─vg-os   252:2    0    20G  0 lvm   /
└─nvme1n1p4     259:5    0 930.3G  0 part
nvme0n1         259:1    0 953.9G  0 disk
├─nvme0n1p1     259:6    0   510M  0 part  /boot/efi
├─nvme0n1p2     259:7    0     1G  0 part
│ └─md0           9:0    0     1G  0 raid1 /boot
├─nvme0n1p3     259:8    0  22.1G  0 part
│ └─md1           9:1    0  22.1G  0 raid1
│   └─md1_crypt 252:0    0    22G  0 crypt
│     ├─vg-swap 252:1    0     2G  0 lvm   [SWAP]
│     └─vg-os   252:2    0    20G  0 lvm   /
└─nvme0n1p4     259:9    0 930.3G  0 part

I now have a RAID 1 and an LVM.
The Proxmox OS and the swap are inside the LVM, and the LVM sits on LUKS. I am asked for the passphrase when booting, dropbear is already running and everything works :)

Now I have the two following partitions here:
nvme1n1p4 259:5 0 930.3G 0 part and nvme0n1p4 259:9 0 930.3G 0 part
These have not yet been mounted and are not in use.

I have now also encrypted these two partitions using LUKS.
Code:
sudo cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 -y /dev/nvme1n1p4
sudo cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 -y /dev/nvme0n1p4


Then I opened them as mapped devices with the following commands:


1st partition:
Code:
sudo cryptsetup luksOpen /dev/nvme1n1p4 block1

2nd partition:
Code:
sudo cryptsetup luksOpen /dev/nvme0n1p4 block2

I then formatted both as ext4.

Code:
mkfs.ext4 /dev/mapper/block1
mkfs.ext4 /dev/mapper/block2

Finally, I created a ZFS pool (RAID1).

Code:
zpool create -f -o ashift=12 data mirror /dev/mapper/block1 /dev/mapper/block2

I now use this ZFS pool in Proxmox for VMs and containers.
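(For anyone following along: registering such a pool as PVE storage can be done via the GUI or roughly like this; the storage ID is arbitrary and this is just a sketch.)

Code:
pvesm add zfspool data --pool data --content images,rootdir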

Is this configuration of ZFS ok?

I have read from Proxmox that ZFS encryption has experimental status.

At boot, I now want to automatically decrypt the two devices block1 and block2 using a key file (LUKS) so that ZFS can access them.
 

Excellent! I would have gone with different sizing and some spare space, but in case of any trouble, copying out the content, rearranging just that LVM inside and copying it back is not such a chore either. I'm glad there were no issues getting it up and running; I was in a bit of a rush dry-running it here.

Now I have the two following partitions here:
nvme1n1p4 259:5 0 930.3G 0 part and nvme0n1p4 259:9 0 930.3G 0 part
These have not yet been mounted and are not in use.

I have now also encrypted these two partitions using LUKS.
Code:
sudo cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 -y /dev/nvme1n1p4
sudo cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 -y /dev/nvme0n1p4


Then inserted them as virtual devices with the following command:


1st partition:
Code:
sudo cryptsetup luksOpen /dev/nvme1n1p4 block1

2nd partition:
Code:
sudo cryptsetup luksOpen /dev/nvme0n1p4 block2

That's all good, just don't get confused later with the numbering (it does not matter for functionality, you could even name them by UUID), and you should have them in crypttab by UUIDs or partlabels (which is what I prefer and use).

I then formatted both as ext4.

Code:
mkfs.ext4 /dev/mapper/block1
mkfs.ext4 /dev/mapper/block2

So no adverse effect here, but the way ZFS works, it takes the block device and acts as partition manager + filesystem at the same time, so your ext4 got wiped by your next command. You may want to read up on ZFS separately; lots of things work differently (including mounts, unless you go with legacy mountpoints).

Finally, I created a ZFS pool (RAID1).

Even the naming conventions are different with ZFS: you made a RAID1-like mirror. Confusingly, there is RAIDZ (sometimes called RAIDZ1) in ZFS, which is RAID5-like. Just be careful with the conventions if you e.g. seek advice later on the forum from ZFS-focused people.

Code:
zpool create -f -o ashift=12 data mirror /dev/mapper/block1 /dev/mapper/block2

I now use this ZFS share in Proxmox for VMs and containers.

Is this configuration of ZFS ok?

It is what you wanted: it is full-disk encrypted below the ZFS layer, and it gives you the benefit of a ZFS mirror. You can then go and explore what else datasets and zvols can provide. You will probably like the auto-snapshots and all that, zfs send/receive, etc. Btw, here again the naming conventions sometimes mean something different than they do in LVM (which can also do mirrors and snapshots, though those are meant to be short-term only).
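A tiny, hypothetical example of that (the dataset and host names are made up):

Code:
zfs snapshot data/subvol-101-disk-0@before-upgrade
zfs rollback data/subvol-101-disk-0@before-upgrade    # instant local restore
zfs send data/subvol-101-disk-0@before-upgrade | ssh user@backuphost zfs recv backup/subvol-101-disk-0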

I have read from Proxmox that ZFS encryption has experimental status.

Well, what should I say? :) I think it is good enough, but it's like Debian stable vs. testing and their naming conventions. With native ZFS encryption you benefit if you also use e.g. deduplication (these are all really ZFS topics you can explore and experiment with as you go; don't just randomly turn it on globally). BUT native ZFS encryption is per dataset only, and e.g. it does not encrypt metadata. Again, you could benchmark this, because obviously with 2 NVMe drives you have double the encryption going on with LUKS; native ZFS encryption would not have that issue.

But this is the reason I left it as an extra partition in the GPT table: you can do whatever you want with it. You can make it another LVM (thin), you can put ext4 on LUKS or try BTRFS (might be interesting to test, though it will also be "experimental" as per the Proxmox staff's official stance), or you can even make it another md device with LUKS over RAID1 and an ordinary filesystem on top. It's 2 block devices, available any time you need them.
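
Just to sketch that last variant (md RAID1 with LUKS on top and an ordinary filesystem), using your partition names; everything else is hypothetical and would of course replace the ZFS pool you already created on those partitions:

Code:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme0n1p4 /dev/nvme1n1p4
cryptsetup luksFormat /dev/md2
cryptsetup luksOpen /dev/md2 data_crypt
mkfs.ext4 /dev/mapper/data_crypt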

When booting, I would now like to automatically decrypt the two devices block1 and block2 using a key file (LUKS) so that ZFS can access them.

So I hope that key is stored on the root partition and not the boot one. :D
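
A minimal sketch of that keyfile approach, assuming the keyfile lives somewhere on the encrypted root (the path is made up):

Code:
# create a random keyfile on the already-encrypted root filesystem
dd if=/dev/urandom of=/root/luks-data.key bs=512 count=4
chmod 0400 /root/luks-data.key
# add it as an extra key slot to both LUKS containers (the passphrase keeps working)
cryptsetup luksAddKey /dev/nvme1n1p4 /root/luks-data.key
cryptsetup luksAddKey /dev/nvme0n1p4 /root/luks-data.key
# then point /etc/crypttab at it so block1/block2 open automatically at boot:
#   block1  UUID=<uuid-of-nvme1n1p4>  /root/luks-data.key  luks
#   block2  UUID=<uuid-of-nvme0n1p4>  /root/luks-data.key  luks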

Yeah, one more thing: the ESP partition. I will admit the last time I used mdadm myself was in BIOS times, when it was easier to just put GRUB into both MBRs. Now, as far as I know, you cannot nicely put the EFI system partition on mdadm, but you could certainly clone it, so that in case of a disk failure the machine still boots. I found it beyond the scope of what you had been asking about, though, and it would need some experimenting. This also differs across distributions: I remember it was possible in Fedora to have the ESP appear as mdadm once booted (it was already using superblocks at the end of the device, and you would just mkfs.fat that partition so the EFI firmware was happy). I think on Debian-based systems you have to work around it by tweaking GRUB; you may want to post this as a separate question, as people with more mdadm experience may chip in.
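
If you want the second disk bootable, a crude but common approach is simply cloning the ESP and registering a second UEFI boot entry; the partition number and loader path below are assumptions, check yours with lsblk and efibootmgr -v first:

Code:
# assuming the ESP is partition 2 on both drives; verify before copying!
dd if=/dev/nvme0n1p2 of=/dev/nvme1n1p2 bs=1M
# register a second boot entry pointing at the clone (the loader path depends on your install)
efibootmgr --create --disk /dev/nvme1n1 --part 2 --label "proxmox-disk2" --loader '\EFI\debian\grubx64.efi'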

For me it's been mostly ZFS and lately even BTRFS; I prefer LUKS below them, in fact on full drives where it makes sense (since I cannot do SED). I still like LVM for modularity's sake. You can really play around with it and tweak it later. Just be aware that ZFS (and BTRFS) are conceptually different from "normal" filesystems, and ZFS will now also be eating some RAM. You really have to use it for a while and see for yourself.
 
Hey,

no, the key for the ZFS is located on the RAID1 OS partition, which itself has to be decrypted during startup, e.g. via Dropbear. Works fine! ;)

UEFI/EFI
Yes, I have exactly the same two questions. Firstly, how do I get the EFI partition onto the second hard disk? It is there but not mounted. Do I have to do anything else? The server definitely boots via UEFI.

And how would you do it with the ZFS on the Luks? In order for the ZFS to be loaded in Proxmox, the two Luks partitions must be decrypted with cryptsetup luksOpen when restarting.

Code:
sudo cryptsetup luksOpen /dev/nvme1n1p4 block1

How else would you mount/open the two partitions on restart/boot so that ZFS works?

Can I do the cryptsetup luksOpen without specifying the "mount point" (i.e. the mapper name)?

Like this?
Code:
sudo cryptsetup luksOpen /dev/nvme1n1p4



Two more questions:
- Can I run the cryptsetup luksOpen (on boot) as described above using UUIDs and without specifying a mount point? Like this:
Code:
sudo cryptsetup luksOpen UUID1
sudo cryptsetup luksOpen UUID2

- Should I specify the UUIDs of the two partitions when creating the ZFS pool? (If so, how do I get them out?) Like this:
Code:
zpool create -f -o ashift=12 data mirror UUID1 UUID2
 
