[TUTORIAL] EXPERIMENTAL HOWTO: Proxmox 9.1 on Debian 13 with full* BTRFS RAID1, Secure Boot, LUKS root, and single password remote unlock (TPM coming soon...?)

ToServeMan

This started as a personal project, but I decided to turn it into a guide so no human being ever, EVER has to go through this discovery process again. There are some other guides that do some of this, but I didn't find any that do all of it. Along with many hours of trial and error, I've stolen from so many sources for this guide that I've lost track of them. My thanks and apologies to the giants whose shoulders I've stood on.

DISCLAIMER - READ THIS

This guide has rough edges. I'm sure it has some redundancies and could be made much more efficient. The setup has some pretty important limitations. I strongly recommend reading the entire guide before deciding that you want to do it, especially the section on recovering a RAID. It's not trivial and will probably require you to take your system offline.

This is an unofficial, highly experimental setup. Now or in the future, it may destroy your Proxmox install, your VMs, your critical data, and possibly your very soul. I just started using Proxmox last week and I honestly have no idea about some of the long-term implications of this. This is NOT production ready. By following this guide, you agree not to sue me if something goes horribly wrong. I doubt I need to say this, but this process will erase any data on your drives.

This guide assumes you have a general working knowledge of Linux. You should be at least somewhat familiar with manual drive partitioning, BTRFS, RAID levels, LUKS, secure boot, initramfs, static IP vs. DHCP, SSH, and similar concepts. I'll try to explain things as I go, but this is aimed at intermediate to advanced users. It's assumed that you're capable of recognizing when a command needs to be adapted to your own setup (drive paths, for example). Support will be provided on an "I may get around to it eventually" basis.

KNOWN LIMITATIONS

Not true full disk encryption because the boot partition will be unencrypted (typical for LUKS root, still pretty good).
Haven't gotten TPM working yet.
Recovering a degraded RAID is a bear.

GOAL

By the end of this guide you will have:
Proxmox
Full data redundancy* on all partitions
Encrypted root partition (but not boot partition, so not "real" FDE)
Secure boot (no TPM unlock... yet)
Remote drive unlocking (with one password)
Emergency recovery keys for LUKS (optional)

PROCESS OUTLINE

Install Debian
... with a custom encrypted partition scheme
Get Remote Access
Install Proxmox
Configure Redundancy
Enable Secure Boot
Optional Extras

REQUIREMENTS

Debian 13 boot media
2 boot drives (preferably identical)
Relatively recent hardware (for TPM)
Physical (or OOB) access to the machine you're working on

INSTALLING DEBIAN - Part 1

I'd recommend starting with Secure Boot disabled; I've had a few weird issues installing with it enabled. We'll enable it again later on. It's hard to advise specific steps on this because every motherboard is different. If you can't find your secure boot settings in the BIOS, try creating an admin password and check again. If that doesn't work, also try creating a user password. Sometimes this will make the secure boot settings reveal themselves.

Start up the Debian installer in Expert Mode. Proceed as normal until you get to partitioning. For this to work, we'll need to do some manual partitioning.

DRIVE PARTITIONING

This is not the only partition scheme that would work, but it's the one that worked for me. The guide assumes you're using this. For BTRFS mount options, I chose these.

noatime - Depends who you ask, but...
ssd - You're using one, right?
discard - This may have security implications but I personally think they're pretty low on the threat list in the grand scheme of things. https://wiki.archlinux.org/title/Dm...ard/TRIM_support_for_solid_state_drives_(SSD)
compression - The Debian default seems fine, but you may have boot issues if you change the level/algo.

Now for the actual partitioning. Clear the first partition table and select GPT for your new type. Lay out your first drive as follows.

P1: 1GB, EFI system partition
P2: 1GB, BTRFS, /boot mount point
P3: remaining space, physical volume for encryption, disable erase data flag if using SSD

Partition the second drive identically, but this time set partition 2 mount point to none. Once that's done, select Configure Encrypted Volumes and Finish. To set them up you'll need to set your new boot decryption password. The passwords for both drives should be identical. Once you do that you'll see two new entries in the partitioner. These are your decrypted root partitions. Set them both to BTRFS with the previously mentioned mount options. The first one should mount to /, the second should have an empty mount point.

Write partitions to disk. You'll get some warnings about two of the partitions not having mount points, and about not having swap set up. These are expected and can be safely ignored. At this point, you're operating exclusively off the first drive. Debian's installer can't automatically handle the redundancy for us, so we'll set it up manually in a later step.

INSTALLING DEBIAN - Part 2

Proceed through the rest of the Debian install as normal. Install GRUB as your bootloader, not systemd-boot. The only other thing that tripped me up is that if you don't select the network install for packages, the Debian repos won't be automatically set up. The rest of the options should either have safe defaults or be reasonably self explanatory.

You may wish to set up an SSH key before you hit the finish button. /target is your virtual root, so put your public key in /target/root/.ssh/authorized_keys and chmod it 600. You can also do this after reboot if you prefer.
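
If you go the pre-reboot route, a minimal sketch from a shell inside the installer looks something like this. The source path for the public key is just an example; use wherever you've actually staged it (USB stick, typed in by hand, etc.).

mkdir -p /target/root/.ssh
cat /media/usb/id_ecdsa.pub >> /target/root/.ssh/authorized_keys   # example path, adjust to your setup
chmod 700 /target/root/.ssh
chmod 600 /target/root/.ssh/authorized_keys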

After reboot, you'll need physical access to unlock your root partition. Once we're in, that's the first thing we'll fix.

INSTALL DROPBEAR

Next we're going to set up dropbear. Dropbear is a mini SSH server that can be put into your initramfs, so you can connect through SSH to unlock your LUKS root remotely. First, from a CLIENT computer, create an SSH key. Note that ed25519 type keys (the default for newer ssh-keygen) WILL NOT WORK. You'll need either ecdsa or rsa. If your usual key is already one of those types, you can just use that.

ssh-keygen -t ecdsa

Now, back on the server, update everything through apt if you haven't, then grab dropbear.

apt update
apt upgrade
apt install dropbear-initramfs

You'll see a warning about an invalid key file. This is basically just warning that there isn't a public key installed to dropbear. Let's fix that. Get your RSA or ECDSA public key to the following location, either through scp or copy+paste or whatever.

nano /etc/dropbear/initramfs/authorized_keys
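
If you'd rather scp it than paste it, something like this works (the IP and key name are examples; adjust to your setup):

# from the client
scp ~/.ssh/id_ecdsa.pub root@192.168.1.101:/tmp/
# on the server
cat /tmp/id_ecdsa.pub >> /etc/dropbear/initramfs/authorized_keys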

Then run these setup commands.

echo 'echo "IP=192.168.1.101::192.168.1.1:255.255.255.0::eno1:off" > /etc/initramfs-tools/conf.d/dropbear-networking' > /etc/initramfs-tools/hooks/dropbear-networking-hook
OR
echo 'echo "IP=::::$(hostname)-initramfs:eno1:dhcp:::" > /etc/initramfs-tools/conf.d/dropbear-networking' > /etc/initramfs-tools/hooks/dropbear-networking-hook

sed -i 's/^#*DROPBEAR_OPTIONS=.*/DROPBEAR_OPTIONS="-p 22 -I 600 -j -k -s -c cryptroot-unlock"/' /etc/dropbear/initramfs/dropbear.conf
chmod 600 /etc/dropbear/initramfs/authorized_keys
chmod +x /etc/initramfs-tools/hooks/dropbear-*
update-initramfs -u
update-grub

(For some reason, I've sometimes had to run this set of commands twice or dropbear won't pick up the network on boot. Not re-run after reboot, just run twice. My best guess is that mkinitramfs reads the conf.d files before it runs the hooks, so the networking file the hook writes only gets picked up by the next initramfs build. If a clever commenter knows better, I'm all ears...)

Let's look at a couple of these less obvious commands and see what they do.

# echo 'echo "IP=::::$(hostname)-initramfs:eno1:dhcp:::" > /etc/initramfs-tools/conf.d/dropbear-networking' > /etc/initramfs-tools/hooks/dropbear-networking-hook

This command creates an initramfs hook to configure dropbear's networking. The first version is for static, the other is for DHCP. I like to live dangerously, so I use the DHCP version. Works fine. In either case, you will need to replace eno1 with the name of your desired interface if you have multiple NICs. If you only have one NIC, the eno1 can be removed (untested).

For DHCP specifically, you can also set the hostname. Technically this can be anything. My regular hostname is "pve", so for initramfs this will set it to "pve-initramfs". I made it different from the main hostname because if you use a different key for regular SSH than you do for dropbear, your SSH client will complain about a key mismatch. This way I don't have to mess with that. This is set up as an initramfs hook in case the hostname ever changes. In that case, just regenerate your initramfs and it will automatically adapt to the new one.
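
If you want to sanity check that the hook actually fired and that everything made it into the image, this is roughly what I'd look at (the path assumes you're checking the initramfs for the currently running kernel):

cat /etc/initramfs-tools/conf.d/dropbear-networking
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i dropbear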

# sed -i 's/^#*DROPBEAR_OPTIONS=.*/DROPBEAR_OPTIONS="-p 22 -I 600 -j -k -s -c cryptroot-unlock"/' /etc/dropbear/initramfs/dropbear.conf

This sets your actual dropbear options.

-p 22 sets the standard SSH port. I don't think there's much point in port obfuscation on something like this, and it doesn't conflict with the regular OpenSSH port once the system is booted.
-I is a timeout (in seconds) after which dropbear will disconnect. This is optional and I gave it a pretty generous timer.
-j disables local port forwarding for security.
-k disables remote port forwarding for security.
-s restricts logins to key only, no password.
-c cryptroot-unlock restricts the shell to automatically unlocking crypt devices.
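
Once it's all working, the remote unlock from a client looks roughly like this (hostname/IP and key path are whatever you set up above; the DHCP hostname only works if your DNS actually resolves it, and the exact prompt wording may vary):

ssh -i ~/.ssh/id_ecdsa root@pve-initramfs
# or, for the static version:
ssh -i ~/.ssh/id_ecdsa root@192.168.1.101
# the forced cryptroot-unlock command asks for the LUKS passphrase, then boot continues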

(One other caveat. Sometimes I have to connect to dropbear twice because the first one can't find a route. Running it again immediately works. It has nothing to do with how long I've waited for it to start. I suspect it's DHCP related. Don't know why dropbear keeps wanting me to do things twice...)

At this point, reboot to check your work. If all is well and you can SSH in to unlock your crypt volume and boot normally, proceed to the next step.

INSTALL PROXMOX

Now it's time to convert your regular old Debian install into Proxmox. The official documentation for Proxmox installation is quite good and straightforward, so I won't rehash it here. No modifications are necessary, or at least none that I've found so far.

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_13_Trixie

(This will take several posts. Please hold all comments until the end...)
 
(... continued from previous post.)

CREATING MIRRORS

Once that's working, it's time to set up your partition mirrors. For all of these commands, obviously replace your partition paths as necessary.

Verify your active mount points for EFI, boot, and cryptroot.

mount | grep nvme

Sometimes I've seen the partitions show as mounted from the wrong source device, that is, from drive 1 rather than drive 0. I'm not sure why this happens; I suspect the BIOS doesn't always present the disks in a deterministic order. A reboot usually fixes it. Once RAID is set up it won't be a problem.
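
As a convenience, this shows exactly which device is backing each mount point (same information as the grep above, just easier to read):

for m in / /boot /boot/efi; do findmnt -no TARGET,SOURCE "$m"; done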

If that's square, let's proceed.

Mirroring Root

These commands open the root LUKS volume on the second drive, add it to the device pool for root, then set it to RAID1. Then verify the status.

cryptsetup luksOpen /dev/nvme1n1p3 nvme1n1p3_crypt

btrfs device add /dev/mapper/nvme1n1p3_crypt / -f
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

btrfs balance status -v /
btrfs filesystem show /
btrfs filesystem usage /

For the status command, you should see "no balance found", which means it's done. Then you can look over the RAID to make sure everything looks right.

Now we'll do the same for the boot partition, but we don't need to run cryptsetup here since boot isn't encrypted.

Mirroring Boot

btrfs device add /dev/nvme1n1p2 /boot -f
btrfs balance start -dconvert=raid1 -mconvert=raid1 /boot

btrfs balance status -v /boot
btrfs filesystem show /boot
btrfs filesystem usage /boot

Mirroring EFI (sort of)

We can't live mirror an EFI partition directly. Fortunately, Proxmox has us covered and lets us do a sort of offline mirror. proxmox-boot-tool hooks into initramfs generation to sync the contents of EFI partitions. All we need to do is add both EFI partitions to the configuration.

umount /boot/efi
proxmox-boot-tool init /dev/nvme0n1p1 grub
proxmox-boot-tool init /dev/nvme1n1p1 grub
proxmox-boot-tool refresh

proxmox-boot-tool status

There are a couple more things we need to do before rebooting. We need to make some manual changes to fstab and crypttab so the root partitions mount correctly.

Let's get the UUID of the mapped LUKS volume.

blkid -o value -s UUID /dev/mapper/nvme0n1p3_crypt

We need to update /etc/fstab to use the UUID rather than the device path. Both members of a BTRFS RAID1 share the same filesystem UUID, so mounting by UUID lets the system assemble the filesystem from whichever device it sees, rather than depending on one specific mapper path.

nano /etc/fstab

Remove the LUKS device path, and replace it with UUID=xxx where xxx is the UUID you just got.
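
As a rough before/after sketch of the root line (your mount options, subvolume, and UUID will be whatever your install actually wrote; only the first field changes):

# before
/dev/mapper/nvme0n1p3_crypt  /  btrfs  noatime,ssd,discard,subvol=@rootfs  0  0
# after
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  noatime,ssd,discard,subvol=@rootfs  0  0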

Next, we'll need to add the second drive to /etc/crypttab. Get the UUID of the second physical root partition.

blkid -o value -s UUID /dev/nvme1n1p3

Get into /etc/crypttab.

nano /etc/crypttab

Add a second line that's a copy of the first. For that copy, change the first field to the mapper name for the second LUKS device rather than the first, and change the UUID to the one we just got for the physical root partition on the second drive.

At the end of each entry, we need to add another option. This will make it so we only have to enter the key once to unlock both drives. Please read this for potential security concerns. https://github.com/gebi/keyctl_keyscript

keyscript=decrypt_keyctl
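
For reference, the finished /etc/crypttab should end up looking something like this (the UUIDs here are placeholders for the ones blkid gave you; keep whatever other options the installer already put there):

nvme0n1p3_crypt UUID=aaaaaaaa-1111-2222-3333-444444444444 none luks,discard,keyscript=decrypt_keyctl
nvme1n1p3_crypt UUID=bbbbbbbb-5555-6666-7777-888888888888 none luks,discard,keyscript=decrypt_keyctl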

We'll create a small initramfs script to clear the cached LUKS key after use. initramfs-tools only runs scripts that are executable, so give it a shebang and the executable bit. (I'm somewhat taking a guess on the best place to run this one. It doesn't break anything but I'm not exactly sure how to verify that it works as advertised.) Then, rebuild the boot initramfs.

echo "keyctl clear @u" > /etc/initramfs-tools/scripts/init-bottom/clear-luks-key

update-initramfs -u
proxmox-boot-tool refresh

Reboot to test. You may notice dropbear only shows one partition being unlocked. This is an interface limitation of keyctl. If you check the physical console, you can see the other one being unlocked with the cached key. But let's check the status of the RAID just for fun.

btrfs filesystem usage /
btrfs filesystem usage /boot

OPTIONAL EXTRAS

Let's set up some LUKS recovery keys. This is mandatory if you want to try using TPM, but optional otherwise. If you're using TPM unlocking and something goes wrong with your secure boot, this may well be the only way to get at your data. For non-TPM, it's merely nice to have. With this setup, each drive will have a different recovery key. Alternatively, you can manually generate a key and set it up on both. Your call.

systemd-cryptenroll /dev/nvme0n1p3 --recovery-key
systemd-cryptenroll /dev/nvme1n1p3 --recovery-key

These keys can be used to unlock your encrypted root, so protect them accordingly.
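
If you want to confirm a recovery key actually opens the header without rebooting, cryptsetup can test a passphrase without mapping anything (exit status 0 means the key was accepted):

cryptsetup open --test-passphrase /dev/nvme0n1p3
cryptsetup open --test-passphrase /dev/nvme1n1p3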

You can optionally harden your system against brute force attacks on your LUKS key by increasing your iterations. The Debian installer defaults to (I believe) 1000ms. This is fine for most uses, but increasing it can give you a little extra protection. I tried setting it to 20 seconds (massively overkill) just to see what would happen. Dropbear timed out but the drives eventually unlocked just fine. For general usage I would probably not recommend higher than 5 seconds as it seems to confuse some tools. This example uses 3 seconds.

cryptsetup luksChangeKey /dev/nvme0n1p3 --iter-time 3000
cryptsetup luksChangeKey /dev/nvme1n1p3 --iter-time 3000
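
You can eyeball the before/after key derivation settings with luksDump; for LUKS2 the relevant bits are the PBKDF parameters in each keyslot:

cryptsetup luksDump /dev/nvme0n1p3 | grep -iA4 pbkdf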

If everything looks good, enable secure boot in your BIOS. Note that some uses of Proxmox are not compatible with secure boot. I'm not yet very familiar with these. Secure boot without TPM is limited but you might as well use it unless it's causing problems. I haven't had any issues so far, but I'm not doing anything particularly unusual (the irony).

You're done! You now have root encryption, full redundancy, and remote unlocking. You can now configure Proxmox as normal. This setup is probably fine for 99% of people. Because I'm a bit of a nut, I also tried to get TPM working, but so far I haven't been successful. If you're interested in the attempt, read on.

The following section SHOULD NOT BE FOLLOWED except for research purposes. It's composed entirely of things that DON'T work in hopes that somebody smarter than me might read it and know the answer. Everything here is HEAVILY theorycrafted and should be taken with a giant, heaping bowl of salt. YOU HAVE BEEN WARNED.

Why would we even want TPM? Well, a few reasons. First, it gives us much more protection against brute force attacks against the LUKS password, especially for shorter ones. Second, it (potentially) protects against initramfs tampering, which is probably the single biggest threat to encrypted root. Compared to LUKS only, this offers much stronger protection against an attacker with physical access.

In an ideal setup, what I would want to see is initramfs asking for a TPM PIN and then using it to unlock all relevant drives, with a recovery key as a fallback. As mentioned, I did not achieve this.

TPM can offer "secure" passwordless boots to an encrypted root, which is probably the most common use, but I wouldn't recommend this even if I had it working. There was a fairly troubling article about a year ago on how Linux generally doesn't implement initramfs protection correctly when going passwordless. The author recommends PIN as a defense.
https://oddlama.org/blog/bypassing-disk-encryption-with-tpm2-unlock/

I fully intended to include a working TPM setup as part of this guide. I've learned far more about it in the past few days than I ever wanted to. After many hours of work I've determined that I've reached the limits of either my skills, my tools, or both. I've gotten TPM root unlock working before on Arch with a systemd bootloader, which leads me to believe that the problem is with initramfs and/or GRUB.

We'll need a couple additional packages for this.

apt install clevis clevis-luks clevis-tpm2 clevis-initramfs tpm2-tools

Clevis automatically hooks into initramfs when it rebuilds to enable TPM boots. We will not be using it to actually set up the keys. Read on for why.

Make sure you have recovery keys set up, then set up your TPM+PIN keys. (PIN is misleading. It can and should be a complex password.)

systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7+9+14 --tpm2-with-pin=yes /dev/nvme0n1p3
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7+9+14 --tpm2-with-pin=yes /dev/nvme1n1p3

(Note that systemd-cryptenroll wants the PCR list "+" separated. On older hardware, the TPM may only have a SHA1 PCR bank rather than SHA256, so adjust accordingly.)
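
Either of these will show what actually got enrolled in each LUKS header (running systemd-cryptenroll with no enrollment options just lists the slots):

systemd-cryptenroll /dev/nvme0n1p3
cryptsetup luksDump /dev/nvme0n1p3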

So what's the deal with PCRs? What do the numbers mean? Well, to massively oversimplify, these settings tell secure boot what parts of the system it should check for tampering and decide whether to raise the alarm and fail the boot. Your decision of which ones to use greatly affects your level of security. I've seen a lot of different guides make wildly different recommendations here, most of them with absolutely no explanation for their choices. I'll do my best to explain why I've chosen these. They are untested and may be completely wrong.

0 - Measures the core system firmware, so a BIOS upgrade will change it. Remove the binding before upgrading (or re-enroll afterwards), I think?
7 - Default; measures the Secure Boot state.
9 - Based on what I've read, I believe the importance of PCR 9 is wildly underestimated for protecting the initramfs. However, I think it may cause problems with updates, requiring manual resealing. https://blog.securityinnovation.com/preventing-initramfs-attacks-tpm
14 - Recommended by the man page. Works with the shim. Probably won't work with custom keys.

I've tried to provide sane recommendations that make a reasonable tradeoff between security and convenience. However, TPM is complex and there's not necessarily a one size fits all solution here. The man page for systemd-cryptenroll has a pretty good summary of what common ones do and some recommended settings, plus another good article I found.
https://dyn.manpages.debian.org/trixie/systemd-cryptsetup/systemd-cryptenroll.1.en.html
https://fedoramagazine.org/automatically-decrypt-your-disk-using-tpm2/

After all this setup, it doesn't look like initramfs is even trying to access the keys. I see a reference in journalctl to it failing an integrity check, but based on the timestamp this was generated by the main system AFTER initramfs had finished, not at the beginning of initramfs booting, as I would expect if it were a failure of initramfs to access the TPM. I start seeing this VERY early in the configuration process, before any TPM LUKS keys are set up. I also see it with or without secure boot enabled. It LOOKS like it might be choking on one of the PCRs, but I'm not sure I understand which or why. I also see similar (but not identical) logs on my Arch system, which boots TPM LUKS just fine.

Feb 01 19:48:48 proxmox kernel: tpm_crb MSFT0101:00: Disabling hwrng
Feb 01 19:48:48 proxmox systemd[1]: systemd-tpm2-setup-early.service - Early TPM SRK Setup was skipped because of an unmet condition check (ConditionSecurity=measured-uki).
Feb 01 19:48:48 proxmox systemd[1]: systemd-tpm2-setup.service - TPM SRK Setup was skipped because of an unmet condition check (ConditionSecurity=measured-uki).
Feb 01 19:48:48 proxmox systemd[1]: Reached target tpm2.target - Trusted Platform Module.

All of this leads me to believe that there's some trick for directing initramfs to try TPM keys, but I haven't been able to find it.

Could we use Clevis? Short answer: Not really.

Long answer: It works, kind of, in the sense that it boots and opens LUKS root without a password... but it is NOT secure, and therefore I will not be including specific commands. If you really want them, they're not hard to find.

Clevis doesn't work the way it appears at first glance. It doesn't install a TPM-unlockable key to a LUKS volume. Also, Clevis documentation uses the term "pin" in an idiosyncratic and potentially misleading way (which could still be technically correct, I'm not sure). When I think "LUKS+TPM+PIN", I think "LUKS has a keyslot bound to the TPM, which requires me to enter a passphrase on boot that then releases the key from the TPM and opens the LUKS volume". So, like Bitlocker does it. In fact, if you bind Clevis to a LUKS partition and dump the header, you'll see that it actually creates a LUKS key that is NOT TPM based. Further, official Clevis documentation openly states that binding to the local TPM shouldn't be considered secure. From the man page: https://man.archlinux.org/man/clevis-encrypt-tpm2.1

"The Clevis security model relies in the fact that an attacker will not be able to access both the encrypted data and the decryption key.

For most Clevis pins, the decryption key is not locally stored, so the decryption policy is only satisfied if the decryption key can be remotely accessed. It could for example be stored in a remote server or in a hardware authentication device that has to be plugged into the machine.

The tpm2 pin is different in this regard, since a key is wrapped by a TPM2 chip that is always present in the machine. This does not mean that there are not use cases for this pin, but it is important to understand the fact that an attacker that has access to both the encrypted data and the local TPM2 chip will be able to decrypt the data."

How hard would this be to actually pull off? I'm not entirely sure. Based on the Oddlama blog entry above it sounds pretty doable, and he mentions Clevis specifically being vulnerable.

If you REALLY hate entering a password on boot, this would probably deter a casual attacker. I still believe it would be strictly inferior to regular LUKS+password against a skilled attacker. At least in that case you actually have to enter the password for it to be stolen...

That said, Clevis DOES have an advantage in that it (allegedly) hooks into initramfs regeneration to reseal your PCRs automatically, which may be necessary if some of them are enabled. So it would probably be useful even without the specific bindings, if you could get regular LUKS+TPM+PIN working.

I tried dracut but wasn't able to get it to work. See the FAQ for more info.

Some guides suggest putting tpm2-device=auto as a parameter in /etc/crypttab, but initramfs doesn't recognize it. Some distros list this option in their crypttab man pages; Debian is not one of them.

I also tried a bunch of different suggested entries in /etc/initramfs-tools/modules. No dice so far.

Bottom line, I don't know quite what to do to get TPM working. Kowalski, options?

(Okay, one more...)
 
(... last one, I swear!)

RESTORING A DEGRADED RAID

They say nobody wants backups, they only want restores. What should you do if you lose a drive? How can we restore RAID? Well... it's a bit of a job.

To find out, I simulated a drive failure by dd wiping the first handful of sectors on one of the drives I'm using. Here's how I fixed it.

The mirroring, in and of itself, does work. For the most part, these steps are the same as you would use for any similar partition and RAID scheme, but here's why I keep putting an asterisk next to RAID...

This setup will not boot to a degraded RAID.

When initramfs fails to decrypt one of the drives, it freaks out and drops to an emergency shell. You can add the "degraded" and "nofail" flags to /etc/fstab, but that won't help. The problem is initramfs. Debian's initramfs does not recognize the "nofail" option in /etc/crypttab. This option shows up in the crypttab man pages for other distros, but not Debian's. If you try building with this option, initramfs will warn that it's not valid. If it worked, it would allow the system to boot even when initramfs fails to unlock a crypt device. I haven't found a way to do this yet on Debian (dracut?).

So you're going to need physical access to your machine for this. Which you'll need anyway to replace the drive. We're going to boot from Debian live media in rescue mode. Make sure the hostname you set matches the usual one for this system, because initramfs and dropbear will rebuild from it if you've been following along. Otherwise, proceed normally until you unlock your crypt root drive, then hit Ctrl-Alt-F2 to drop to a shell.

The following steps assume that drive 0 is good and drive 1 is bad. Adapt accordingly and be VERY careful. A typo in these could result in total data loss.

Copy the partition table from the working drive to the new one and randomize the IDs.

sfdisk --dump /dev/nvme0n1 > part_table.dump
sfdisk /dev/nvme1n1 < part_table.dump

sfdisk --disk-id /dev/nvme1n1 $(cat /proc/sys/kernel/random/uuid)
sfdisk --part-uuid /dev/nvme1n1 1 $(cat /proc/sys/kernel/random/uuid)
sfdisk --part-uuid /dev/nvme1n1 2 $(cat /proc/sys/kernel/random/uuid)
sfdisk --part-uuid /dev/nvme1n1 3 $(cat /proc/sys/kernel/random/uuid)

Format and open a secondary crypt root.

cryptsetup luksFormat /dev/nvme1n1p3
cryptsetup luksOpen /dev/nvme1n1p3 nvme1n1p3_crypt

Format the new partitions. We'll deal with EFI later.

mkfs.btrfs -f /dev/nvme1n1p2
mkfs.btrfs /dev/mapper/nvme1n1p3_crypt

Mount the good crypt root. The "degraded" option lets you mount a broken RAID. (You may have read some warnings around the internet that a degraded BTRFS RAID1 can only be mounted safely once. This is no longer true.)

mount -o noatime,ssd,discard,compress,degraded,subvol=@rootfs /dev/mapper/nvme0n1p3_crypt /mnt
btrfs filesystem show /mnt

We can see that device 2 is missing, so let's replace it with the new one.

btrfs replace start 2 /dev/mapper/nvme1n1p3_crypt /mnt -f
btrfs replace status /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

btrfs balance status -v /mnt
btrfs filesystem usage /mnt

Now let's do the same for /boot.

mount -o noatime,ssd,discard,compress,degraded /dev/nvme0n1p2 /mnt/boot

btrfs filesystem show /mnt/boot

btrfs replace start 2 /dev/nvme1n1p2 /mnt/boot -f
btrfs replace status /mnt/boot/
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/boot

Again, don't do anything with EFI at this point. I will explain shortly.

Replace the old /etc/crypttab entry with the UUID of the new drive's physical partition.

blkid -o value -s UUID /dev/nvme1n1p3

nano /mnt/etc/crypttab

Chroot into the system and regenerate the initramfs.

for name in proc sys dev ; do mount --bind /$name /mnt/$name; done
chroot /mnt

update-initramfs -u

You will probably see an error from proxmox-boot-tool. This is expected because we haven't cleared the old EFI partition or set up the new one. I tried remirroring the EFI partition from the chroot, but proxmox-boot-tool init always complains that the partition isn't properly formatted, even when proxmox-boot-tool itself just formatted it. Don't worry, we still have the one on the good drive, and that's enough to boot into the main system. Reboot and your drives should unlock fine and boot normally.

Once booted, we'll fix EFI. Remove the bad entry from the relevant proxmox config file, initialize the new EFI, and refresh grub.

nano /etc/kernel/proxmox-boot-uuids

proxmox-boot-tool init /dev/nvme1n1p1 grub
proxmox-boot-tool refresh

Verify the mirrors.

btrfs filesystem usage /
btrfs filesystem usage /boot
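
As an optional extra check (my habit, not strictly required), a scrub will read every copy of every block and flag anything that fails its checksum:

btrfs scrub start /
btrfs scrub status /
btrfs scrub start /boot
btrfs scrub status /boot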

You're done! Your mirrors have been restored.

FAQ

Q: Does this setup actually work?
A: So far, yes! ... except for booting from a degraded RAID. And TPM. This setup mostly works. I haven't done heavy testing of the boot security, but the basics definitely work. I've installed VMs on it and used HBA passthrough successfully.

Q: Could this unconventional, unofficial, and entirely unsupported configuration have unforeseen side effects in the future?
A: Potentially. I only started using Proxmox recently, so I'm not really sure if it could cause problems with major upgrades, etc. Like I said, consider it experimental.

Q: Can I use BTRFS RAID5/6?
A: No. The BTRFS devs themselves strongly discourage using BTRFS RAID5/6 for data integrity reasons.

Q: BTRFS boot partitions? Are you nuts!?
A: I believe both my friends and enemies would say yes. Seriously though, I explored several options here, and in the end I really think BTRFS RAID1 is most functional and easiest to set up. You COULD probably use LVM/mdadm for redundancy, but if we're using BTRFS anyway...

Q: Why not ZFS?
A: Great question! Couple reasons. The Debian installer can't handle ZFS without a LOT of manual fiddling. Converting the Proxmox appliance wouldn't work well either: it uses systemd-boot as the bootloader on a ZFS root, and according to official Proxmox documentation, systemd-boot doesn't play nice with secure boot. Finally, I am simply much more personally familiar with LUKS+BTRFS root than I am with encrypted ZFS root.

Q: No swap?
A: Well, the official Proxmox appliance doesn't use one either... You could probably set up mirrored encrypted swap with mdadm if you really wanted to, but I think it's more trouble than it's worth. I've also read that it's possible to create a swap file within a BTRFS volume, but I haven't tried it. If you really need swap I'd probably look into that. I know RAM prices are through the roof right now, but I feel like if your hypervisor is swapping, something has gone horribly wrong.

Q: How secure is this?
A: It's more than enough to keep out the vast majority of attackers, but I don't guarantee it against TLAs. I'm not a security specialist, I'm just some guy. It's very possible there are configuration issues here I haven't thought of, which I'm sure will be graciously pointed out by commentators. I think the most productive type of attack against something like this would probably be installing a compromised initramfs that includes a keylogger for the LUKS password. In the absence of TPM, verity might be useful to protect the boot partition (or at least make it tamper evident), but I don't have much experience with it. Still, this type of setup is probably the best you can do as a private individual. If you're facing an adversary who can defeat this, I suspect you have bigger problems.

Q: What about dracut? Could that solve the problem with booting from a broken RAID, and possibly TPM as well?
A: Maybe... if I could get the stupid thing working. Even switching to dracut quite early in the setup, I wasn't able to get a dracut generated initramfs to boot at all. It always timed out when trying to load initramfs. I'll be honest, this was one of the last things I tried before finalizing this post and I was kind of at the end of my rope at that point. I didn't spend a lot of time on it. I may take another swing at dracut eventually. I do believe that, combined with certain mount options, dracut would allow the system to boot from a degraded RAID. I'm not sure about TPM though. Seems like most dracut guides use Clevis to bind the key, anyway. I will also note that the Debian repo does not include the SSH module for dracut, though it could be installed manually.
https://www.thomasweigold.de/blog/2025/configure_dracut_network_ssh/

CLOSING

I hope this guide helped or inspired somebody out there. I hope to improve it in the future, but at this point I want to get some feedback before doing anything else. Plus at some point I do have to start using this server as, you know, an actual server. I accept all commentary, including suggestions, constructive criticism, and especially corrections.

Enjoy your encrypted RAID!

"We do these things not because they are easy, but because we thought they would be easy." - John F. Kennedy, 1962 (probably)