How to Install Boot RAID-1 Without proxmox-boot-tool/systemd-boot?

My two possible pve hosts are incompatible with proxmox-boot-tool, resulting in boot RAID-1 corruption and failure to boot. I am not seeking to discuss the bug; I have already spent 6 days with little sleep to be able to reproduce it 100% of the time.

Instead, I am seeking a pve installer option that allows me to install a boot RAID-1 without the use of proxmox-boot-tool. Unfortunately, the only two remaining RAID-1 options that the pve installer offers nowadays are ZFS and btrfs, both of which use proxmox-boot-tool. A potential solution might be to create an md0 by hand first, so:

* Do there exist (potentially undocumented) installer options to create a boot RAID-1 from two ext4 drives, as used to be possible?
* If no such options exist anymore, can someone please describe a manual procedure? I don't see a way to execute the necessary commands in the installer before it presents the user with the current standard list of boot options.

Thank you for any and all constructive suggestions that move me towards this goal.

Best,
- Lucky
 
You could try to install Debian 12 and then add Proxmox afterwards. I can't tell you whether that pulls in proxmox-boot-tool, but you can always try:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
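For reference, that route boils down to something like this (a rough sketch only; the linked wiki is authoritative and also covers key verification, the Proxmox kernel, and /etc/hosts setup; this assumes a fresh Debian 12 install that already boots from your md RAID-1):

Code:
# add the Proxmox VE no-subscription repository
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list

# fetch the Proxmox release key
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
    -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# update and install Proxmox VE on top of Debian
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi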
Appreciate the suggestion. I was hoping for another solution that only requires the Proxmox installer. Alas, first installing Debian is the path that I will be taking now. There appears to be no way to get today's Proxmox installer to create an LVM boot RAID-1 out of two ext4 or XFS drives anymore.

Thank you Proxmox for removing the only pve installation options that will not run into a boot RAID-1 data-corrupting bug that was first reported to you some 3 years ago and that I can reproduce with 100% reliability. If I weren't so burned out from spending over 12h per day for over a week now trying to find a workaround to the bug, I would update that old, abandoned bug report. But I just don't have it in me anymore to spend another minute on that nightmare. For the sake of my sanity, the focus has to be on finding a way to bypass the bug, not on adding more proof that a 3-year-old bug that doesn't show as fixed in the bug tracker, unsurprisingly, remains unfixed.
 
I might be wrong, but I'm quite sure that Proxmox VE never supported anything but ZFS or btrfs for software RAID:
https://pve.proxmox.com/wiki/Software_RAID
If you browse the page's history, you will see that mdadm was never supported and that software RAID support with ZFS or btrfs was only introduced years after the first versions of Proxmox VE. So there wasn't anything to remove in the first place.
 

Lucky:

What about your hosts makes them incompatible with proxmox-boot-tool? I am evaluating Proxmox PVE as a replacement for ESXi and would appreciate any insight or cautions you could throw my way.

In the past couple of weeks I've set up a 3-host cluster with ZFS RAID 1 on two boot devices; what sort of problems with this approach should I be looking for?

I will spin up a new virtual host and do some research to see if I can coax it to install on a software RAID with ext4...

EDIT:
I tried a new VM in VirtualBox with two hard drives. I booted into debug mode from the Proxmox ISO Advanced Options and was able to install mdadm and create a new mirror device before exiting the debug shell and proceeding with the Proxmox PVE installer. The installer only showed the two raw drives at /dev/sda and /dev/sdb as possible installation candidates, with the /dev/md0 device not shown. So no luck there...
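For anyone who wants to repeat the experiment, the debug-shell part was roughly the following (a sketch; /dev/sda and /dev/sdb are the two virtual disks of my test VM, and how you get mdadm into the live environment may differ):

Code:
# Proxmox ISO -> Advanced Options -> debug mode, then in the shell:
apt update && apt install mdadm      # install mdadm (it was not present in my live environment)

# build a RAID-1 mirror from the two disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

cat /proc/mdstat                     # check that the array shows up and is syncing
exit                                 # leave the debug shell, the installer continues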
 
I will spin up a new virtual host and do some research to see if I can coax it to install on a software RAID with ext4...
Most likely you will be fine, since reports like the one in the OP are actually quite rare.

Please note that using mdadm + ext4 for software RAID via the Debian route is neither recommended nor supported by the Proxmox developers, and you will also lose ZFS features such as snapshots, checksumming, and built-in replication.

LVM with ext4 or XFS is fine if you have HW RAID, since ZFS and HW RAID don't play nice together.
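If you do go the Debian + mdadm route regardless, it's worth verifying after the install that the mirror is healthy and checking which boot mechanism is actually in use, for example (assuming /dev/md0 as in the earlier posts):

Code:
cat /proc/mdstat            # quick overview of all md arrays and their sync state
mdadm --detail /dev/md0     # members, state and events of the boot mirror
proxmox-boot-tool status    # reports whether proxmox-boot-tool manages any ESPs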
 