As far as I know, proxmox-boot-tool is always used once you install PVE from a PVE 6.4+ ISO, no matter what type of storage you choose and no matter whether it boots via GRUB or systemd-boot.
You can't use mdadm RAID when installing PVE via the PVE ISO. So you would be forced to install Debian with its Debian bootloaders, and then of course PVE features like proxmox-boot-tool wouldn't be used.
Right, that is what I wrote: Debian 12.2 with an upgrade to PVE. You have to do a few special settings, but that's about it. After removing the Debian kernel you can use the PVE kernel with mdadm. A rough sketch of the steps is below.
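For anyone who wants to try it, this is roughly the procedure from the Proxmox wiki article "Install Proxmox VE on Debian 12 Bookworm"; check the current wiki before copying, since repo names and kernel versions change between releases:

```
# Add the Proxmox VE no-subscription repository and its key:
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# Pull in the PVE kernel and packages:
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi chrony

# Remove the Debian kernel so the system boots the PVE kernel,
# then refresh GRUB (the Debian bootloader stays in charge,
# which is why mdadm keeps working):
apt remove linux-image-amd64 'linux-image-6.1*'
update-grub
apt remove os-prober
```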
By the way: I do not recommend using md-raid on Proxmox at all, but you can even move a PVE ISO install onto mdadm.
In that case you need the same number of HDDs/SSDs as a standard PVE ISO install uses.
Anyway, my point was not to use mdadm for fun, or because I "think" it is better than ZFS (it's not). But if your hardware is not the right one for ZFS, ZFS gets expensive.
Losing a $12 SSD is not the same as losing an NVMe drive.
I prefer ZFS for PVE with the right hardware, of course. But if you don't have the right hardware, don't want to install two additional garbage SSDs (or industrial-grade ones), and just need to use your system as it is (a SOHO system), then md-raid on Debian upgraded to PVE is a great "middle way".
You mention recommendations for quite a few things. Who are you quoting as doing the recommending?
That depends on the context.
If you are answering in a thread and did not follow the topic during the conversation, that can happen.
I made a straightforward recommendation:
Don't use ZFS on SOHO hardware. If it has to be an SSD, it should be SLC or at least MLC; better yet, use an HDD for it.
If you are forced to use SOHO hardware, then don't run ZFS with full logging (see the sketch below for how to tune that down).
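To make that concrete: a minimal sketch of how I would reduce the ZFS write load on consumer SSDs. The pool name rpool is just the PVE default; weigh the risk yourself, because sync=disabled means you can lose the last few seconds of writes on a power failure:

```
# Stop updating access times on every read (cheap, safe win):
zfs set atime=off rpool

# Treat all writes as async -- big reduction in write amplification,
# but the last seconds of writes can be lost on a crash/power cut:
zfs set sync=disabled rpool

# Verify the settings:
zfs get atime,sync rpool
```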
Anyway: while some flash storage will only give you around 10,000 write cycles per cell, SLC will give you 100,000 to 1,000,000.
A single Intel 256 GB SLC SSD costs about $150-200.
And the guests? If they sit on a ZFS RAID1 (2x 2 TB Samsung) you will have to replace those drives soon, even with ZFS logging turned down to a minimum. You can watch the wear yourself (sketch below).
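How fast you actually burn through the cells depends on your workload and on ZFS's write amplification, so instead of guessing, watch the drive's own wear counters. A minimal sketch using smartctl (the device names are examples; attribute names vary by vendor):

```
# SATA SSD: on Samsung consumer drives look at Wear_Leveling_Count
# and Total_LBAs_Written in the attribute table:
smartctl -A /dev/sda

# NVMe SSD: "Percentage Used" in the SMART/Health log is the
# normalized wear indicator:
smartctl -a /dev/nvme0
```

The PVE GUI also shows a wearout column under Disks, which reads the same SMART data.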
Just saying: there is a real need for RAID1, and as more and more advanced home users run PVE, they will run into that problem by using the storage the wrong way!
And even if there is a risk with md-raid (read about the sync problem on this board or on Google), it is a nice middle way.
You can use your SOHO hardware with md-raid on Proxmox until the day you set everything up a better way (e.g. a small HA SAN serving iSCSI to all PVE systems in your network at 10 GbE, or just clustering with the right hardware in your PVE systems). A minimal setup sketch follows below.
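For reference, a minimal md-raid sketch; the device names /dev/sdb and /dev/sdc are placeholders for your real disks:

```
# Create a RAID1 array from two whole disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Persist the array so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Watch the initial sync (and any later rebuild):
cat /proc/mdstat

# Trigger a manual consistency check -- this is the safeguard for
# the "sync problem"; Debian also schedules a monthly checkarray run:
echo check > /sys/block/md0/md/sync_action
```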
Anyway:
1) You can use ZFS RAID1 with SOHO hardware, but it's not the right way.
2) You can use md-raid with your PVE, and it will never be the right way - but your hardware will survive much longer.
3) You can buy the right hardware for all guests to use ZFS, but you have to spend a lot of money.
4) Best is a dedicated SAN (HA, at least n+1, and a hardware RAID6 or RAID10 depending on what you need) serving iSCSI to all PVE, ESXi, or whatever virtualization hosts you like to use. That is of course the "high roller class", including managed switches (stacked, at least n+1, with jumbo frame support) - but then you will have no trouble with ZFS or whatever filesystem you want to use. Your iSCSI LUN is your "dedicated disk", and after mapping you can do everything with this storage as if it were installed locally (see the sketch after this list).
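To show what attaching such a LUN looks like on the PVE side, a minimal sketch; the storage ID, portal IP, and target IQN are made-up examples:

```
# Make the SAN's iSCSI target visible to PVE:
pvesm add iscsi san1 --portal 192.168.10.50 \
  --target iqn.2024-01.com.example:storage.lun1

# List the LUNs PVE can see on that target:
pvesm list san1
```

On top of the LUN you would typically create a shared LVM storage so every node in the cluster can use it.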
That should cover all the options, so there is no more confusion. My English is not the best, but I won't use ChatGPT for translations unless it's an emergency. I am always trying to get better at it - whatever.