Excellent! I would have gone with different sizing and some spare space, but in case of any trouble copying out the content, rearranging just that LVM inside and copying it back is not such a chore either. But I'm glad there were no issues getting it up and running; I was in a bit of a rush dry-running it here.
That's all good, just don't get confused later by the numbering (it does not matter for functionality, you could even name them by UUIDs), and you should have them in crypttab by UUIDs or partlabels (which is what I prefer and use).
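For example, a crypttab along these lines (the names and the UUID here are made up, check yours with blkid):

```
# /etc/crypttab: <target> <source> <keyfile> <options>
cryptnvme0  UUID=0a1b2c3d-1111-2222-3333-444455556666  none  luks,discard
cryptnvme1  /dev/disk/by-partlabel/luks-nvme1          none  luks,discard
```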
So no adverse effect here, but basically ZFS takes the block device and acts as partition manager + filesystem at the same time, so your ext4 got wiped by your next command. You may want to read up on ZFS separately; lots of things work differently (including mounts, unless you go with legacy ones).
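To illustrate the mounts difference (pool and dataset names are just examples): ZFS normally mounts datasets itself based on a property, with no fstab entry involved, unless you switch to legacy:

```
# ZFS mounts datasets by itself according to the mountpoint property
zfs set mountpoint=/srv/data tank/data

# or opt into the classic behavior and mount manually (or via fstab)
zfs set mountpoint=legacy tank/data
mount -t zfs tank/data /srv/data
```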
Even the naming conventions are different with ZFS: you made a RAID1-like mirror. Confusingly, there's also RAIDZ (sometimes called RAIDZ1) in ZFS, which is RAID5-like. Just be careful with the conventions if you e.g. seek advice later on the forum from ZFS-focused people.
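Just to show the vocabulary side by side (pool and mapper names are hypothetical):

```
# RAID1-like: a two-way mirror (what you created)
zpool create tank mirror /dev/mapper/crypt0 /dev/mapper/crypt1

# RAID5-like: RAIDZ, a.k.a. RAIDZ1, typically at least 3 devices
#zpool create tank raidz /dev/mapper/crypt0 /dev/mapper/crypt1 /dev/mapper/crypt2
```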
It is what you wanted: it is full-disk encrypted below the ZFS layer, but it gives you the benefit of a ZFS mirror. You can then go explore what else the datasets and zvols can provide. You will probably like how you can make auto-snapshots and all that, zfs send/receive, etc. Btw, here again the naming conventions sometimes mean something else than e.g. in LVM (which can also do mirrors or snapshots, though those are meant to be short-term only).
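A taste of the snapshots and send/receive, with made-up dataset names:

```
# instant snapshot of a dataset
zfs snapshot tank/data@monday

# full replication of that snapshot to another pool
zfs send tank/data@monday | zfs receive backup/data

# later, send only the delta between two snapshots
zfs send -i tank/data@monday tank/data@friday | zfs receive backup/data
```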
Well, what should I say?
I think it is good enough, but it's a bit like Debian stable vs. testing and the naming conventions again. The thing with native ZFS encryption is that you benefit if you at the same time use e.g. deduplication (these are all really ZFS topics you can explore and experiment with as you go; don't just randomly turn it on globally). BUT native ZFS encryption is per dataset only, and e.g. it does not encrypt metadata. Again, you could benchmark this, because obviously with 2 NVMe drives you have double the encryption going on with LUKS; the native ZFS encryption would not have that issue.
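If you ever want to try the native one, it looks roughly like this (dataset name made up); note it has to be set when the dataset is created:

```
# create an encrypted dataset; you will be prompted for the passphrase
zfs create -o encryption=on -o keyformat=passphrase tank/secret

# after a reboot/import the key has to be loaded before mounting
zfs load-key tank/secret
zfs mount tank/secret
```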
But this is the reason I left it as an extra partition on the GPT table: you can do whatever with it. You can make it another LVM (thin), you can have ext4 on LUKS or BTRFS on it (might be interesting testing, though it will also be "experimental" as per the official Proxmox staff stance), you can even make it another md device with LUKS over RAID1 and an ordinary filesystem on top. It's 2 block devices whenever you need them.
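E.g. the md + LUKS variant would go roughly like this (partition numbers and names are hypothetical, adjust to your table):

```
# RAID1 over the two spare partitions
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p4 /dev/nvme1n1p4

# LUKS on top of the single md device, then any ordinary filesystem
cryptsetup luksFormat /dev/md1
cryptsetup open /dev/md1 cryptmd1
mkfs.ext4 /dev/mapper/cryptmd1
```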
So I hope that key is stored in the root partition and not the boot one.
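You can quickly check that on a standard Debian layout, assuming the keys are referenced in crypttab:

```
# list any key files referenced in crypttab (third column, "none" means passphrase)
awk '!/^#/ && NF && $3 != "none" {print $3}' /etc/crypttab

# and confirm what /boot actually is; it should not host the key
findmnt /boot
```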
Yeah, one more thing: the ESP partition. I will admit that the last time I myself was using mdadm it was BIOS times, so it was easier to just have GRUB in both MBRs. Now, obviously, you cannot (as far as I know) nicely mdadm that EFI partition, but you could certainly clone it, so that in case you had a disk failure and were rebooting, it would still boot up. I found it beyond the scope of what you had been asking about, though, and it would need some experimenting. This is even different across systems; I remember it was possible in Fedora to have the ESP appear as mdadm when booted (it was already using superblocks at the end of the device back then, and you would just mkfs.fat on that partition so your EFI was happy). I think on Debian-based systems you have to work around it by tweaking GRUB. You may want to post it as a whole separate question, as people more experienced with mdadm may chip in.
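Just as a sketch of the cloning approach (the disk/partition numbers and the loader path are assumptions, yours may differ):

```
# clone the live ESP to the same-sized partition on the second disk
dd if=/dev/nvme0n1p1 of=/dev/nvme1n1p1 bs=1M

# register a fallback boot entry pointing at the clone
efibootmgr --create --disk /dev/nvme1n1 --part 1 \
    --label "debian (fallback)" --loader '\EFI\debian\grubx64.efi'
```

Keep in mind the clone goes stale whenever GRUB writes to the ESP, so you would re-run the dd after updates.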
For me it's been mostly ZFS, and lately even BTRFS; I prefer LUKS below them, in fact over full drives if it makes sense (when I cannot do SED). I still like LVM for modularity's sake; you can really play around with it and tweak it later. Just be aware that ZFS (or BTRFS) is conceptually different from "normal" filesystems, and ZFS will be eating some of your RAM now too. You really have to go and use it and see for yourself.
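E.g. on Linux the ZFS ARC takes up to half the RAM by default; you can cap it via a module parameter (the 4 GiB value is just an example):

```
# cap the ZFS ARC at 4 GiB (takes effect on next module load / reboot)
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# see what the ARC currently occupies (value is in bytes)
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
```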