Why would you want to have root externally?
Partly PTSD from how Ubuntu decided to support it in their first ZFS-on-root experience, combined with running it on top of LUKS, etc. The zsys auto-snapshots (no pruning, no nothing) would easily fill up everything just before the initramfs was refreshed and the bootloader entry updated. Then there is the general pain in the neck of hunting for a live boot of something with ZFS support (at the correct version, no less), importing a previously unexported pool or faking the device ID, etc., etc.
But most importantly, a preference to keep things simple where the complexity is counterproductive, because ...
For me, the big advantage is exactly the ZFS part, e.g. snapshots before updates,
For a hypervisor specifically, I really do not see the value in being able to roll anything back, not even after a botched apt upgrade. It is much simpler/safer to rsync the last known good state back and overwrite the bootloader; that copy can even sit on a spare partition, ready to go. The install is also very much identical across nodes, and should remain so after upgrades, so for consistency I just keep the base plus the node-specific configs, ready to recycle any moment (when a total hardware failure strikes). I have yet to find my inner zen with how PVE likes to do its things with pmxcfs and symlinks from everywhere to everywhere.
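The rsync-back approach could look roughly like this, run from a rescue environment. This is only a sketch: the device names, mountpoints and the excluded node-specific files are all assumptions, not anything PVE-specific.

```shell
# Sketch: restore a known-good root from a spare partition, then
# reinstall the bootloader. All paths/devices here are assumptions.
mount /dev/sda3 /mnt/golden   # spare partition holding the last good rootfs
mount /dev/sda2 /mnt/root     # the (broken) root, from a rescue boot

# Sync the base system back, keeping node-specific configs out of it.
rsync -aHAX --delete \
      --exclude=/etc/hostname \
      --exclude=/etc/network/interfaces \
      /mnt/golden/ /mnt/root/

# Reinstall the bootloader from within the restored system.
for d in dev proc sys; do mount --bind /$d /mnt/root/$d; done
chroot /mnt/root grub-install /dev/sda
chroot /mnt/root update-grub
```

With the base identical across nodes, the same golden copy can recycle any node after a hardware swap; only the excluded config files differ.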
smaller files (due to compression),
Strictly talking about root, this is not important in the grand scheme of things.
a lot of space (from the pool),
Yep, it's still there if need be; the ZFS pool is all available. But actually needing it would be a red flag, as again I would like to keep the hypervisor tiny (if a Debian install can still be called that).
but still have quota and refreservation.
Yes, but e.g. 16G at the beginning of the drive allows for anything: the drive can be taken out of the node and used with a different hypervisor or a regular OS install without having to migrate even a large pool back and forth. And if the quotas on the pool were set all wrong, there is no impact on the hypervisor. There is also contingency space there. No point even for LVM; the 16G can be repartitioned, copied out and back in, at any time. Those in favour of LUKS can have a separate boot partition feeding the passphrase from e.g. the network, but nowadays one would probably use SED drives, saving the CPU cycles with SSDs.
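For illustration, such a layout might be carved out with sgdisk like this. The device name, sizes and partition type codes are my assumptions for the sketch, not a recommendation:

```shell
# Hypothetical layout: small plain root at the start, rest for ZFS.
sgdisk --zap-all /dev/sda
sgdisk -n1:1M:+512M -t1:EF00 -c1:"EFI"  /dev/sda   # ESP
sgdisk -n2:0:+16G   -t2:8300 -c2:"root" /dev/sda   # plain root, easy to image/rsync
sgdisk -n3:0:0      -t3:BF01 -c3:"zfs"  /dev/sda   # remainder for the data pool

mkfs.ext4 /dev/sda2
zpool create -o ashift=12 tank /dev/sda3
```

The root partition stays independent of the pool, so it can be dd'd or rsync'd off the drive and replaced without ZFS ever being involved.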
Swap is another matter that yielded a lot of crashes in the past, as it was on a zvol and the system was under memory pressure. I switched to zram years ago as primary swap and put only secondary swap on the Optane drives where the SLOG lives.
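A minimal zram-first swap setup along those lines could look like this (sizes, the compression algorithm and the secondary device are assumptions):

```shell
# Compressed RAM-backed swap as primary, disk swap as fallback.
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size 8G   # size/algorithm are assumptions
mkswap /dev/zram0
swapon --priority 100 /dev/zram0                # higher priority: used first

# Secondary, lower-priority swap on a fast NVMe/Optane partition
# (hypothetical device name):
mkswap /dev/nvme0n1p2
swapon --priority 10 /dev/nvme0n1p2
```

The kernel drains the higher-priority zram device first, so the disk swap only sees traffic once RAM-backed swap is exhausted, which avoids the zvol-swap deadlock pattern entirely.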
This is a good tip. I would normally not have the luxury of anything SLOG-worthy, but L2ARC worked beautifully even with HDDs and a fast NVMe. I started with ZFS very long ago for data storage only, for which it is still my favourite; then, with LXD also quite some time ago, it was very convenient for the container storage pool, though in turn not as nice as BTRFS in how it refers to a parent dataset, but that's another topic. So basically I somehow prefer to use ZFS for what it's best at and leave the simple things simple wherever possible.
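Attaching an L2ARC to an HDD pool is a one-liner; the pool and device names below are assumptions:

```shell
# Add a fast NVMe partition as a read cache (L2ARC) to an HDD pool.
zpool add tank cache /dev/nvme0n1p3
zpool status tank      # the device appears under a "cache" section

# Cache devices hold no irreplaceable data, so removal is safe any time:
zpool remove tank /dev/nvme0n1p3
```

Since L2ARC contents are disposable, losing or repurposing the NVMe device never endangers the pool, which makes it a low-risk upgrade for HDD-backed storage.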
That all said, I am still figuring out how best to PXE boot the whole hypervisor and keep it in RAM. It is very strange to imagine installing 50 nodes (a size I read somewhere mentioned as realistic for a deployment) from ISOs, fully attended, when I already ran into problems with just a couple of nodes because of how the SSH keys are (not) cleaned up after a dead node. Even a bad implementation of that would be a non-issue if one had an ephemeral take on the hypervisor upon every boot.
EDIT: For other systems, i.e. not hypervisors, the ZFS snapshots on root felt good, but then there's also the ostree way of doing things. But it's really to each their own: happy to learn new things, pick up what suits, leave the rest behind, and let everyone else do the same for themselves...