If I understand things correctly, the relatively high capacity loss in RAID-Z configurations is a direct result of grouping only a small number of sectors together for each I/O operation. Because each block in RAID-Z gets its own parity sectors and is then padded up to a fixed multiple of sectors, small blocks result in excessive padding with sectors that are effectively "lost" to the user.
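To put rough numbers on that (a simplified example, assuming a RAID-Z1 vdev with 4kB sectors, i.e. ashift=12): an 8kB block needs two data sectors plus one parity sector, three in total. RAID-Z1 rounds every allocation up to a multiple of two sectors, so it actually occupies four sectors, i.e. 16kB on disk for 8kB of user data. With 128kB blocks the same rounding still costs at most one padding sector, but now relative to several dozen data and parity sectors, so the overhead shrinks to a couple of percent.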
For "normal" file system access, this is determined by the "recordsize" parameter. It is set to 128kB by default, which generally presents a reasonable compromise between capacity loss and performance cost. If you write lots of really small files, it could lead to excessive I/O amplification. But when tuned properly, that's manageable. And in combination with data compression, it usually hits a good sweet spot. If you know how mostly have large writes and if you want to improve space utilization, you could increase the "recordsize"; and vice versa, if you are desperate for better performance for small writes and don't mind losing capacity, you can decrease the "recordsize" all the way to about 8kB. Any less than that doesn't really make sense. And at 8kB you have huge padding overhead.
Things get more complicated when you use your ZFS pool not to store files but to carve it up into virtual disk devices, which is what happens when you run virtual machines instead of containers. Instead of the "recordsize" parameter, you now tune things with the "volblocksize" parameter. If you set it to the same 128kB, you would get the same capacity utilization as when storing files. But since the virtualized guest operating system runs its own file system on top of the virtual disk device, it is not aware of the underlying allocations in ZFS, and that typically results in really bad I/O amplification and poor performance. Most of the tuning you can do for file storage to minimize I/O amplification is ineffective for volumes, and this is why PVE defaults to an 8kB "volblocksize", with the expected cost in capacity.
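One practical note: unlike "recordsize", the "volblocksize" can only be set when a zvol is created and cannot be changed afterwards. Created by hand it looks something like this (pool and volume names are just examples); within PVE the equivalent knob should be the "Block Size" setting of the ZFS storage (the "blocksize" option in /etc/pve/storage.cfg), which only applies to newly created disks:

    # create a 32G zvol with a 16k volblocksize
    zfs create -V 32G -o volblocksize=16k tank/vm-100-disk-0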
This is one of the reasons why I have very few virtual machines and mostly try to use containers instead. Containers access ZFS at the file system level, which can be tuned much more easily. But there are good reasons why people want to use virtual machines, so this is a trade-off everyone needs to decide for themselves.
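To illustrate what I mean by easier tuning: with containers you can give a workload its own dataset with a matching "recordsize" and bind-mount it in. Roughly like this, with made-up names and IDs:

    # dedicated dataset with a recordsize that matches the workload
    zfs create -o recordsize=16K tank/appdata
    # bind-mount it into container 101 as /srv/appdata
    pct set 101 -mp0 /tank/appdata,mp=/srv/appdata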
I think a lot of these concerns would go away once virtual machines can access host storage as a virtualized file system (virtiofs). There is in fact support in PVE to do so. But when I last tried it, Windows kept acting up; I think the Windows virtiofs driver is still very unreliable at this stage. I am not sure whether the Linux driver is any better, as I haven't tried it myself, and I am not even sure there is a macOS driver at all.
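For reference, on the guest side such a share is simply mounted by the tag configured on the host; in a Linux guest that would be something along these lines (tag and mount point are made up):

    # mount a virtiofs share inside a Linux guest
    mount -t virtiofs myshare /mnt/hoststorage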
But if I were a Proxmox engineer, I'd probably prioritize working on this code. It looks to me like something that could get considerably better performance out of existing hardware.
In the meantime, consider favoring containers over virtual machines, increasing the "volblocksize", or setting up LVM in parallel to ZFS and manually balancing where you keep your data.
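Whichever way you go, it's worth checking first how badly your existing volumes are actually affected; something like this (the volume name is just an example) shows logical vs. physically allocated space:

    # compare logical vs. allocated space and the current volblocksize
    zfs get volblocksize,logicalused,used,compressratio rpool/data/vm-100-disk-0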