I've successfully built a Debian 9 host with Proxmox and ZFS. My understanding is that with resource-sharing features (e.g., memory ballooning) it is common to overcommit resources. In other words, on a machine with 64GB of RAM running several VMs with the same OS, I might be able to get away with assigning a total of 72GB of RAM to those VMs without performance degradation. Likewise, I might be able to assign more vCPUs than I have physical threads, assuming that not all VMs will hit 100% CPU usage at the same time.
But ZFS is very sensitive to memory usage, and possibly to CPU availability. Is it a good idea to make sure at least one or two threads and a few gigabytes of RAM are left available for ZFS? In other words, is overcommitting RAM or CPU resources a bad idea if you're running ZFS on the same host?
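For reference, the only tuning I was planning to do is to cap the ZFS ARC so it can't compete with the VMs for memory. Something like the following, where the 8 GiB figure is just a placeholder I picked, not a recommendation:

```
# /etc/modprobe.d/zfs.conf -- limit the ARC to 8 GiB (value in bytes; placeholder, adjust to taste)
options zfs zfs_arc_max=8589934592
```

followed by `update-initramfs -u` and a reboot, or writing the same value to /sys/module/zfs/parameters/zfs_arc_max to apply it on a running system. Is capping the ARC like this enough, or do I also need to leave CPU headroom?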