Hi,
While getting some information about bhyve, I stumbled upon this article: https://klarasystems.com/articles/virtualization-showdown-freebsd-bhyve-linux-kvm/
It's mainly about comparing bhyve to KVM, but there's another interesting aspect: it also compares the I/O performance of VMs installed on a zvol versus in a raw file on a dataset.
From the article:
Unlike OpenZFS blocksize, there’s usually a single, clear answer as to what storage type performs best under a given hypervisor. Under Linux’s KVM, there are three primary options—QCOW2 on datasets, RAW files on datasets, and direct access to ZVOLs as block devices.
QCOW2 is a QEMU-specific storage format, and it therefore doesn’t make much sense to try to use it under FreeBSD. Under Linux KVM, QCOW2 can be worth using despite sometimes lower performance than RAW files, because it enables QEMU-specific features, including VM hibernation.
This leaves us with RAW files on OpenZFS datasets, vs OpenZFS ZVOLs passed directly down to the VM as block devices (on Linux) or character devices (on FreeBSD). On paper, ZVOLs seem like the ideal answer to VM storage needs—but we’ve found them terribly underperforming under Linux for many years, so we didn’t want to blindly assume they would be performance winners under FreeBSD either.
And:
We know most people expect zvols to be the highest-performing storage option for virtual machines using ZFS-backed storage—after all, providing the guest with a simple character device seems much more efficient than forcing it to use a raw file as a sort of “fake” device. But the numbers don’t lie—the raw file outperforms the zvol handily here, with more than twice the 1MiB throughput and six times the 4KiB throughput.
Although I suspect this will surprise many readers, it didn’t surprise me personally—I’ve been testing guest storage performance for OpenZFS and Linux KVM for more than a decade, and zvols have performed poorly by comparison each time I’ve tested them.
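For reference, if I understand the article correctly, the three Linux-side storage options it describes could be set up roughly like this (the pool name "tank", the sizes, and the paths are just placeholders of mine, not from the article):

```
# Option 1: a zvol, exposed to the VM as a block device
# (appears under /dev/zvol/tank/vm-disk)
zfs create -V 32G tank/vm-disk

# Option 2: a raw file on a regular dataset
zfs create tank/vm
qemu-img create -f raw /tank/vm/disk.raw 32G

# Option 3: a qcow2 file on the same dataset
qemu-img create -f qcow2 /tank/vm/disk.qcow2 32G
```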
The article's tests were run on FreeBSD with bhyve. To be honest, I've never really paid attention to the performance of my VMs, but I'm still curious whether anybody has tested this on Linux KVM and has some benchmark results available.
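In case anyone wants to try: I'd guess a comparable test inside the guest would be something like the following fio runs, matching the 1MiB and 4KiB block sizes mentioned in the quote (the device name /dev/vdb and the other parameters are just my guesses, not the article's exact settings):

```
# Sequential 1MiB reads against the guest's virtual disk
fio --name=seq1m --filename=/dev/vdb --rw=read --bs=1M --direct=1 \
    --ioengine=libaio --iodepth=8 --size=4G --runtime=60 --time_based

# Random 4KiB reads against the same disk
fio --name=rand4k --filename=/dev/vdb --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=8 --size=4G --runtime=60 --time_based
```

Running the same two jobs in a VM backed by each of the three storage types should make the results comparable.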
Thanks!