Hi Guys,
I've set up one of my new servers today for some testing.
Xeon Silver CPU
256GB DDR4
Samsung 1733 NVMe drives
As long as the VM is running on local lvm-thin storage, disk performance in the VM is blazing fast (7 GB/s read, 4 GB/s write) with low CPU load.
As soon as I run the VM on ZFS storage (RAIDZ, RAID10, RAID1, it doesn't matter), disk performance drops by half (boohoo, only 4 GB/s read) and all 16 CPU cores go to 100% load. The weird thing is, it's not the ZFS process that's eating the CPU,
it's the KVM worker threads that are blasting the CPU. Inside the VM it shows no CPU usage at all. Anyone have an idea why?
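To see which threads are actually burning CPU, here's a minimal sketch that lists per-thread CPU usage of the VM's QEMU/KVM process. It assumes a Linux host where the process is simply named "kvm" (as on Proxmox); the fallback to the current shell is only there so the command works as a demo:

```shell
# Find the oldest process named "kvm" (assumption: Proxmox-style naming);
# fall back to this shell's PID just for demonstration.
pid=$(pgrep -o kvm || echo $$)

# One line per thread: thread ID, CPU %, and thread name.
# vCPU threads and iothreads show up individually, so hot ones stand out.
ps -L -o tid,pcpu,comm -p "$pid"

# Interactive alternative: top -H -p "$pid"
```

If the iothread or worker threads dominate here while the guest shows idle, the overhead is on the host I/O path rather than in the VM itself.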
Data:
Windows VM with VirtIO drivers
"Host" setting CPU, 16 cores
Virtual disk options enabled: No cache, SSD Emulation, IO thread, Discard
ZFS pool: compression off, atime off, ARC caching metadata only
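For reference, the pool settings above would be applied roughly like this (a sketch assuming a pool named "tank"; substitute your actual pool or dataset name):

```shell
# Disable on-the-fly compression for zvols/datasets on this pool
zfs set compression=off tank

# Don't update access times on reads
zfs set atime=off tank

# Restrict the ARC to caching metadata only (no file/block data)
zfs set primarycache=metadata tank
```

Note that `primarycache=metadata` forces every data read to hit the disks, so it can itself cost throughput on a benchmark that would otherwise be served from the ARC.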