ZFS on host node high CPU and system load


May 3, 2021
In our prod setup we run TrueNAS devices, which are based on ZFS, and present that storage to our clusters over NFS. It works well: great performance, no complaints.

I am testing a few nodes that will have local storage: 8x 1TB SSDs in a ZFS RAID10 (striped mirrors) layout, with VMs stored as zvols.

However, any amount of IO inside any of the guest VMs drives the host CPU and load up to crazy levels, like a load average of 20-50 and 500% CPU, with tons of zvol and z_wr_int processes.

Is this a known issue with ZFS on the host node and just something you have to live with?

The node is an AMD Ryzen 9 5950X with 128 GB RAM and 8x Micron 5200 Pro SSDs on SATA 6Gb/s in a RAID10 setup.

See the attached screenshot - I am running a single VM on the node, with fio doing 50/50 read/write inside the guest.
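A 50/50 mixed read/write test like the one described can be reproduced inside the guest with something along these lines (the block size, queue depth, job count, and file size here are assumptions, not the exact parameters used in the screenshot):

```shell
# Hypothetical fio job approximating a 50/50 random read/write workload.
# All tunables below are example values, not the original test's settings.
fio --name=mixed-rw \
    --rw=randrw --rwmixread=50 \
    --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --numjobs=4 \
    --size=4G --runtime=60 --time_based \
    --filename=/tmp/fio-testfile
```

With `--direct=1` the guest page cache is bypassed, so every IO lands on the virtual disk and, through the zvol, on the host's ZFS threads - which is what makes this a useful worst-case for the host-load symptom described above.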

Did you set the pool's block size from the default 8K to 16K (Datacenter -> Storage -> YourZFSPool -> Edit -> Block size) so that newly created zvols get a volblocksize of 16K instead of 8K?
I tested with 8K and 16K block sizes, and moved the VM off and back onto the storage so the zvol was recreated - same results.
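Since the storage blocksize setting only affects newly created zvols, it is worth confirming what an existing VM disk was actually created with (the dataset name below is an example, not from this thread):

```shell
# Check the volblocksize an existing zvol was actually created with.
# tank/vm-100-disk-0 is a placeholder name; adjust to your VM disk.
zfs get volblocksize tank/vm-100-disk-0

# Or list every zvol in the pool with its volblocksize at once.
zfs get -r -t volume volblocksize tank
```

`volblocksize` is fixed at creation time, so a mismatch here would mean the move-off/move-back step didn't recreate the zvol as expected.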
Is anyone running ZFS on the host node with zvols and a bunch of VMs doing lots of IO, without CPU load issues?

If so, please share your setup. It's crazy that I can drive the host node load up to 30+ with a single VM and some IO inside it.
These SSDs are made for read-intensive workloads. Did you change your ZFS cache settings? Can you also post the zpool configuration and the output of zfs get all?
I set the ARC max to 8G and min to 4G. I get that they are read-intensive drives, but for zvol threads to generate that sort of CPU load on the host node is still nuts.
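For reference, on a Proxmox/ZFS-on-Linux host those ARC limits are set as module parameters in bytes; an 8 GiB / 4 GiB configuration like the one mentioned above would look like this (a config sketch, not necessarily the poster's exact file):

```shell
# /etc/modprobe.d/zfs.conf - ARC limits in bytes
# 8 GiB = 8 * 1024^3 = 8589934592, 4 GiB = 4294967296
options zfs zfs_arc_max=8589934592
options zfs zfs_arc_min=4294967296
```

After editing the file, run `update-initramfs -u` and reboot for the limits to take effect; alternatively they can be changed at runtime by writing the byte values to /sys/module/zfs/parameters/zfs_arc_max and zfs_arc_min.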

root@pve4:~# zpool status
  pool: rpool
 state: ONLINE

    NAME                                       STATE     READ WRITE CKSUM
    rpool                                      ONLINE       0     0     0
      mirror-0                                 ONLINE       0     0     0
        nvme-CT2000P5PSSD8_2141322EE6CB-part3  ONLINE       0     0     0
        nvme-CT2000P5PSSD8_2141322EFB6F-part3  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1728215B340D  ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_17151D76999F  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_17151D765BC1  ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1728215C362E  ONLINE       0     0     0
      mirror-2                                      ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1726215A71DC  ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1728215C390D  ONLINE       0     0     0

errors: No known data errors

zfs get all is too big to post (the forum software complained) - which parts do you want to see?
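Rather than the full zfs get all, the properties most relevant to zvol write overhead can be pulled selectively (the property list below is a suggestion, not exhaustive):

```shell
# Pull only the zvol properties most relevant to IO/CPU overhead
# for every volume in the pool (property list is a suggestion).
zfs get -r -t volume \
    volblocksize,compression,sync,logbias,primarycache \
    tank
```

That keeps the output short enough for a forum post while still covering the settings that typically drive write amplification and host-side CPU cost.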
Correct, 6 drives - sorry.


I've also tested with different volblocksizes: 4K, 8K, and 128K. There seems to be no consensus on the best volblocksize, at least from searching the threads here.
I have a similar hardware setup and the same symptoms - did you ever find a solution?
No - same thing on a new AMD EPYC with 8x NVMe drives.

I don't see the load on TrueNAS when I present it over NFS to the nodes and run the same testing.

