ZFS on host node high CPU and system load

May 3, 2021
In our production setup we run TrueNAS devices, which are based on ZFS, and present that storage to our clusters over NFS. It works well: great performance, no complaints.

I am testing a few nodes that will have local storage: 8x 1 TB SSDs in a ZFS RAID 10 layout, with the VMs stored as zvols.

However, any amount of I/O inside any of the guest VMs drives the host CPU and load up to crazy levels, like a load of 20-50 and 500% CPU, with tons of zvol and z_wr_int processes.
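For reference, this is how I'm watching those worker threads while the load spikes (thread names can vary a bit between ZFS versions):

Code:
# snapshot of ZFS I/O worker kernel threads and their CPU usage
top -b -n 1 | grep -E 'z_wr|z_rd|zvol'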

Is this a known issue with ZFS on the host node, and just something you have to live with?

The node is an AMD Ryzen 9 5950X with 128 GB RAM and 8x Micron 5200 Pro SSDs at SATA 6 Gb/s in a RAID 10 setup.

See the attached screen capture; I am running a single VM on the node, with fio doing a 50/50 read/write workload inside the guest.
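The fio run inside the guest looked roughly like this (the exact job parameters aren't in the post, so these are representative):

Code:
# 50/50 random read/write mix against a test file inside the guest
fio --name=mixed --rw=randrw --rwmixread=50 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --size=10G --runtime=60 --time_based \
    --filename=/root/fio-testfile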

[Attachment: Screen Shot 2022-04-10 at 2.27.35 PM.png]

Dunuin

Did you change the pool's block size from the default 8K to 16K (Datacenter -> Storage -> YourZFSPool -> Edit -> Block size), so that newly created zvols get a volblocksize of 16K instead of 8K?
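You can check what an existing zvol was created with and change the default for new ones from the CLI as well (the dataset and storage names below are just examples; note that volblocksize is fixed at creation time, so only newly created zvols pick up the new value):

Code:
# show the volblocksize of an existing VM disk
zfs get volblocksize tank/vm-100-disk-0

# set the default block size for newly created zvols on a PVE storage
pvesm set local-zfs --blocksize 16k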
May 3, 2021
Is anyone running ZFS on the host node, using zvols with a bunch of VMs doing lots of I/O, without hitting these CPU load issues?

If so, please share your setup. It's crazy that I can drive the host node load up to 30+ with a single VM and some I/O inside that VM.
 

virtus223

These SSDs are made for read-intensive workloads. Did you change your ZFS cache settings? Can you also post the zpool configuration and the output of zfs get all?
 
May 3, 2021
I set the ARC max to 8 GB and the min to 4 GB. I get that they are read-intensive drives, but for zvol threads to take up that sort of CPU load on the host node is still nuts.
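For reference, the ARC limits are set the standard Proxmox way, via a module options file and an initramfs rebuild (values are in bytes):

Code:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592
options zfs zfs_arc_min=4294967296
# then: update-initramfs -u -k all && reboot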


Code:
root@pve4:~# zpool status
  pool: rpool
 state: ONLINE
config:

    NAME                                       STATE     READ WRITE CKSUM
    rpool                                      ONLINE       0     0     0
      mirror-0                                 ONLINE       0     0     0
        nvme-CT2000P5PSSD8_2141322EE6CB-part3  ONLINE       0     0     0
        nvme-CT2000P5PSSD8_2141322EFB6F-part3  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
config:

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1728215B340D  ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_17151D76999F  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_17151D765BC1  ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1728215C362E  ONLINE       0     0     0
      mirror-2                                      ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1726215A71DC  ONLINE       0     0     0
        ata-Micron_5100_MTFDDAK960TCB_1728215C390D  ONLINE       0     0     0

errors: No known data errors
root@pve4:~#

The output of zfs get all is too big to post (the forum software complained). Which parts do you want to see?
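If it helps, I can pull just specific properties, for example:

Code:
# per-zvol block sizes, recursively over the pool
zfs get -r volblocksize tank

# dataset settings most likely to affect write load
zfs get compression,sync,atime,primarycache,logbias tank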
 
