Urgent: High CPU usage in Proxmox VE 4 with ZFS

bitblue.lab

Member
Oct 7, 2015
I have high CPU usage in Proxmox 4 with ZFS from processes such as z_wr_iss, z_null_int, z_wr_int, etc.

Any idea why this is happening?
 
I agree with you, that's very bad advice... I guess the problem is more related to compression, or maybe a bug?
 
I'm sorry to intrude on this thread but I'm experiencing the same...

Hardware is a Lenovo TS140:
Xeon E3-1245 v3 (4 cores, 8 threads)
16 GB ECC RAM
2x 500 GB Crucial SSDs
2x 1 TB 7200 RPM hard disks that came with the server.

I've experienced the same on two different disk sets:

1. Proxmox (Version: 4.0-57/cc7c2b53, community repo) is installed as the root filesystem on two 500 GB Crucial SSDs as a ZFS mirror.
There are two VMs running on it right now in qcow2 thin provisioning format (the server is not in production yet). One is 2012 R2 Essentials (three qcow2 disks - 150 GB, 200 GB, 200 GB) and the other a Win10 Pro VM with a 100 GB qcow2 disk. Both VMs use virtio drivers for the disks.

On this pool and with this setup I get occasional spikes of high CPU usage on all threads, mostly from the z_wr_iss process, where the whole computer becomes unresponsive and it's hard to even RDP into the VMs. The web interface also becomes basically unusable, so it's a bit hard to diagnose exactly what is going on inside the VMs, but the few times I did manage to RDP in, Resource Monitor did not show anything special going on (low CPU, memory and disk usage).
Both VMs have fixed RAM (6 GB and 4 GB, the rest used by Proxmox). I've also tried dynamic memory, with the same results.
The same problem arises eventually even with only the 2012 R2 VM running.
The problem happens randomly from what I have seen.
The only time I've been able to reliably trigger it is when I ran a Move disk operation on the first 2012 R2 disk, as I wanted to move the disk from qcow2 to a zvol to see if that would make a difference. The operation could not complete (it might have in a few days/years, but I didn't have the patience or the time to wait).

2. On the same Proxmox install I created a second ZFS pool as a mirror of the two 1 TB drives. I did not add any cache or log drive to the mirror.
I then attached a 500 GB zvol disk from this pool to the 2012 R2 VM (the disk would be used for client backups, among other things).
Here, I was able to induce the high CPU scenario reliably.
Firstly, when I copied a large file to the disk in the 2012 R2 VM, everything went OK; CPU usage for the z_wr_iss processes was in the teens, if that.
But when I ran a client backup from the client computer, the really high CPU usage from ZFS writes would inevitably arise (again bringing the whole server to a crawl).
On these two disks I also tried a RAIDZ1 pool and the results were the same.
I've now attached the two disks directly to the 2012 R2 VM and made a ReFS Storage Spaces mirror on them, and they're working just fine (backups included).

Here's an example of the high CPU scenario from top (the usage can be even higher; this is just what it's showing right now) - this is from the SSD scenario above:

top - 11:40:57 up 1 day, 10:47, 2 users, load average: 13.51, 15.37, 13.46
Tasks: 236 total, 7 running, 229 sleeping, 0 stopped, 0 zombie
%Cpu(s): 16.0 us, 59.2 sy, 0.0 ni, 7.8 id, 17.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 16221352 total, 16032080 used, 189272 free, 0 buffers
KiB Swap: 15728636 total, 146296 used, 15582340 free. 54876 cached Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16190 root 20 0 5054368 4.018g 1464 S 175.5 26.0 112:18.13 kvm
362 root 1 -19 0 0 0 R 100.0 0.0 111:51.61 z_wr_iss
21571 root 1 -19 0 0 0 R 86.4 0.0 0:02.60 z_wr_iss
14993 root 20 0 7929824 5.965g 1036 S 81.1 38.6 255:05.13 kvm
1881 root 20 0 250928 49636 2716 R 43.9 0.3 6:38.35 pve-firewall
21568 root 1 -19 0 0 0 R 37.6 0.0 0:01.13 z_wr_iss
21567 root 1 -19 0 0 0 R 37.2 0.0 0:02.16 z_wr_iss
21570 root 1 -19 0 0 0 S 20.9 0.0 0:01.00 z_wr_iss

Any help or pointers would be much appreciated as I really would like to use ZFS for this server but will have to switch to another system if I can't sort this out.
 
I did a kernel update via apt-get update/upgrade, then rebooted Proxmox to start with the new kernel. I also enabled lz4 compression on my pool (zfs set compression=lz4 storagepool), and I haven't seen the problem come back since.
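For reference, the steps were roughly these (storagepool is just my pool name, adjust to yours); note that lz4 compression only applies to data written after the property is set, existing blocks stay uncompressed:

Code:
apt-get update && apt-get upgrade     # pull the new kernel and ZFS packages
reboot                                # boot into the new kernel
zfs set compression=lz4 storagepool   # enable lz4 on the pool
zfs get compression storagepool       # verify the property took effect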

Also make sure all VM disks are set to writeback cache if you are using a ZIL log on a separate SSD. And as far as I know, the best tip is to use the hard drive as IDE with raw format in the VMs, and not other configurations like qcow2 or SATA/SCSI, etc.
 
And as far as I know, the best tip is to use the hard drive as IDE with raw format in the VMs, and not other configurations like qcow2 or SATA/SCSI, etc.
If you've configured ZFS storage, there will be no format--you won't be able to choose raw/vmdk/qcow2. The system will create zvols for your VMs. For the bus choice, the wiki (http://pve.proxmox.com/wiki/Installation#Virtual_Machines_.28KVM.29) says to use virtio. In the thread I started the other day (http://forum.proxmox.com/threads/24379-Virtio-vs-SCSI-disk-for-Linux-KVM-guests), the consensus seemed to favor virtio SCSI.
 
If you've configured ZFS storage, there will be no format--you won't be able to choose raw/vmdk/qcow2. The system will create zvols for your VMs. For the bus choice, the wiki (http://pve.proxmox.com/wiki/Installation#Virtual_Machines_.28KVM.29) says to use virtio. In the thread I started the other day (http://forum.proxmox.com/threads/24379-Virtio-vs-SCSI-disk-for-Linux-KVM-guests), the consensus seemed to favor virtio SCSI.

The links you see for virtio and virtio SCSI are not for ZFS storage, but for normal usage of Proxmox with ext4 etc. As for raw/vmdk/qcow2, those are shown as options for VM disks when you create them, not as a way to format the ZFS storage (I didn't say that he should use raw format for ZFS, because of course it will not format or offer any such option). IDE, RAW, qcow2 etc. are only for disks under VMs when they are created in Proxmox.
 
The wiki doesn't distinguish among host filesystems or storage types when it says "as long as your guest supports it, go for virtio"; it just makes a blanket statement. There's certainly nothing on that page that says it's "not for zfs storage", and given the many ways storage can be configured in Proxmox, if this recommendation applied only to some (or one) of them, it seems that should be stated. My thread was asking specifically about using ZFS storage, and virtio or virtio SCSI were the only live options there. I'm certainly no expert with Proxmox, but I haven't seen any sources stating that the best practice is to use IDE, even if the guest OS supports something else. Do you have any?

As to the virtual disk format, if you've configured ZFS storage, and you're placing the virtual disk on that ZFS storage, it will be a zvol; you won't have the choice of raw/qcow2/vmdk/etc. If you're using local storage, and that storage just happens to be on a ZFS pool, then you get those options. I don't have any insight into which to choose, though from what I've seen qcow2 seems to be preferred. But I suspect that it's a better practice, if you're using ZFS, to use a zvol rather than a virtual disk file of any flavor.
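On a related note, if you're curious what Proxmox actually created for a VM disk on ZFS storage, you can look at the zvols directly; the pool and disk names below are placeholders, the disks normally show up as vm-<vmid>-disk-<n>:

Code:
zfs list -t volume                                     # list the zvols backing VM disks
zfs get volblocksize,compression rpool/vm-100-disk-1   # properties of one of them (placeholder name)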
 
Hi,

thank you for your replies.

I've followed your suggestions but nothing seems to work. I've now found a way to reliably induce this high CPU usage on the SSD RAID-1 ZFS pool.

I start up the 2012R2 VM and copy the backup data from the directly attached disks in a storage mirror (details above) to a disk residing on the SSD zfs pool. After a while, the cpu usage skyrockets and I have to kill the VM to stop it.
I've tried attaching these types of disks with the same (bad) results:
as qcow2: RAW, IDE, VIRTIO
as ZFS datastore, both thin provisioned and not: RAW, IDE, VIRTIO, SCSI with VIRTIO controller in the options.

I also installed OpenMediaVault in another VM, made a share on a VIRTIO disk and tried copying to it from the 2012R2 VM with the same bad result.

Frankly, I am at a loss at what else to try.

If I select writethrough instead of writeback, the high CPU scenario doesn't happen though. But writes are sloooooow then.
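In case it helps anyone compare, the cache mode is just a per-disk option, so it can also be flipped from the CLI instead of the GUI; the VM ID, storage and disk names below are placeholders for my setup:

Code:
qm config 100 | grep virtio0       # shows e.g. virtio0: local-zfs:vm-100-disk-1,cache=writeback
qm set 100 --virtio0 local-zfs:vm-100-disk-1,cache=writethrough   # re-specify the same volume with a different cache mode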

Anybody have any other ideas I could try?

Addendum:
I just made an NFS share on the PVE host itself to see if copying to the host directly would result in the same high CPU usage... It does not. While the speed varied a bit, copying from the 2012 R2 VM to the host itself went without a hitch, with one(!) z_wr_iss process reaching the low teens...

So it has to be something with how data is written to the VM disks that causes these high z_wr_iss loads...
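For completeness, the NFS export on the host was nothing special, just something along these lines (the directory and subnet are obviously specific to my setup):

Code:
apt-get install nfs-kernel-server
echo '/rpool/nfs-test 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra      # re-export everything in /etc/exports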
 
I had the same problem again today, with high IO and CPU usage from z_wr_iss. I now see a new kernel and new ZFS/initramfs packages in the updates, so I hope that after updating to these and rebooting the node it will be more stable! I will keep you updated with the results and any fix.
 
I'm experiencing the same problem with a configuration similar to blabbermouth's: RAID-1 with SSD for cache. The host is Proxmox 4, updated to the latest no-subscription repo.

The guest is Windows 2012 R2 with very little load right now, but the last part of the Adobe Reader DC installation, with its optimization step, drove high CPU usage from z_wr_iss.

Host graphs (attached): rrd.png, rrd2.png

Guest graph (attached): cpu-guest.png

I'm going to try other cache modes, but any test you need from me is welcome.
 
...but of course limiting the ARC size (especially limiting it to only 512 MB) means you're not doing nearly as much caching of your reads.

I'd been running into the same issue with a CentOS 6 guest. Whenever the guest would try to run a backup, it would behave as described here--the host CPU usage would climb to near 100%, and top would show typically that z_wr_iss was the process hogging the CPU. It seemed odd to me because I'd expect the guest OS backup to be read-intensive rather than write-intensive (the backup destination was on the LAN, on a separate box from the proxmox host). The RAM used on the guest would also approach its max of 8 GB. Hardware was a SuperMicro X9SCL-F board, i3-3240 CPU, and 16 GB of RAM. My pool is a two-disk mirror with no SLOG device at this time (but testing with sync=disabled didn't help the problem). The guest OS had been running on that same hardware, but with only 8 GB of RAM, for several months with no issues.

To test, I tried running the host on different hardware. I picked up a used Dell server off eBay with 2x Xeon X5650s and 48 GB of RAM, put a single 2 TB drive in it, installed PVE4 on it (formatting the drive as ZFS), copied the guest installation to it (pve-zsync made that easy enough), and ran a backup in the guest OS. No problems. I'm tentatively concluding that I'd just under-resourced the host machine. Of course, the first rule of scientific troubleshooting is to change only one variable at a time, and I've pretty flagrantly violated that--I've gone from two physical cores to twelve and tripled the RAM. @pizza's post would seem to indicate that this comes down more to the RAM than to the CPU. FWIW, with two guest machines running (one with 8 GB of RAM allocated, and the other with 4 GB), this host is reporting 38 GB of RAM used.
 
Limiting ZFS memory seems to lower the load.

The problem arose on a machine with 16 GB RAM, with 4 GB to 8 GB of ZFS memory (ARC), and with only one VM, with memory ballooning from 4 GB to 8 GB. No swap was used.

Code:
[COLOR=#000000][FONT=monospace]options zfs zfs_arc_min=[/FONT][/COLOR]4294967296
[COLOR=#000000][FONT=monospace]options zfs zfs_arc_max=[/FONT][/COLOR]8589934592

I'm now trying with 2 GB to 4 GB of ZFS memory.
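In case it helps others: on a ZFS-root Proxmox install these module options live in /etc/modprobe.d/zfs.conf, and since the zfs module is loaded from the initramfs you also need to refresh it before rebooting. The values below are the 2 GB / 4 GB limits I'm trying now:

Code:
cat > /etc/modprobe.d/zfs.conf <<EOF
options zfs zfs_arc_min=2147483648
options zfs zfs_arc_max=4294967296
EOF
update-initramfs -u
reboot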
 
Had the same problem; I disabled KSM and now it's "stable", at least it's not locking everything down anymore.
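For anyone wanting to try the same: on Proxmox the KSM tuning daemon is the ksmtuned service, and pages that were already merged can be unmerged through the kernel's ksm interface. Roughly:

Code:
systemctl stop ksmtuned
systemctl disable ksmtuned
echo 2 > /sys/kernel/mm/ksm/run   # unmerge already-shared pages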
 
I've also inadvertently found that a scrub seems to trigger this.

System 1: X9SCL-F motherboard, i3-3240 CPU, 16 GB RAM, 1 running CentOS 6 VM with 4 GB/balloon to 8 GB, 1 running CentOS 6 VM with 512 MB/balloon to 1 GB, two-disk ZFS mirrored pool with no SLOG device. Scrub results in ~100% CPU usage, z_wr_iss taking most of the CPU. Scrub is running at about 1 M/sec. Web UI indicates about 13 GB of RAM used.

System 2: Dell C6100 node, 2x Xeon X5650, 48 GB RAM, 1 running CentOS 6 VM with 8 GB static, 1 running Ubuntu VM with 4 GB static, single-disk ZFS pool with no slog device. Scrub is currently running at 48 M/s. Web UI indicates 38 GB of RAM used, and total CPU load of about 7%.

My experience with FreeNAS tells me that ZFS is pretty RAM-hungry, but not to this extent.
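For reference, scrub progress and speed can be checked (and a scrub started or stopped to see whether it reproduces the load) with zpool itself; the pool name is a placeholder:

Code:
zpool status rpool      # the 'scan:' line shows scrub progress and throughput
zpool scrub rpool       # start a scrub manually
zpool scrub -s rpool    # stop a running scrub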
 
My experience with FreeNAS tells me that ZFS is pretty RAM-hungry, but not to this extent.

Sounds about right; ZFS will use half your RAM for ARC unless you tell it not to. Proxmox will use about 1 GB for its various processes.
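If you want to see what the ARC is actually using versus its current limit, the live counters are in /proc (values in bytes); c_max defaults to half of physical RAM unless you've set zfs_arc_max:

Code:
grep -E '^(size|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats
# size  = current ARC usage
# c_min = minimum ARC size
# c_max = maximum ARC size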
 
