ZFS high IO. Again...

Noop is essentially "no scheduling operation". Deadline is a real I/O scheduler because it keeps two separate queues, one for write operations and one for read operations; 60% of the total queue size is for read I/O and 40% for write I/O. Also, if I remember correctly, each I/O operation has a maximum time it may wait before it is sent to the disk (hence the name "deadline").
Now, it is true that you can give ZFS an entire disk, and by default the HDD scheduler is then set to noop. But in reality ZFS still creates and uses partitions on that disk, for a good reason.
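For reference, here is a quick way to check and change the scheduler on a pool member, and to see the partitions ZFS lays down on a whole-disk vdev. This is a minimal sketch assuming the disk is /dev/sda; adjust to your device, and note that on newer multi-queue kernels the noop equivalent is called "none":

  # Show available schedulers; the active one is in brackets
  cat /sys/block/sda/queue/scheduler

  # Switch to noop until reboot ('none' on multi-queue kernels)
  echo noop > /sys/block/sda/queue/scheduler

  # ZFS creates a GPT on whole-disk vdevs (data partition plus a small reserved one)
  lsblk /dev/sda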
 
I'm experiencing huge CPU loads when doing sequential reads/writes. The server becomes unresponsive until the transfer ends.

My setup:
2x Xeon E5 2620 v3
96GB RAM
12x 2TB SAS 7200RPM in RAIDZ2
1x ZIL
1x L2ARC
4x 1Gbps in LACP

I can run various bonnie++ benchmarks with outstanding results (~600 MB/s writes and about 1.6 GB/s reads).
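For context, a typical invocation of that kind of test looks like this. It's a sketch with assumed paths: /tank/bench is a placeholder for a dataset on the pool, and the size is set to roughly twice the 96 GB of RAM so ARC caching doesn't inflate the numbers:

  # Sequential throughput test; -n 0 skips the small-file tests,
  # -u root is required when running as root
  bonnie++ -d /tank/bench -s 192g -n 0 -u root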

Inside my VMs I get very good random I/O benchmark results as well.

But...
When I try to copy a large file from another server into a VM, I get a few seconds of 1 Gbps transfer, then it plummets and the server load skyrockets.

[Attachment: 1571774533768.png — server load graph]
(Don't mind the gaps, I was testing various ZFS parameters)
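For anyone who wants to experiment with the same kind of knobs, the ZFS write-throttle parameters live under /sys/module/zfs/parameters. The value below is purely illustrative, not a recommendation:

  # Current dirty-data ceiling in bytes; writes throttle as this fills
  cat /sys/module/zfs/parameters/zfs_dirty_data_max

  # Example: cap dirty data at 4 GiB until reboot
  echo 4294967296 > /sys/module/zfs/parameters/zfs_dirty_data_max

  # Make it persistent across reboots via module options
  echo "options zfs zfs_dirty_data_max=4294967296" >> /etc/modprobe.d/zfs.conf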
 
Hi,

Maybe the problem is on the "other server"!

It is not. I can transfer two streams of sequential data at 100 MB/s without breaking a sweat on that "other server".
Doing transfers between VMs generates the same absurdly high loads.
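One way to separate the network path from the storage path is to test each in isolation: iperf3 for the wire, dd for the pool. The IP and file path below are placeholders:

  # Pure network test between the two hosts
  iperf3 -s                     # on the receiver
  iperf3 -c 192.0.2.10 -t 30    # on the sender

  # Pure local sequential write on the pool, no network involved;
  # conv=fdatasync flushes at the end so the reported rate is honest.
  # Note: /dev/zero compresses to nothing if compression is enabled.
  dd if=/dev/zero of=/tank/testfile bs=1M count=20000 conv=fdatasync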
 
