[Solved] Bad IO performance with SSD over NFS from vm

dg_

Hello,

I have a problem with NFS performance on VE 4.3:
  1. iperf test between 2 Proxmox nodes: 5 Gbps (OK)
  2. IO benchmark from the Proxmox server to NFS: 22K IOPS (OK)
  3. IO benchmark from a Proxmox VM mounting NFS directly: 22K IOPS (OK)
  4. IO benchmark from a Proxmox VM using a local disk whose image is stored on NFS by the Proxmox server: 1.5K IOPS (very bad)

I cannot understand why the VM can only reach 1.5K IOPS.


Proxmox

Version: pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-2-pve)
CPU:
# grep 'E5-2680 v3' /proc/cpuinfo |wc -l
48
RAM:
# grep MemTotal /proc/meminfo
MemTotal: 264036248 kB
Local disks: SATA with RAID 1
Load: 14:22:05 up 22 days, 17:18, 1 user, load average: 8.22, 7.80, 7.56
Network: 10Gbps x 2 Active/Active, mode: balance-rr
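For reference, the bond is defined in /etc/network/interfaces roughly like this (interface names and the address below are placeholders, not my real config):

# /etc/network/interfaces (sketch; eth0/eth1 and the address are placeholders)
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode balance-rr

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0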


NFS

Type: full SSD
Network: 10Gbps x 2 Active/Active, mode: balance-rr
Latency from proxmox server: 0.090 ms


Benchmark VM

vCPU: 16 cores
RAM: 12 GB
Disk: 50 GB (stored on NFS)
Driver: SCSI and VirtIO were both tested, with the same result.
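
For reference, the disk line in the VM config looks roughly like this (the storage name and VMID are placeholders):

# /etc/pve/qemu-server/100.conf (excerpt; storage name and VMID are placeholders)
scsi0: nfs-storage:100/vm-100-disk-1.qcow2,size=50G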


Benchmark command

time fio --name=test --rw=randread --size=256MB --iodepth=1 --numjobs=64 --directory=/tmp/ --bs=4k --group_reporting --direct=1 --time_based --runtime=3600

--directory=/tmp/ depends on the test. When I test with NFS mounted inside the VM, I use the mount point directory instead of /tmp/.
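
For example, for the NFS-mounted-in-VM case (the mount point path here is just an example, not my real path):

time fio --name=test --rw=randread --size=256MB --iodepth=1 --numjobs=64 --directory=/mnt/nfs_test --bs=4k --group_reporting --direct=1 --time_based --runtime=3600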

Why is the IO performance from the VM only around 7% of the IO performance when using NFS directly?

Thanks.
 
I've done more tests.

I tested with a local SSD without RAID:
  1. From the Proxmox server, the test reached 97K IOPS
  2. From a VM inside Proxmox, 5-6K IOPS (again... 5-7% of the real performance)

Afterwards, I converted the images from qcow2 to raw and changed some options:

Select VM -> Hardware -> Select disk -> Edit:

Cache: Select "Direct Sync"
IO Thread: Enable checkbox
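
Roughly the CLI equivalent of those steps (the VMID, storage and volume names below are placeholders):

# convert the disk image from qcow2 to raw
qemu-img convert -f qcow2 -O raw vm-100-disk-1.qcow2 vm-100-disk-1.raw

# set Direct Sync cache and enable IO Thread on the disk
qm set 100 --scsi0 nfs-storage:100/vm-100-disk-1.raw,cache=directsync,iothread=1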

After that, the test reaches 25K IOPS for a VM with / on NFS. It's not perfect, but it's enough until I do a full test with XenServer.

Important!

These results are from 1 VM. I also tested it on 2 VMs at the same time... the result was split between them :(

So I can confirm that I only get 25-30K IOPS per NFS storage for all VMs combined (not for each!!!)
 
Hey, have a look at the following:
https://forum.proxmox.com/threads/io-scheduler-with-ssd-and-hwraid.32022/#post-158763
Pay special attention to gkovacs' posts.
See if it makes any difference when doing these changes on the guest.

My VMs do not need as much IO, but switching to noop on the guest gave me an improvement during benchmarks (my NFS-based VMs are even less reliant on IO).

The biggest impact I got, though, was from switching to noop on the host's SSDs.
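
For reference, switching the scheduler at runtime (the device name is an example):

# show the current scheduler (the active one is in brackets)
cat /sys/block/sda/queue/scheduler

# switch to noop at runtime
echo noop > /sys/block/sda/queue/scheduler

# to persist it, add elevator=noop to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run update-grub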
 
With my setup:
Proxmox host: SSD, no hardware raid controller, and scheduler=deadline
KVM guest: Debian Jessie, ext4, and scheduler=deadline gives the best performance.

Scheduler cfq in guest:
read : io=3274.5MB, bw=12977KB/s, iops=2129, runt=258346msec
write: io=841688KB, bw=3257.2KB/s, iops=532, runt=258346msec
Scheduler noop in guest:
read : io=3274.5MB, bw=38831KB/s, iops=6372, runt= 86339msec
write: io=841688KB, bw=9748.7KB/s, iops=1594, runt= 86339msec
Scheduler deadline in guest:
read : io=3274.5MB, bw=50344KB/s, iops=8261, runt= 66594msec
write: io=841688KB, bw=12639KB/s, iops=2066, runt= 66594msec
 
Forgot to write the test case:
fio --description="Emulation of Intel IOmeter File Server Access Pattern" --name=iometer \
  --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10 --rw=randrw --rwmixread=80 --direct=1 \
  --size=4g --ioengine=libaio --iodepth=8
 
Which vDisk caching mode did you choose in the Proxmox GUI?
 
:S

This test is with a local SSD.

VM: Ubuntu 14.04

Jobs: 64 (f=64): [rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr] [0.6% done] [378.3MB/0KB/0KB /s] [96.9K/0/0 iops] [eta 59m:40s]
 
Is that a read-only test?

time fio --name=test --rw=randread --size=256MB --iodepth=1 --numjobs=64 --directory=/mnt/ssd_local --bs=4k --group_reporting --direct=1 --time_based --runtime=3600
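
(For comparison, a matching random-write run would look something like this; it's a suggestion, not part of the post above:)

time fio --name=test --rw=randwrite --size=256MB --iodepth=1 --numjobs=64 --directory=/mnt/ssd_local --bs=4k --group_reporting --direct=1 --time_based --runtime=3600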
 
What scheduler on the host and in the VM?
What filesystem in the VM?
 

noop [deadline] cfq (deadline is selected on both Proxmox and the VM)
ext4

96K IOPS using the local SSD
25K IOPS using the SSD over NFS mounted on the Proxmox host (not directly in the VM)
 
Please remember that enabling IO Thread on a disk prevents making online backups.

Thanks. I disabled this option, I get the same performance, and I can still create backups :)
 
