IO at 100% in VM, Host is at ~10-20%

tschanness

Member
Oct 30, 2016
Hi,
we are having an IO problem with (at least) one of our PVE hosts. The host has only one VM running, which has two drives:
[screenshot: the VM's two virtual drives]
One of the drives is /, the other one is a 10TB data mount.
The VM has a high IO utilisation:
[screenshot: IO utilisation graph of the VM]
But the Host does not:
[screenshot: IO utilisation graph of the host]

As all the writes in the VM go to the 10TB HDD, I don't think that IO Thread will bring a benefit.
What I'm wondering about is why the VM is so much slower than the host. Is anyone else having the same problem, or does anyone have a solution?

Thanks and Regards, Jonas
 

Change to RAID 1+0 to get better, or dare I say normal, performance.
Based on your pictures, I do not see how you can claim the host is faster.
Do you care to explain?
 
Hi, the host is only at 45% IO (that's the maximum value, btw), while the VM is always at 100%. This is why I'm a little dumbfounded.
I know that RAID 6 is not very fast, but shouldn't it be way faster than 40-50 MB/s?!
Thanks for the answer, but changing to RAID 10 is not possible right now (the hardware will be phased out in a few months anyway - this is more personal curiosity).
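For reference, a simple way to check sequential write throughput on the data disk is something like this (the /data mount point and file name are just examples, adjust them to your setup):
Code:
# run inside the VM; writes 1 GiB synchronously to the data mount
dd if=/dev/zero of=/data/ddtest bs=1M count=1024 oflag=dsync
rm /data/ddtest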
 
I guess you see different percentages because the percentage is relative.
Because the VM does not see what else the host is doing, it only shows how much IO is used by the processes it can see.
The host, however, does other things as well, so the guest's IO is only a fraction of everything the host is doing.
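If you want to compare apples to apples, you can look at per-device utilisation with iostat on both the host and inside the guest (assuming the sysstat package is installed):
Code:
# extended per-device statistics, refreshed every 2 seconds
# the %util column is the device-level equivalent of the IO graph
iostat -x 2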

Read/write throughput (in MB/s, for example) depends on how many IO operations there are and how big the data block for each operation is.
If you read or write one big block with a single IO operation, you get much higher throughput than if you do it in small chunks.
One can test that with dd. Here is an example in one of my VMs on a hypervisor with ZFS and a SLOG:
Code:
[root@localhost ~]# dd if=/dev/zero of=brisi bs=1024M count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 3.35357 s, 320 MB/s
[root@localhost ~]# dd if=/dev/zero of=brisi bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.26051 s, 172 MB/s

BTW, you can compare IOPS between your system and mine by running pveperf on the host:
Code:
root@p24:~# pveperf
CPU BOGOMIPS:      80001.48
REGEX/SECOND:      1736221
HD SIZE:           890.46 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     2435.09
DNS EXT:           73.50 ms
DNS INT:           0.78 ms

Your VM feels slow because the system has low IOPS.
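You can also measure random-write IOPS directly inside the VM with fio, for example (needs the fio package; the file name, size and runtime are just placeholders):
Code:
# 4k random writes with direct IO - fio reports the IOPS at the end
fio --name=randwrite --filename=/root/fiotest --rw=randwrite --bs=4k \
    --size=1G --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based
rm /root/fiotest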
 
Hi,
thanks for your time. The RAID 6 was faster before, but after extensive testing it seems that it is indeed the bottleneck. Sorry for taking your time, and thanks for your input.
 
