[SOLVED] Poor I/O performance

openkiwi

New Member
Sep 22, 2018
Hi everyone,

I am running Proxmox VE at home to run a few "production" VMs such as a DNS server, private Git repositories, etc., on the following hardware:
  • Dell T110
  • Intel Xeon E3-1220
  • 16 GB RAM
  • 2x 1 TB Western Digital Green in RAID1

I am observing poor I/O delay statistics (up to 50% under load) even with my light use case. I assume this is due to the low-performance Western Digital Green disks, and I am thinking of replacing them with two 500 GB SSDs.

However, I want to make sure the disks are the bottleneck before buying. Can you please give your opinion on the following numbers?

Code:
# pveperf
CPU BOGOMIPS:      24745.20
REGEX/SECOND:      2010453
HD SIZE:           97.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    42.17 MB/sec
AVERAGE SEEK TIME: 50.66 ms
FSYNCS/SECOND:     7.71
# pveperf /var/lib/vz
CPU BOGOMIPS:      24745.20
REGEX/SECOND:      1982673
HD SIZE:           294.29 GB (/dev/mapper/pve-local)
BUFFERED READS:    24.40 MB/sec
AVERAGE SEEK TIME: 35.41 ms
FSYNCS/SECOND:     17.45
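For reference, the FSYNCS/SECOND figure can be cross-checked outside pveperf with a small fio run like the sketch below (fio is not installed by default; the directory, size and runtime are just example values):

Code:
apt-get install fio
# sync-heavy random-write test; the reported write IOPS roughly
# corresponds to pveperf's FSYNCS/SECOND
fio --name=fsync-test --directory=/var/lib/vz --size=256M \
    --bs=4k --rw=randwrite --ioengine=psync --fdatasync=1 \
    --runtime=30 --time_based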

Thanks :)
 
Alright, but are these bad results only due to the low-performance disks, or is there something else going wrong with my system?

You are simply using the wrong disks; they are much too slow for a virtualization host.
 
You are simply using the wrong disks; they are much too slow for a virtualization host.

Well, let's buy a pair of Samsung SSDs to replace those slow spinning platters then :D! Thanks for your feedback... I'm ordering them right now.
 
I wouldn't buy those, from what I've read. Well, it depends on your config. I have two Samsung SSDs and have had to take them out.
 
I wouldn't buy those, from what I've read. Well, it depends on your config. I have two Samsung SSDs and have had to take them out.

What's wrong with them? They have great reviews on Amazon. Which ones would you recommend instead?

Is it possible to direct pveperf to test a specific HDD (other than /dev/mapper/pve-root)?

I only have a single virtual disk showing up since I am using a hardware RAID controller.
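pveperf simply benchmarks the filesystem that contains the path you give it (as with the /var/lib/vz run above), so one way is to mount the disk you want to test and point pveperf at that mount point. A sketch, where the device name and mount point are just examples:

Code:
# mount the disk/array you want to test somewhere temporary
mkdir -p /mnt/bench
mount /dev/sdb1 /mnt/bench
pveperf /mnt/bench
umount /mnt/bench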
 
openkiwi, basically there are two reasons: if you ever want to use ZFS, and, from what I've read, a consumer-grade SSD will age very quickly under a VM workload. There is all sorts of information out there, all in pieces, but in my few months running Proxmox I see this coming up over and over. I've had a thread running which talks about my situation - but it's still not great, to be honest: still high I/O delay up to 50%, and it locks up a bit when doing copies. The solutions all seem to be to buy expensive SSDs. And they really are expensive - at least where I live in NZ, anyway. There are some good hardware sites that cover this - www.servethehome.com was one of them.
 
openkiwi, basically there are two reasons: if you ever want to use ZFS, and, from what I've read, a consumer-grade SSD will age very quickly under a VM workload. There is all sorts of information out there, all in pieces, but in my few months running Proxmox I see this coming up over and over. I've had a thread running which talks about my situation - but it's still not great, to be honest: still high I/O delay up to 50%, and it locks up a bit when doing copies. The solutions all seem to be to buy expensive SSDs. And they really are expensive - at least where I live in NZ, anyway. There are some good hardware sites that cover this - www.servethehome.com was one of them.

Good to know... I will not use ZFS though, since I have a hardware RAID controller and want to keep all the RAM for the VMs. I think I have no choice but to try, since I don't want to buy enterprise-grade SSDs at their premium price.
 
OK - depending on your workload and the size of your SSDs, you might want to tighten your trim schedule then. In the distros I've used it runs weekly, which may not be enough. If it isn't trimmed - you guessed it - performance suffers. A lot.
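On a stock Debian/Proxmox install you can check when the periodic trim last ran and trigger one by hand, for example (a sketch; this assumes the systemd fstrim.timer is used rather than a cron job):

Code:
# show whether the periodic trim timer is enabled and when it last fired
systemctl status fstrim.timer
# trim all mounted filesystems that support it, right now
fstrim -av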
 
If it isn't trimmed - you guessed it - performance suffers. A lot.

That is one aspect - the main aspect is that consumer-grade SSDs will fail VERY fast. I tried a 750 EVO on a single node (a laptop) with ZFS and PVE, and the wearout indicator was down to 40% after 2 months. The write amplification is huge for those drives, so be warned.
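If you want to keep an eye on this yourself, the wear counters are visible via smartctl. A sketch - the attribute names differ per vendor (e.g. Media_Wearout_Indicator on Intel, Wear_Leveling_Count on Samsung) and /dev/sda is just an example:

Code:
apt-get install smartmontools
# list the vendor-specific SMART attributes and pick out the wear/written counters
smartctl -A /dev/sda | grep -i -e wear -e written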

Another aspect is the already poor performance of consumer-grade SSDs compared to enterprise SSDs; please refer to this excellent article:

http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/
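The test described there is essentially about small synchronous writes (O_DIRECT + O_DSYNC). A rough approximation against a throwaway file - path and sizes are just examples, and the result is only indicative - looks like this:

Code:
# rough sync-write check (writes ~40 MB to a temporary file)
dd if=/dev/zero of=/var/lib/vz/ddtest bs=4k count=10000 oflag=direct,dsync
rm /var/lib/vz/ddtest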
 
I would suggest possibly looking at used or refurbished enterprise SSDs.

I have three Proxmox nodes which use SSDs:

R620 - 1 GB PERC H710P - 6x Intel DC S3610 400 GB SSDs in RAID 5 (purchased 10 total, refurbished from Newegg with a 3-year warranty, $89 each)
R620 - 1 GB PERC H710P - 8x Samsung 850 Pro 256 GB SSDs in RAID 5 (purchased new)
LGA1366 self-built system - 1 GB LSI 9265-8i - 4x HP consumer 500 GB SSDs in RAID 5 (purchased new)

When I set up the RAID volume I leave an extra 10% of space unallocated (this may be of importance, especially with consumer drives).

I can say that the Intel enterprise drives perform by far the best and, according to the SMART information, show almost full life left. Outside of the 6 used in the R620 Proxmox host, I use 4 of them in my FreeNAS SAN setup as RAID 0, presented over 10-gigabit SFP+ iSCSI multichannel to my 5-node Proxmox cluster (yes, RAID 0... non-HA VMs, with full backups). Prior to the Intel drives, I used the Samsung 850 Pros, and they always start off with good performance but fade within a year. In my original SAN/NAS, within a year they performed worse than my HDD array.

This may have to do with garbage collection or insufficient over-provisioning... however, since switching to the Intel DC S3610, I have had no issues and performance is what I would expect. My plan is to convert the remaining two nodes over to enterprise SSDs... I just need an extra $1400... I accept donations :) HAHAHA

Again, on Newegg the 400 GB version with a 3-year warranty is only $89 USD, and so far I've had no issues with any of the 10 drives I have purchased (running for about 1 year).

My output from the R620 - 2x E5-2620 and 128 GB memory.
The server is currently running about 10 VMs (Windows, Linux, FreeBSD) - output of pveperf below:

Code:
root@proxmox4:~# pveperf
CPU BOGOMIPS:      96023.40
REGEX/SECOND:      1291586
HD SIZE:           93.99 GB (/dev/mapper/pve-root)
BUFFERED READS:    1486.57 MB/sec
AVERAGE SEEK TIME: 0.15 ms
FSYNCS/SECOND:     4013.39
DNS EXT:           277.50 ms
DNS INT:           2.10 ms (shaulskiy.lan)
root@proxmox4:~# pveperf /var/lib/vz
CPU BOGOMIPS:      96023.40
REGEX/SECOND:      1314946
HD SIZE:           93.99 GB (/dev/mapper/pve-root)
BUFFERED READS:    1433.08 MB/sec
AVERAGE SEEK TIME: 0.16 ms
FSYNCS/SECOND:     3823.72
DNS EXT:           301.41 ms
DNS INT:           2.06 ms (shaulskiy.lan)
 
