Choosing best hardware for Proxmox

dsh

Well-Known Member
Jul 6, 2016
Hello, I need to know what I should do to get the best IO performance from this server. The guests are a Linux container running PostgreSQL and a Windows Server 2012 R2 VM running MSSQL.
Here is my server specs:
CPU: Xeon E5-2620 v4
RAM: 64 GB DDR4 ECC

First I tried a ZFS striped mirror (RAID 10 equivalent) on 4 enterprise SATA 7200 rpm HDDs.

It was horribly slow.

fsyncs/second = 80

Then I added an SSD as a log device (SLOG):

fsyncs/second went to 120.

Eventually, I replaced all the devices with 4 x 1 TB Mushkin Reactor SSDs. Now fsyncs/second is around 500.
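(For reference, a pool like this is typically created and benchmarked roughly as follows; the fsync figures above are the kind of numbers pveperf reports, and the device and pool names below are placeholders, not my actual disks.)

Code:
# Placeholder device names -- a striped mirror ("RAID 10 equivalent") of four disks:
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# Optionally add an SSD as a separate log device (SLOG) for sync writes:
zpool add tank log /dev/sde
# Measure the fsync rate on the pool's mountpoint (FSYNCS/SECOND in the output):
pveperf /tank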

Much better. But I still don't know if I'm getting the best performance, especially after reading this on https://pve.proxmox.com/wiki/Raid_controller:

"
Single SATA WD 400GB: 1360.17
3 x 15K rpm sas RAID5 with write-through: 159.03 (YES, only 159!)
same as above but with write-back enabled: 3133.45
"

It seems like hardware RAID increases performance a lot, so I'm planning to buy an LSI MegaRAID SAS 9271 with 1 GB cache and a BBU.
The question is: if I use hardware RAID 10 with battery-backed write-back cache, is there a difference between using SSDs or HDDs?

I don't need the high sequential speed of SSDs, I just need very high IO (random) performance.

Your input will be greatly appreciated.
 
An SSD is not particularly good at sequential speeds. An SSD is good at random access and IOPS.
Do you think 4 SSDs in hardware RAID 10 will have better IOPS than a ZFS striped mirror? Or overall better performance for a database?
 
Check the question you answered once again:

"...Do you think 4 SSD on Hardware RAID 10 will have better IOPS than ZFS striped mirror?..."

The question is not "ssd vs hdd raid iops", but "hw-raid vs sw-raid iops". You answered "most probably", so I wonder if there is some test proving that hardware RAID 10 really does "most probably" have better IOPS than a ZFS striped mirror (which is basically software RAID 10 plus a filesystem).
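A more direct way to settle it would be to run the identical random-write benchmark on both arrays, e.g. a minimal fio job along these lines (target directory and size are placeholders):

Code:
# Identical random-write job for both setups; compare the reported IOPS.
# Target directory and size are placeholders. --direct is left out because
# ZFS datasets may not support O_DIRECT.
fio --name=randwrite --directory=/mnt/testvol --rw=randwrite --bs=4k \
    --size=2G --iodepth=32 --ioengine=libaio --fsync=1 \
    --runtime=60 --time_based --group_reporting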

And to express my gratitude for teaching me how to use Google, here is my payback:
https://lmgtfy.com/?q=hw+vs+sw+raid+iops
 
I thought you were talking about your ZFS setup. Well, then you can find your answer in your Google query.

Jonas
 
"...Do you think 4 SSD on Hardware RAID 10 will have better IOPS than ZFS striped mirror?..."

Raw, benchmarkable performance is better on the hardware RAID than on ZFS, but real life tells another story:
If you put your hardware RAID controller in an older box, it will easily outperform ZFS, because ZFS runs in your system memory and on your CPUs, while the hardware RAID controller brings its own memory and CPU. RAID 10 is not that expensive in CPU terms, so the hardware RAID controller has no real benefit there. If you have a big machine with a lot of RAM and a lot of fast cores to give to ZFS, the system will easily outperform the hardware RAID controller just by sheer numbers: with 64 GB of RAM for ZFS vs. 1 GB on the controller, the controller can hardly keep up. So for your example, 64 GB for ZFS and the VMs is OK, but more would be better.
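If memory is the deciding factor, you can also cap the ARC explicitly so ZFS and the guests don't fight over RAM. A minimal sketch for ZFS on Linux; the 16 GiB value is just an example:

Code:
# Cap the ARC at 16 GiB (value in bytes, example only):
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
# On Proxmox with root on ZFS, refresh the initramfs so the cap applies at boot:
update-initramfs -u
# Verify the active limit after reboot:
cat /sys/module/zfs/parameters/zfs_arc_max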

In general (hardware concerns aside) it's still hard to say, because ZFS can do so much more than just RAID. For example, with compression enabled you reduce the number of IOPS needed and increase overall throughput while storing less data on disk. Also, the data on disk is safer with ZFS than with any ordinary RAID controller (one of the great features of ZFS). And if you need to clone a machine, it is "really" cloned, not just copied, so you need a handful of IOPS instead of the millions it takes to copy all the data.

E.g. for an Oracle 12 database you get very good compression and therefore very good throughput:

Code:
root@proxmox4 ~ > du -m /rpool/data/subvol-7026-disk-1/opt/oradata/DB01/datafile/
136     /rpool/data/subvol-7026-disk-1/opt/oradata/DB01/datafile/

root@proxmox4 ~ > du -m --apparent-size /rpool/data/subvol-7026-disk-1/opt/oradata/DB01/datafile/
3129    /rpool/data/subvol-7026-disk-1/opt/oradata/DB01/datafile/
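The same effect can be read straight from the dataset properties (using the dataset from the du output above):

Code:
# Compression setting and achieved ratio for the dataset shown above:
zfs get compression,compressratio rpool/data/subvol-7026-disk-1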

The 1 GB write buffer on the RAID controller is also negligible, because you can expect random write performance in the same range from the SSDs themselves.
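To illustrate the cloning point above: a ZFS clone is a copy-on-write reference to a snapshot, so it needs only a handful of IOPS no matter how big the dataset is. A sketch with placeholder dataset names:

Code:
# Snapshot the container's dataset, then clone it -- both are copy-on-write
# operations and near-instant, no bulk data is copied. Names are placeholders.
zfs snapshot rpool/data/subvol-7026-disk-1@base
zfs clone rpool/data/subvol-7026-disk-1@base rpool/data/subvol-7027-disk-1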
 
As you said, ZFS is "much more" than software RAID. Based on that, it is maybe more accurate to compare hw-raid with mdadm. But even then it is not easy to determine which is better (from a performance point of view).

Even with hw-raid you can see huge differences in performance depending on the controller type, even from the same vendor. On the other hand, you can put ZFS on top of a hw-raid array as well and reduce the number of IOPS using compression.
 
On the other hand, you can put ZFS on top of a hw-raid array as well and reduce the number of IOPS using compression.

Yes, but you'll only be fooling yourself. ZFS loses its benefits for silent data corruption detection and atomicity, and the ZFS developers themselves strictly advise against doing it.
 
Thanks for the input, guys.

With the current configuration I've disabled the ZFS sync feature and fsyncs/s went to a whopping 24000 (I know it's unsafe, just for the test).
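(For reference, that test is just toggling the pool's sync property and switching it back afterwards; the pool name below assumes the default rpool.)

Code:
# Unsafe outside of testing: sync writes are acknowledged before hitting disk.
zfs set sync=disabled rpool
pveperf /rpool
# Revert to the safe default afterwards:
zfs set sync=standard rpool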

In practice, though, I didn't feel a 50x improvement, or even a 1.5x improvement.

Like LnxBill, I'd like to keep using ZFS to protect against silent data corruption.

I guess my best move would be to buy an Intel DC S3710 for the SLOG.
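Attaching it as a SLOG should be roughly this (the by-id path is a placeholder for the actual drive):

Code:
# Attach the SSD (or a partition of it) as a separate log device for sync writes.
# The by-id path is a placeholder for the actual Intel DC S3710.
zpool add rpool log /dev/disk/by-id/ata-INTEL_SSDSC2BA200G4_XXXXXXXX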
 
