RAID SATA / SAS / other tips: advice needed

mmenaz

Hi, I've had a Fujitsu TX200 to experiment with for a bit, and got some confusing results.
It has:
- Intel Xeon CPU E5520 @ 2.27GHz
- 12GB ram
- RAID controller: LSI Logic MegaRAID SAS 1078 with 512MB cache and BBU (write back)
CPU BOGOMIPS: 36266.75
REGEX/SECOND: 640637

proxmox:~# pveversion -v
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.24-11-pve
proxmox-ve-2.6.24: 1.5-23
pve-kernel-2.6.24-11-pve: 2.6.24-23
pve-kernel-2.6.18-2-pve: 2.6.18-5
qemu-server: 1.1-16
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1
proxmox:~#

With RAID5 and 3x SAS 146GB 15K Seagate 6Gb/s HDs I got:
proxmox:~# pveperf
HD SIZE: 66.93 GB (/dev/mapper/pve-root)
BUFFERED READS: 301.54 MB/sec
AVERAGE SEEK TIME: 4.69 ms
FSYNCS/SECOND: 3322.93

With RAID5 but 3x "normal" SATA HDs, 500GB:
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 235.00 MB/sec
AVERAGE SEEK TIME: 8.58 ms
FSYNCS/SECOND: 3374.63

And with RAID10, 4x "normal" SATA HDs, 500GB:
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 245.32 MB/sec
AVERAGE SEEK TIME: 8.71 ms
FSYNCS/SECOND: 3305.09

So my surprises are (considering FSYNCS/SECOND):
a) SATA RAID5 is as fast as SAS, even though its access time is double
b) SATA RAID10 is not faster than SATA RAID5
Is FSYNCS/SECOND a good indication of "real life" performance? Does it take write speed into account as well? How do you test performance in a reliable way?

In addition I'm wondering: what about having a "normal" SATA HD for the Proxmox system installation and "basic" local storage, and then adding RAID storage (SAS or SATA) as LVM? Would I save "precious" fast HD space without compromising performance, or is Proxmox better run on a fast HD too?

And what about having an SSD drive instead of a RAID controller + "n" fast, expensive drives? It should be cheaper and have much better performance (near-zero access time, high sustained transfer rate, etc.). OK, no RAID = more risk, but for some installations I could live with a simple restore from the last daily backup.
And since I use kernel 2.6.24 (waiting for .32 OpenVZ support), what about converting to ext4? Would it manage I/O better in general and stress an SSD less if I go the SSD route?
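(For reference, a rough sketch of the usual in-place ext3-to-ext4 conversion; the device name is just an example, the volume must be unmounted, and files written before the conversion keep the old ext3 layout until they are rewritten:)
tune2fs -O extents,uninit_bg,dir_index /dev/mapper/pve-data
e2fsck -fD /dev/mapper/pve-data
# then change "ext3" to "ext4" for that entry in /etc/fstab and remount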

Finally, I would love to be able to boot and install directly with the 2.6.24 kernel. I had problems with 2.6.18 and disk order (the RAID was sdb and the 1TB SATA backup disk was sda! Kernel 2.6.24 put them in the correct order, so I had to install with the 1TB SATA unplugged, upgrade the kernel and re-plug it), and I can imagine hardware that only boots with recent kernels.
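(One thing that helps with the sda/sdb shuffling, at least for the data/backup disk: mount by UUID instead of device name. The mount point and UUID below are only placeholders; it doesn't fix the installer itself, but the disks can then change names between kernels without breaking mounts:)
blkid
# then in /etc/fstab, something like:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/backup  ext3  defaults  0  2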

Thanks a lot in advance for tips and clarifications, and sorry for the perhaps too-long post.
Marco Menardi
 
I won't comment on pveperf, but I have noticed that SAS disks cope with more concurrent load than SATA disks. I don't have numbers, that is just my "feeling". Think of it like a highway: the SAS disks will let 50 fast cars on whereas the SATA disks will only let 10 on. Terrible analogy :)

If you don't have a SAN, then for resilience and simplicity I would just install it all on the RAID. Personally, go RAID 10 over as many small but fast SAS disks as you can get and you won't regret it. I do think it is worth popping in a couple of SATA disks to hold nightly backups before they get rsynced/archived offsite.

With either SAS or SATA you are likely to hit an IO bottleneck (and remember that VMs don't exactly fly in terms of IO :)); at least with SAS you won't be thinking "oh, I wish I had gone with SAS". No experience with SSD.

(usual disclaimer - if someone else with a different opinion sounds more convincing then believe them :))
 
Please note, pveperf is just a very, very basic performance tool. The constant fsync/sec across your arrays shows that the RAID controller is fine (its battery-backed write cache is absorbing the syncs). If you have the choice, always go for SAS disks! IO is the most important factor, always.
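(If you want to see roughly what the fsync figure measures, something like this approximates it; the file name is just an example, and the block count divided by the elapsed time gives syncs per second:)
cd /var/lib/vz
dd if=/dev/zero of=fsync-test.bin bs=4k count=1000 oflag=dsync
rm fsync-test.bin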

We have no real-life experience with high load on SSD drives in RAID; we did just some very basic tests with single Samsung and Intel (both MLC) SSD drives.

If you want to install with 2.6.24 you can still use the 1.4 ISO image and then upgrade to the latest 2.6.24.
 
Hi Tom,
I have the 1.5 CD but it installs 2.6.18, or am I missing something? I know I can upgrade later (of course that is what I did), but the problem is that sometimes the 2.6.18 kernel does not work fine (like the sda/sdb swap, i.e. I have disk A and disk B, 2.6.18 sees A = sdb and B = sda, while 2.6.24 sees A = sda and B = sdb; so I had to remove B, install with 2.6.18, upgrade to 2.6.24, then re-connect B).
As I stated in my original post, I'm interested in an SSD with NO RAID, just a single SSD like the ones you tested. What were your feelings about SSD? What about ext3 vs ext4?
Thanks
 
It should be so, but since the RAID controller is very powerful and smart, I have the feeling it is able to make 50 cars go through on SATA too.
In addition, if you want high capacity, SAS costs A LOT more than SATA.
I'm sure that SAS (4 ms access time, 15K rotation speed) MUST be better than SATA (8 ms access time, 7.2K rotation), but I'm wondering when I would really appreciate the difference (I have no high-profile customers). Maybe the WD VelociRaptor could be the "intermediate" way.
Thanks a lot for sharing your experience / feelings :)
 
Hi Tom,
I have the 1.5 CD but it installs 2.6.18, or am I missing something?

yes, you missed that I talked about the 1.4 ISO (with Kernel 2.6.24).

I know I can upgrade later (of course that is what I did), but the problem is that sometimes the 2.6.18 kernel does not work fine (like the sda/sdb swap, i.e. I have disk A and disk B, 2.6.18 sees A = sdb and B = sda, while 2.6.24 sees A = sda and B = sdb; so I had to remove B, install with 2.6.18, upgrade to 2.6.24, then re-connect B).
As I stated in my original post, I'm interested in an SSD with NO RAID, just a single SSD like the ones you tested. What were your feelings about SSD? What about ext3 vs ext4?
Thanks

I think we will move to ext4 in Proxmox VE 2.x. OpenVZ works best and is most tested with ext3; ext4 is still not that common yet.

About SSD: we have no production system with SSD so I cannot give real-life reports, only test-lab results. And yes, WD VelociRaptors are OK.
 
...
So my surprises are (considering FSYNCS/SECOND):
a) SATA RAID5 is as fast as SAS, even though its access time is double
b) SATA RAID10 is not faster than SATA RAID5
Is FSYNCS/SECOND a good indication of "real life" performance? Does it take write speed into account as well? How do you test performance in a reliable way?
Hi,
pveperf mostly tests read performance, and that is also good with RAID5. But write performance is much better with RAID10 than with RAID5.
I had a RAID10 with four WD Raptor drives in production (now only for testing); the difference compared to a four-drive SAS RAID10 (Hitachi drives) is very big, especially for databases (MySQL).
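(If you want numbers for that rather than feelings, a short random-write run with a tool like fio shows the RAID5 vs RAID10 write difference quite clearly; the path and size below are only examples:)
fio --name=randwrite --filename=/var/lib/vz/fio-test.bin --size=2G --bs=4k --rw=randwrite --direct=1 --runtime=60
rm /var/lib/vz/fio-test.bin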
In addition I'm wondering: what about having a "normal" SATA HD for the Proxmox system installation and "basic" local storage, and then adding RAID storage (SAS or SATA) as LVM? Would I save "precious" fast HD space without compromising performance, or is Proxmox better run on a fast HD too?
I run the Proxmox system on a SATA RAID1 and pve-data on a SAS RAID10, with good experiences.
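(In case it helps, the second array only needs to be turned into an LVM volume group, which can then be added as LVM storage from the web interface. The device and VG names below are just examples:)
pvcreate /dev/sdb
vgcreate vg_sas10 /dev/sdb
vgs    # the new volume group should show up here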

And what about having an SSD drive instead of a RAID controller + "n" fast, expensive drives? It should be cheaper and have much better performance (near-zero access time, high sustained transfer rate, etc.). OK, no RAID = more risk, but for some installations I could live with a simple restore from the last daily backup.
And since I use kernel 2.6.24 (waiting for .32 OpenVZ support), what about converting to ext4? Would it manage I/O better in general and stress an SSD less if I go the SSD route?
I have tested an SSD (Samsung MLC), not with Proxmox but as a spool disk for backups (Bacula). The results were very bad: with concurrent writes and reads the performance is much slower than with a normal disk (in the beginning the performance was OK, but this changed after a short time)!!
In two weeks I can check the performance with Intel SLC drives (they will also be used as backup spool disks). I will report my results in this forum.

Udo
 
I have tested an SSD (Samsung MLC), not with Proxmox but as a spool disk for backups (Bacula). The results were very bad: with concurrent writes and reads the performance is much slower than with a normal disk (in the beginning the performance was OK, but this changed after a short time)!!

Probably because you can't access the TRIM functionality without special software (drivers) and/or your HBA doesn't support SSDs properly.
 
Probably because you can't access the TRIM functionality without special software (drivers) and/or your HBA doesn't support SSDs properly.
Hi,
that's right, but the controller of the SSD should take care of TRIM, not the OS (or the RAID controller). There are controllers announced which support this, but I don't know if they are available yet.

Udo
 
Of course the SSD has to support TRIM.

But AFAIK and AFAIR, with Corsair SSDs for example, the OS still has to send those TRIM commands to the SSD itself.
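(For what it's worth, on newer kernels with ext4 the OS side of TRIM can be enabled per mount with the "discard" option; this is only an illustrative fstab line with placeholder device and mount point, not something the 2.6.24 kernel discussed here supports:)
/dev/sda1  /var/lib/vz  ext4  defaults,discard  0  2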
 
