What stripe size for RAID10 with 4 SAS2 HDDs?

Andre Köhler

Jan 9, 2012
Hattingen, Germany
Hi all,
I'm a KVM noob, hence this question...

What stripe size would be right for my 4-disk SAS HDD RAID 10 under Proxmox VE with 4 guests (Win XP, Win 2003 Server, SBS 2003, Win 2008 Server)?

Is the size of the .raw files the deciding factor for the stripe size (the raw files are 100 - 2000 GB each), i.e. big raw file = big stripe size?

Or, since these are Windows guests, is a small stripe size better?

Does KVM write in small blocks or in big blocks, and does that mean a small or a big stripe size?

What do you think is better?

The guest systems are used as test server / file server / mail server.

I would like fast reading and writing.

Thanks for your help
 
Thanks for that good post.
There is a 3ware RAID controller installed, so my question here is: with what stripe size should I initialize the RAID in the first place?
Your post covers the filesystem (important, too), but not the RAID controller setting, or does it?

So my question is which stripe size is right for the RAID controller. After that I start tuning ext3 on this RAID like in your post, Uwe, right?

It's not easy :) I love programming in Delphi on Windows... but this stuff... ;) Still, I would like to (in fact I have to) understand it.

So thanks for your help
 
Hi,
I use a stripe size of 32k for a volume set of 4 SAS disks as RAID10 on an Areca 1222 controller.
Performance is OK:
Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      27292.95
REGEX/SECOND:      1095675
HD SIZE:           543.34 GB (/dev/mapper/pve-data)
BUFFERED READS:    443.78 MB/sec
AVERAGE SEEK TIME: 5.77 ms
FSYNCS/SECOND:     5163.54
DNS EXT:           52.33 ms
DNS INT:           0.50 ms
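
If you ever create the ext3 filesystem on such a volume yourself, the stride/stripe-width values can be derived from the chunk size. Here is a minimal sketch, assuming a 32k chunk, 4k filesystem blocks and 2 data disks in the RAID10 (the device name /dev/pve/data and the numbers are only an example, and mkfs destroys everything on the volume, so only run it on a new, empty LV):

Code:
# stride = chunk size / block size = 32k / 4k = 8
# stripe-width = stride * number of data disks = 8 * 2 = 16
mkfs.ext3 -b 4096 -E stride=8,stripe-width=16 /dev/pve/data

With a 256k chunk the same calculation gives stride=64 and stripe-width=128.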
Udo
 
OK... let me check that I understand it:

On your controller settings your stripe size is 32k? (Here my stripe size is 256k, too much, right?)
On my controller I created a RAID 10 unit of 4 SAS2 HDDs with a stripe size of 256k. There you have 32k?

Did you do the FS tuning from your post below?

Here are my results:

Code:
root@proxmox1:~# pveperf /var/lib/vz
CPU BOGOMIPS:      54270.29
REGEX/SECOND:      1417247
HD SIZE:           49.22 GB (/dev/mapper/pve-data)
BUFFERED READS:    104.86 MB/sec
AVERAGE SEEK TIME: 10.18 ms
FSYNCS/SECOND:     3113.92
DNS EXT:           158.14 ms
DNS INT:           56.79 ms (vm.local)
root@proxmox1:~#
 
OK... let me check that I understand it:

On your controller settings your stripe size is 32k? (Here my stripe size is 256k, too much, right?)
I'm not sure whether your performance will be much better with 32k stripes, but with smaller stripes even small disk I/Os use more than one disk.
On my controller I created a RAID 10 unit of 4 SAS2 HDDs with a stripe size of 256k. There you have 32k?
Right.
Did you do the FS tuning from your post below?
I'm not sure - it was a while ago. I think I aligned the partitions.
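A simple way to check that is to look at the partition start sectors; a minimal sketch (the device name /dev/sda is an assumption - with 512-byte sectors the start values should at least be a multiple of the chunk size in sectors, e.g. 512 sectors for a 256k chunk):

Code:
# list the partitions with start/end in sectors
fdisk -lu /dev/sda
# or, with a newer parted:
parted /dev/sda unit s print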
Here are my results:

Code:
root@proxmox1:~# pveperf /var/lib/vz
CPU BOGOMIPS:      54270.29
REGEX/SECOND:      1417247
HD SIZE:           49.22 GB (/dev/mapper/pve-data)
BUFFERED READS:    104.86 MB/sec
AVERAGE SEEK TIME: 10.18 ms
FSYNCS/SECOND:     3113.92
DNS EXT:           158.14 ms
DNS INT:           56.79 ms (vm.local)
root@proxmox1:~#
Looks like not-so-fast SAS drives... there are SAS drives on the market which are only normal SATA disks with a SAS interface. Fast SAS drives have a seek time of approx. 4 ms.

Buffered reads of 100 MB/s are very low... were there any other I/Os during the test?

Udo
 
Andre has unusually little disk space (HD SIZE: 49.22 GB) divided over two disks.
I think that in this case the performance of the disks is very poor.
I use SAS 1 HDDs with a maximum of 3 Gbit/s on the interface. The normal seek time is about 5 ms; the reads are over 100 MB/s per disk.
Modern SAS2 15k HDDs are a little faster.
 
The HDDs are real SAS2 HDDs, not just SAS interfaces on SATA disks.
The 50 GB is because the rest of the space is on another LV in the pve volume group, which is synced by DRBD.
The DRBD is slow, too. But it can't be DRBD, because (as you see) the normal storage (not synced by DRBD) is slow as well...
Yes, you're right, Uwe. There were other I/O requests during the test, from the VM guests. I can't stop them, that's our production server.

Now I'm waiting for the initial DRBD sync so that I can start the guests on the other system and then set a new controller stripe size, in the hope that it will help.

Other ideas?

The VMs are on:
/dev/drbd0

The backing device for DRBD is /dev/pve/drbddata, with a size of 3.45 TB.
The pve volume group is on the RAID10 (normal Proxmox installation), but the normal Proxmox local storage path
/var/lib/vz is slow... why is that?
If /var/lib/vz were fast and only the DRBD were slow, OK, then I would say DRBD is at fault.
But as it is, I don't know why the local storage is so slow.
It's a Supermicro with 4 Hitachi SAS2 HDDs on a 3ware SAS controller.

The only thing that I think could be wrong is my stripe size of 256k.

Or? What should I do now?

Thanks for your fast help and your understanding for my noob questions
 
The HDDs are real SAS2 HDDs, not just SAS interfaces on SATA disks.
The 50 GB is because the rest of the space is on another LV in the pve volume group, which is synced by DRBD.
The DRBD is slow, too. But it can't be DRBD, because (as you see) the normal storage (not synced by DRBD) is slow as well...
DRBD is nice, but with 1 Gbit Ethernet it is not fast enough (for me). With 10 Gbit Ethernet or Dolphin NICs it's very useful (with InfiniBand I don't have experience yet).
But if I understand right, you use "partition -> vg -> lv -> drbd -> lv"?? Normally you take a real disk partition and use only "partition -> drbd -> lv".
Yes, you're right, Uwe.
BTW, my name is Udo, not Uwe.
There were other I/O requests during the test, from the VM guests. I can't stop them, that's our production server.
Look with "iostat -dm 5 sda" (apt-get install sysstat) for other I/Os.
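A minimal sketch of that check (the device name sda is an assumption, take your RAID unit); the MB_read/s and MB_wrtn/s columns show whether the guests are generating I/O while pveperf runs:

Code:
apt-get install sysstat
# print disk statistics for sda every 5 seconds, in megabytes
iostat -dm 5 sda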
Now I'm waiting for the initial DRBD sync so that I can start the guests on the other system and then set a new controller stripe size, in the hope that it will help.

Other ideas?

The VMs are on:
/dev/drbd0

The backing device for DRBD is /dev/pve/drbddata, with a size of 3.45 TB.
The pve volume group is on the RAID10 (normal Proxmox installation), but the normal Proxmox local storage path
/var/lib/vz is slow... why is that?
If /var/lib/vz were fast and only the DRBD were slow, OK, then I would say DRBD is at fault.
But as it is, I don't know why the local storage is so slow.
It's a Supermicro with 4 Hitachi SAS2 HDDs on a 3ware SAS controller.
Since I have 3 Supermicro boards in production, I will never use this brand again...
Hitachi HDDs are also used in my benchmark (HUS154545VLS300).
I can't say much about the 3ware controller (I have one, but for fast machines I use Areca). Do you have a BBU? Some RAID controllers need a fully charged BBU to enable write caching.
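If you want to check that on the 3ware controller, the tw_cli tool can show the cache and BBU state; a minimal sketch (the controller /c0 and unit /u0 numbers are assumptions, and the exact syntax may differ between tw_cli versions - check its documentation):

Code:
# show the units and their cache setting on the first controller
tw_cli /c0 show
# show the battery backup unit status
tw_cli /c0/bbu show all
# enable the write cache on unit 0 (only sensible with a healthy BBU)
tw_cli /c0/u0 set cache=on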
I strongly suggest using two DRBD resources (one for each node). If you get a split-brain situation (and sooner or later you will get one), it's much easier to resolve with separate volumes (e.g. node A has all VMs with IDs 100 to 199 and node B 200 to 299, and node A uses drbd0 and node B uses drbd1 - so you see directly whether a DRBD resource is free on one node, i.e. can be overwritten when resolving the split-brain); see the sketch below.
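A minimal sketch of how the two resources could look in /etc/drbd.conf (DRBD 8.3 style; the host names, backing devices, IP addresses and ports are assumptions and must match your setup):

Code:
resource r0 {        # VMs 100-199, normally primary on node A
  protocol C;        # replication protocol; adjust to your needs
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
resource r1 {        # VMs 200-299, normally primary on node B
  protocol C;
  on nodeA {
    device    /dev/drbd1;
    disk      /dev/sda5;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd1;
    disk      /dev/sda5;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}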
The only thing that I think could be wrong is my stripe size of 256k.

Or? What should I do now?

Thanks for your fast help and your understanding for my noob questions
I don't think the striping makes a big difference.
Do the performance testing on an idle host.

Udo
 
So first... sorry, Udo, for the wrong name!

The DRBD is on /dev/pve/drbddata. The normal local storage is on /dev/pve/data.
The DRBD uses protocol A, and network bonding (so 2 Gbit) is enabled.

But I think DRBD doesn't matter here, because the local storage /var/lib/vz is so slow.

That is not DRBD's fault. It must be something between the RAID controller, LVM and the filesystem, or something else... or not?

The DRBD is slow too... I'm not surprised by that. If the "normal space" is slow, then the synced space is too.

But why is the normal space so slow?
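
Would it make sense to compare a raw read from the RAID unit with a read through the LVM volume, something like this? A sketch - the device names are guesses on my side, and iflag=direct bypasses the page cache so the numbers reflect the disks:

Code:
# sequential read from the RAID unit itself
dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct
# the same through the LVM logical volume behind /var/lib/vz
dd if=/dev/pve/data of=/dev/null bs=1M count=4096 iflag=direct

If the first number is already low, the problem should sit below LVM (controller cache, BBU, disks), right?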
 
