Trying to install Proxmox with RAID

Better speed after replacing the fstab entry /dev/pve/data /var/lib/vz ext3 defaults 0 1 (RAID-1) with
/dev/xxl/local2 /var/lib/vz ext3 defaults 0 1 (RAID-10):

Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      89372.97
REGEX/SECOND:      1202136
HD SIZE:           823.49 GB (/dev/mapper/xxl-local2)
BUFFERED READS:    362.19 MB/sec
AVERAGE SEEK TIME: 6.53 ms
FSYNCS/SECOND:     2934.65
DNS EXT:           46.23 ms
DNS INT:           3.53 ms (dataassure.com)
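
For reference, the change above is just the /var/lib/vz line in /etc/fstab (remount or reboot afterwards); a minimal sketch, assuming the xxl/local2 LV already exists and carries an ext3 filesystem:

Code:
# old entry (default data LV on the RAID-1)
#/dev/pve/data   /var/lib/vz  ext3  defaults  0  1
# new entry (LV on the RAID-10 volume group)
/dev/xxl/local2  /var/lib/vz  ext3  defaults  0  1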
 
Hi, mine's similar, with PVE on an SSD (sda) and data on sdb, a PERC 6/i RAID10 (2x2x2) made of six 2 TB Seagate/Hitachi SATA II drives @ 7200 rpm.

Replace the current /var/lib/vz? If I make that change, will it affect anything in the Proxmox environment or not?
I wouldn't bother to move or replace that directory per se. I found it just as easy to add additional LVM storages for the virtual disks and leave the local storage on its own.
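
For what it's worth, a rough sketch of what adding such an LVM storage could look like, assuming the RAID-10 array is /dev/sdb; the storage ID raid10lvm is made up, and the vgname just reuses the xxl volume group from the post above:

Code:
# create the VG if it doesn't exist yet
pvcreate /dev/sdb
vgcreate xxl /dev/sdb

# then declare the storage in /etc/pve/storage.cfg
lvm: raid10lvm
        vgname xxl
        content images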

Since mine keeps ISOs on an NFS storage, for example, I found that I had to edit /etc/pve/storage.cfg so that the local storage section's content line no longer included the ,iso entry; otherwise I had an extra and useless default entry in the web interface's 'iso storage' drop-down menu when I changed a VM's CD.
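
For illustration, the local section in /etc/pve/storage.cfg would then read roughly like this (a sketch; the exact content list depends on what you actually keep on local storage):

Code:
dir: local
        path /var/lib/vz
        content images,vztmpl,rootdir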

As for disk I/O, I thought your higher-rpm drives would do better than mine; mine are only 7200 rpm and they're not SAS.
Mine don't carry any PVE traffic, since that's on another SATA channel entirely; maybe that has a greater effect than we give it credit for.
The array is built with the default 64 KB stripe size, adaptive read-ahead and write-back, since there's a battery on the controller.

I could be wrong, but I don't think a backplane would make much of a difference.
What's left to consider? PCIe? Mine's not on the PCIe bus; there's a riser just for it.
SATA cables? No, they're SAS.
Do SAS cables have throughput ratings like SATA cables do? I've never heard of that.

This is pveperf for a temporary LV on that array, mounted at /mnt/tmp, on an idle host without the irqbalance daemon.
Code:
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      815116
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    308.32 MB/sec
AVERAGE SEEK TIME: 7.22 ms
FSYNCS/SECOND:     2425.52
DNS EXT:           2002.83 ms
DNS INT:           2002.65 ms (cluster.sss.local)
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      806488
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    314.94 MB/sec
AVERAGE SEEK TIME: 7.37 ms
FSYNCS/SECOND:     2456.48
DNS EXT:           2002.41 ms
DNS INT:           2002.23 ms (cluster.sss.local)
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      824835
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    315.17 MB/sec
AVERAGE SEEK TIME: 7.18 ms
FSYNCS/SECOND:     2439.31
DNS EXT:           2002.81 ms
DNS INT:           2002.60 ms (cluster.sss.local)
Bascule:~# pveperf /mnt/tmp
CPU BOGOMIPS:      53199.93
REGEX/SECOND:      817236
HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
BUFFERED READS:    312.32 MB/sec
AVERAGE SEEK TIME: 7.27 ms
FSYNCS/SECOND:     2483.49
DNS EXT:           2002.53 ms
DNS INT:           2002.69 ms (cluster.sss.local)
Bascule:~#
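
For anyone who wants to repeat the test: a throwaway LV like the one above can be set up roughly as follows (a sketch; the VG name LDatastore0 and the size come from the output, the filesystem and mount point are assumptions):

Code:
lvcreate -L 20G -n testlv LDatastore0
mkfs.ext3 /dev/LDatastore0/testlv
mkdir -p /mnt/tmp
mount /dev/LDatastore0/testlv /mnt/tmp
pveperf /mnt/tmp
# clean up afterwards
umount /mnt/tmp
lvremove /dev/LDatastore0/testlv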
 
Thank you for your and udo's replies.

I wouldn't bother to move or replace that directory per se. I found it just as easy to add additional LVM storages for the virtual disks and leave the local storage on its own.

If I do this, I won't have the option to create OpenVZ containers on /dev/sdb (RAID-10). The only option for them will be the original "local" storage, which is too small... I could use the large LVM storage only for KVMs.
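
Just thinking out loud, since OpenVZ needs a directory storage rather than LVM: one option might be to format an LV on the RAID-10 array, mount it, and declare it as a dir storage with rootdir content. A sketch only; the LV name ctdata, the mount point and the storage ID are made up:

Code:
mkfs.ext3 /dev/xxl/ctdata
mkdir -p /srv/ctdata
mount /dev/xxl/ctdata /srv/ctdata

# /etc/pve/storage.cfg
dir: raid10dir
        path /srv/ctdata
        content rootdir,vztmpl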

Since mine keeps ISOs on an NFS storage, for example, I found that I had to edit /etc/pve/storage.cfg so that the local storage section's content line no longer included the ,iso entry; otherwise I had an extra and useless default entry in the web interface's 'iso storage' drop-down menu when I changed a VM's CD.

That is my plan too, to keep the ISOs on an NFS storage. Good point about /etc/pve/storage.cfg.


As for disk I/O, I thought your higher-rpm drives would do better than mine; mine are only 7200 rpm and they're not SAS.
Mine don't carry any PVE traffic, since that's on another SATA channel entirely; maybe that has a greater effect than we give it credit for.
The array is built with the default 64 KB stripe size, adaptive read-ahead and write-back, since there's a battery on the controller.

I could be wrong, but I don't think a backplane would make much of a difference.
What's left to consider? PCIe? Mine's not on the PCIe bus; there's a riser just for it.
SATA cables? No, they're SAS.
Do SAS cables have throughput ratings like SATA cables do? I've never heard of that.

The only thing that comes to my mind is the speed limit of the PERC 6/i - 3 Gbps, compared with the HDDs, which are 6 or 8 Gbps. Proxmox itself is installed on RAID-1, but with only 2 drives. Can that affect the whole thing?... I don't know.
 
I won't have the option to create OpenVZ containers on /dev/sdb (RAID-10). The only option for them will be the original "local" storage, which is too small... I could use the large LVM storage only for KVMs.

Yeah, I didn't know about that limitation, I only use KVMs, sorry.

On that, all I can suggest is to avoid using symlinks in the /var/lib/vz directory; that caused me headaches some time ago with host backups.
 
