Trying to install Proxmox with RAID

ressel

Well-Known Member
Mar 7, 2010
I'm trying to install Proxmox with hardware RAID on my server, but it doesn't find my 1+0 RAID.

When I have to choose the hard drive to install Proxmox on, it finds the single 1.5 TB disks, not the RAID made from my 4 x 1.5 TB drives.
It only finds the following disks: /dev/sda /dev/sdb /dev/sdc /dev/sdd

My RAID controller is a SiI 3124.

what to do?
 
It's NOT hardware RAID!!! It's a soft/fake RAID controller.
Get a new one if you want real RAID.
 
Yes, the controller is widely supported, but it's NOT real RAID; it uses software RAID. In Linux you can drive the controller's soft/fake RAID with the "dmraid" package.
You would be better off using Linux native software RAID. (The firmware in the controller's very small flash area probably does not perform as well as a full-blown software stack in Linux.)
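For what it's worth, inspecting and activating such a fakeraid set with dmraid looks roughly like this (a sketch; run as root):
Code:
# List the RAID sets described by the controller's on-disk metadata
dmraid -s
# Activate all discovered sets; they show up under /dev/mapper/
dmraid -ay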

Anyway, none of the mentioned methods are supported in Proxmox. You're just out of luck!
 
Is there some cheap, recommended RAID controller with 4 SATA ports that can run RAID 10?
And that works with Proxmox?
Hi,
unfortunately cheap != fast for RAID controllers (as far as I know).
And if you go for RAID 10, I think you want a fast controller. I have had good experiences with the following (German site, but I'm sure you can find it in your country too):
http://www.sona.de/.1494740244-Areca-ARC-1210-Raid-Controller-4-Port-intern

Works well with Proxmox - performance with 4 WD Raptors in RAID 10:
Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      20871.33
REGEX/SECOND:      833553
HD SIZE:           519.02 GB (/dev/mapper/pve-data)
BUFFERED READS:    298.48 MB/sec
AVERAGE SEEK TIME: 7.11 ms
FSYNCS/SECOND:     4422.45
DNS EXT:           68.84 ms
DNS INT:           1.78 ms
But perhaps someone else knows a good, cheap RAID controller?

Udo
 
If cost is an issue, try using software RAID (mdadm) with the on-board controller, because third-party drivers just complicate things. I have been using software RAID on many machines and it works great; I have not seen one that did not work, but it's slower than hardware RAID.
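A minimal sketch of the mdadm route for four disks (device names are assumptions, and this destroys their contents):
Code:
# Build a RAID 10 array from four bare disks (device names assumed)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Watch the initial resync progress
cat /proc/mdstat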
 
I installed Proxmox on a Dell R710 with RAID 10 (hardware - PERC 6/i) and added a storage following http://pve.proxmox.com/wiki/Storage_Model : /dev/sdb - LVM (836 GB). I'm asking myself if that sounds right. The system was installed on /dev/sda1 (RAID 1). Here is my mount output:


Code:
/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
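
For reference, the LVM part of that wiki step boils down to something like this (a sketch; the VG name "xxl" is the one in the lvdisplay output below):
Code:
# Turn the RAID 10 block device into an LVM physical volume
pvcreate /dev/sdb
# Create the volume group Proxmox can use as LVM storage
vgcreate xxl /dev/sdb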

Now, I did a speed test on the drives and to me it's not looking good:
Code:
V2:/opt# pveperf
CPU BOGOMIPS:      89372.21
REGEX/SECOND:      1114972
HD SIZE:           68.41 GB (/dev/mapper/pve-root)
BUFFERED READS:    114.37 MB/sec
AVERAGE SEEK TIME: 5.23 ms
FSYNCS/SECOND:     2472.41
DNS EXT:           38.17 ms
DNS INT:           18.65 ms (dataassure.com)

Code:
V2:/opt# lvdisplay
  --- Logical volume ---
  LV Name                /dev/xxl/vm-101-disk-1
  VG Name                xxl
  LV UUID                fRTb2U-Sh6d-5DHe-7Xth-kczh-1HCR-LWInkr
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                10.00 GB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:3

  --- Logical volume ---
  LV Name                /dev/pve/swap
  VG Name                pve
  LV UUID                KXXPSw-L7O1-1tSo-MtiX-Ob61-nBiR-dP5VgS
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                15.00 GB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  --- Logical volume ---
  LV Name                /dev/pve/root
  VG Name                pve
  LV UUID                NA68Wi-3SKK-DtWi-8J1c-qkuh-YM0f-cCiTD1
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                69.50 GB
  Current LE             17792
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Logical volume ---
  LV Name                /dev/pve/data
  VG Name                pve
  LV UUID                172Ufn-Crqi-h0w9-0ZBj-ZPgT-Glkg-rSGy4D
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                189.88 GB
  Current LE             48608
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2

Please advise.
 
...
Now, I did a speed test on the drives and to me it's not looking good:
...

This performance is OK. Why do you think it's not good?
 
This performance is OK. Why do you think it's not good?

I expected higher BUFFERED READS... I don't know why, but I did.
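For a second opinion on sequential reads I could run something like this (device name is an assumption):
Code:
# Time buffered sequential reads from the block device, without filesystem overhead
hdparm -t /dev/sdb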

Anyway, if the Master says it's OK, I will take that. Thank you.

My second issue/question is related to LVM. After I created the LVM volume group "xxl" on the RAID 10 (/dev/sdb), I want to add it to the system (not only as storage) so I can make use of it not only for KVM, but for OpenVZ too. So when I create a VM, I want to be able to choose the large storage, which is "xxl". Do I need to format the LVM volume, mount it, create a directory on it, and use the web interface --> Add Directory?

Code:
V2:/opt# vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
    Finding volume group "xxl"
  Found volume group "xxl" using metadata type lvm2
    Finding volume group "pve"
  Found volume group "pve" using metadata type lvm2

Code:
V2:/opt# vgdisplay
  --- Volume group ---
  VG Name               xxl
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               836.62 GB
  PE Size               4.00 MB
  Total PE              214175
  Alloc PE / Size       0 / 0
  Free  PE / Size       214175 / 836.62 GB
  VG UUID               C42uFN-bfIa-kRB1-Pt6n-hwSx-iyzk-jdUfyH

  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               278.37 GB
  PE Size               4.00 MB
  Total PE              71262
  Alloc PE / Size       70240 / 274.38 GB
  Free  PE / Size       1022 / 3.99 GB
  VG UUID               60Ce0p-K0xg-cdmt-8TUV-0GYW-Yv90-MpcfVA

Please advise. Thanks.
 
I expected higher BUFFERED READS... I don't know why, but I did.
How many disks do you have in the RAID 10, and what kind of disks?
Is the host calm (I/O-wise) during the test?
Perhaps the RAID controller isn't high-performance...
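To see whether the host is calm I/O-wise while pveperf runs, something like this helps (a sketch, assuming the sysstat package is installed):
Code:
# Extended per-device I/O statistics, refreshed every 5 seconds
iostat -x 5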
My second issue/question is related to LVM. ... Do I need to format the LVM volume, mount it, create a directory on it, and use the web interface --> Add Directory?
You can create a logical volume on your xxl VG (lvcreate -n openvz -L 100G xxl), create a filesystem on it (mkfs.ext3 /dev/xxl/openvz), and change the fstab to mount it at /var/lib/vz (first save the content of /var/lib/vz to the new filesystem).
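Spelled out, those steps look roughly like this (a sketch; the temporary mount point is an assumption):
Code:
# Create a 100 GB LV in the xxl VG and put ext3 on it
lvcreate -n openvz -L 100G xxl
mkfs.ext3 /dev/xxl/openvz

# Copy the existing data over before switching mounts (mount point assumed)
mkdir /mnt/newvz
mount /dev/xxl/openvz /mnt/newvz
cp -a /var/lib/vz/. /mnt/newvz/
umount /mnt/newvz

# fstab entry so the new LV is mounted at /var/lib/vz on boot
# (remove or repoint the old pve-data entry for /var/lib/vz first)
echo '/dev/xxl/openvz /var/lib/vz ext3 defaults 0 2' >> /etc/fstab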

Udo
 
How many disks do you have in the RAID 10, and what kind of disks? ...

You can create a logical volume on your xxl VG ... (first save the content of /var/lib/vz to the new filesystem).

Udo

The RAID 10 has "VG Size 836.62 GB" with 10k RPM drives, and the RAID controller is a PERC 6/i - http://accessories.dell.com/sna/pro...l.aspx?c=ca&l=en&s=dhs&cs=cadhs1&sku=341-7043

I/O is a bit busy, 4.87%, and CPU is below 1% during the test.
 
The RAID 10 has "VG Size 836.62 GB" with 10k RPM drives, and the RAID controller is a PERC 6/i. ...
Hi,
I meant not the VG size but which disks.
For example, the config on one server here:
4 * HUS154545VLS300 as RAID 10 with an Areca 1222
Code:
pveperf /var/lib/vz
CPU BOGOMIPS:      27292.87
REGEX/SECOND:      1091721
HD SIZE:           543.34 GB (/dev/mapper/pve-data)
BUFFERED READS:    489.82 MB/sec
AVERAGE SEEK TIME: 5.59 ms
FSYNCS/SECOND:     5526.61
DNS EXT:           97.08 ms
DNS INT:           0.52 ms
Your seek time also looks like SAS drives, so I guess it's the RAID controller.
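One thing worth checking on an LSI-based PERC is the logical drive's cache and read-ahead policy (a sketch, assuming Dell/LSI's MegaCli utility is installed):
Code:
# Show logical drive info; look at the Current Cache Policy line
# (write-back and read-ahead should normally be enabled with a healthy BBU)
MegaCli -LDInfo -Lall -aALL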

Perhaps someone who also uses a PERC 6/i can say more.

Udo
 
I meant not the VG size but which disks. ...
Your seek time also looks like SAS drives, so I guess it's the RAID controller.

Udo


The RAID 10 is built on 6 x 300 GB Seagate SAS 10k RPM drives, and the RAID 1 on 2 of the same drives. The RAID controller is a 4-channel SAS PERC 6/i.
The system is currently installed on the RAID 1, /dev/sda (correction: not RAID 5). The RAID 10 is /dev/sdb. So your suggestion is to create a logical volume on the RAID 10 (/dev/sdb) and replace the current /var/lib/vz? If I make that change, will it affect anything in the Proxmox environment? Thanks.
 
