Install on Soft Raid

My setup is using 1.3.

If you want a screenshot I can post one :P
OK, but my question is whether the setup breaks with kernel updates, like Dietmar said in an old post. Can you understand the question? Sorry about my English, I am from Argentina.
Thanks
 
Dietmar, my question was:
"Still with the 1.3 version? or now we are able to use proxmox kernel with swraid without problems in kernel update?"

We do not support software RAID, so I do not test such things.

- Dietmar
 
Re: software raid1 seems to work ok

I'm trying to set up a RAID test server using this method (I've run out of free slots for a RAID card), but I have no idea what I should put in the modules list for initramfs-tools or in fstab, nor how to edit a GRUB list entry to point to the new initrd image.

Is there a more detailed guide to accomplish software RAID on Proxmox?

Any guide will do, as long as it works ;)

Thx!

I have a testing box that didn't have a hardware raid card in it so I decided to see if I could get this going for grins. I'm fairly certain this whole setup would break at the least desirable moment and I certainly won't be using this for any production boxes, but it was an interesting diversion into the world of software raid which I'd never used before. The basic order I followed is:

Install Proxmox as usual onto 1 drive (/dev/sda)
aptitude install mdadm initramfs-tools

Edit the modules list for initramfs-tools to force add the raid1 module

mkinitramfs -o /boot/test -r /dev/mapper/pve-root
add a grub list entry to point to my new initrd image

fdisk the 2nd disk to look exactly like the first disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2

mdadm --add /dev/md1 /dev/sda2
watch -n 1 "cat /proc/mdstat"

mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/. /mnt/md0

edit fstab to map the /boot to /dev/md0

sfdisk --change-id /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1

use grub to install bootloader onto the 2nd hard drive
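
That last step isn't spelled out; here is a minimal sketch of what it might look like with GRUB legacy, assuming the second drive's /boot partition shows up as (hd1,0) in GRUB's device map (as in the menu.lst example later in this thread):

Code:
# start the GRUB legacy shell, point it at the partition that holds /boot
# on the second drive, then write the bootloader to that drive's MBR
grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit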
 
Hi. I know this discussion is quite old now, but I have the impression it is still up to date!!
I am one of those who think that a "we do not like or support SW RAID" stance works against the use of Proxmox VE. In our case we are just building a test server at our office replicating our online one. Without Proxmox we have been running SW RAID for years, through power failures and other issues, without major problems. We all know HW RAID with a battery is the ideal, but from my point of view it is a question of probabilities... I would rather have a mirrored boot partition on my Proxmox host than not have one, for the cost of an additional hard drive. I would still have external backups, slave databases, etc. on another server. But I would rather avoid reinstalling Proxmox, restoring VMs and reinstalling any additional software we might have running on the host (to me one of the greatest advantages of Proxmox VE) just because of the most likely failure in a PC: the hard drive.

In any case I think no one is going to change their mind about this, so as I'll be trying the Lenny + apt install route rather than the ISO, I am just wondering two things:

- will the Debian repositories for VE be maintained at the same time? I mean, will updates keep being published via that route?
- what about special kernel updates for VE... can those be accessed via the repositories too, or since I install Debian do I have to use their kernel updates exclusively?

Many thanks to all the Proxmox team for their great work and effort on VE. It must be hard to maintain open-source software like this and constantly get the feeling that everyone is complaining.
 
Re: software raid1 seems to work ok

but I have no idea what I should put in the modules list for initramfs-tools

Just run this:

Code:
grep -q raid1 /etc/modules || echo raid1 >> /etc/modules
...before you build the initrd (in the example it's called "/boot/test"; however, I would recommend giving it a name that looks more like a normal initrd, maybe "/boot/initrd.img-custom-version").
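
Putting the two pieces together with that naming, the whole thing might look like this (just a sketch; the root LV path is taken from the quoted example):

Code:
grep -q raid1 /etc/modules || echo raid1 >> /etc/modules
mkinitramfs -o /boot/initrd.img-custom-version -r /dev/mapper/pve-root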

Open the file "/boot/grub/menu.lst" with your favorite text editor and search for the kernel you would like to boot, for example:

"title Debian GNU/Linux, kernel 2.6.32-1-pve
root (hd1,0)
kernel /vmlinuz-2.6.32-1-pve root=/dev/mapper/pve-root ro quiet
initrd /initrd.img-2.6.32-1-pve"

Find the initrd line and change it (or add one) so that it points to your custom-made initrd instead of the original one (which would be /test in your quoted example, or /initrd.img-custom-version in my example).
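
The adjusted stanza would then look something like this (same kernel entry as above, only the initrd line changed):

Code:
title Debian GNU/Linux, kernel 2.6.32-1-pve (custom initrd)
root (hd1,0)
kernel /vmlinuz-2.6.32-1-pve root=/dev/mapper/pve-root ro quiet
initrd /initrd.img-custom-version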



Change the line that mounts /boot to something like this:

Code:
/dev/md0        /boot           ext3    defaults        0       2


Edit: I made a mistake previously. My /etc/fstab looks like this:

Code:
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/mapper/pve-root /               ext3    errors=remount-ro 0       1
/dev/md0        /boot           ext3    defaults        0       2
#/dev/mapper/pve-data /var/lib/vz     ext3    defaults        0       2
/dev/mapper/pve-swap none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0
And the LVM side looks like this:

Code:
server:# pvs && vgs && lvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md1   pve  lvm2 a-   931.00G 441.00G
  VG   #PV #LV #SN Attr   VSize   VFree
  pve    1  12   0 wz--n- 931.00G 441.00G
  LV            VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  root          pve  -wi-ao 96.00G
  swap          pve  -wi-ao  4.00G
  vm-101-disk-1 pve  -wi-a- 20.00G
  vm-102-disk-1 pve  -wi-a- 10.00G
  vm-104-disk-1 pve  -wi-a- 50.00G
  vm-105-disk-1 pve  -wi-a- 50.00G
  vm-106-disk-1 pve  -wi-a- 50.00G
  vm-205-disk-1 pve  -wi-ao 50.00G
  vm-206-disk-1 pve  -wi-ao 50.00G
  vm-207-disk-1 pve  -wi-ao 50.00G
  vm-210-disk-1 pve  -wi-ao 50.00G
  vm-211-disk-1 pve  -wi-ao 10.00G
server:#
server:#

Greetings,
user100
 
- will the Debian repositories for VE be maintained at the same time? I mean, will updates keep being published via that route?

Your "/etc/apt/sources.list" finaly should not really differ if you install a Debian-Lenny and PVE afterwards or if you install Proxmox VE from .iso. You need both. Debian for all non-modified files and Proxmox for all PVE files (including a patched kernel). When you install Lenny you would start with a 2.6.26 kernel that is not patched like the kernels from Proxmox. So I would recommend after you have installed Lenny to install the PVE files (including a pve-kernel qm, ...) and afterwards forget the 2.6.26 kernel but boot your system with a PVE one.

For example your /etc/apt/sources.list could look like that:

Code:
deb http://ftp.debian.org/debian/ lenny main
deb http://security.debian.org/ lenny/updates main
deb http://download.proxmox.com/debian lenny pve
...but I would not recommend using ftp.debian.org; use "ftp.your-country.debian.org" or another mirror near you instead (if you install from a Lenny .iso you normally get asked which mirror you would like to use anyway).
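
After adding those lines, pulling in the PVE packages on a plain Lenny could look roughly like this. Note that the exact meta-package name is an assumption on my part (it follows the kernel branch), so check the repository listing if it differs:

Code:
aptitude update
# meta-package name assumed for the 2.6.32 kernel branch; it should pull in
# the pve-kernel, pve-manager, qemu-server and the other PVE packages
aptitude install proxmox-ve-2.6.32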


Greetings,
user100
 
Not relevant to soft RAID at all, but I've finally managed to get away with HW RAID: surprisingly, our Gigabyte GA-790FXTA-UD5 with its two gSATA (SATA3) connectors (Marvell chipset) was recognized by the Proxmox VE 1.5 installer with no issues. We still have to find some form of driver to check the RAID status, but I thought the info might benefit those who, like me, could not confirm whether "normal" motherboard RAID would work.
 
I don't understand why software RAID is such a big deal; it's not hard to set up after the Proxmox install. I wrote a RAID10 script for my needs which you could easily adapt for yours; it can be found here. I would not recommend anything beyond RAID10 or RAID1, because other RAID configurations slow the machine down considerably, especially if you run many guests. RAID5 or 6 could be used for storage, like a backup server. I would prefer the Proxmox team implement user management rather than software RAID; it is totally doable right now as is, as long as the Proxmox team includes the MD modules in their kernel.
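
For reference, the core of a RAID10 array with mdadm is a one-liner. A minimal sketch, assuming four spare drives /dev/sdb through /dev/sde, each carrying a single partition of type fd (device names are illustrative, not taken from the script mentioned above):

Code:
# build a 4-disk RAID10 and record it in mdadm's config
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm --detail --scan >> /etc/mdadm/mdadm.conf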
 
I also would love to have an option in the Proxmox installer for Software RAID.

I believe that software RAID is not so bad :p For example, the following statement is made:
"One of the steps YouTube took in its early days to scale their infrastructure up was to switch from hardware RAID to software RAID on their database server. They noticed a 20-30% increase in I/O throughput. Watch Seattle Conference on Scalability: YouTube Scalability @ 34'50"."
on the following website: http://blog.zorinaq.com/?e=10.

It seems as if Intel also thinks that software RAID is not too shabby:
http://www.theregister.co.uk/2010/09/15/sas_in_patsburg/
 
I also would love to have an option in the Proxmox installer for Software RAID.
Proxmox is an enterprise solution like ESXi, and not even VMware thinks about using software RAID, nor is there any recommendation for it. I have worked for over a decade in the enterprise storage business; my customers deal with petabytes of data and not one of them even thinks about RAID at the software level ... use software RAID in your playground, but I wouldn't entrust my personal data to such a solution ...

It seems as if Intel also thinks that software RAID is not too shabby:
http://www.theregister.co.uk/2010/09/15/sas_in_patsburg/
Well, it's always possible to arrange the arguments to suit your needs in a discussion, but this approach from Intel has nothing to do with the well-known toy approach of mdadm. You should read the details of the article more carefully, or better, try to understand the difference ...

Regards
fnu
 
Re: software raid1 seems to work ok

I have a testing box that didn't have a hardware raid card in it so I decided to see if I could get this going for grins. I'm fairly certain this whole setup would break at the least desirable moment and I certainly won't be using this for any production boxes, but it was an interesting diversion into the world of software raid which I'd never used before. The basic order I followed is:

Install Proxmox as usual onto 1 drive (/dev/sda)
aptitude install mdadm initramfs-tools

Edit the modules list for initramfs-tools to force add the raid1 module

mkinitramfs -o /boot/test -r /dev/mapper/pve-root
add a grub list entry to point to my new initrd image

fdisk the 2nd disk to look exactly like the first disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2

mdadm --add /dev/md1 /dev/sda2
watch -n 1 "cat /proc/mdstat"

mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/. /mnt/md0

edit fstab to map the /boot to /dev/md0

sfdisk --change-id /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1

use grub to install bootloader onto the 2nd hard drive

I followed your steps to get software RAID 5 working with Proxmox. You left some steps out that I had to figure out on my own. To help the next person, here is what I did:

I purchased 6 identical 1TB SATA drives that I wanted to turn into a 5TB RAID 5. Unfortunately the motherboard I purchased, an MSI 880GMA-E45, while its specs say it does RAID 5, offers only some worthless AMD fakeraid. So either I could buy a $200 HighPoint RocketRAID controller that supports 6 SATA drives, or I could use software RAID on my drives. I chose to get software RAID working, and had great success.

1. Configured BIOS in AHCI mode for the SATA drives
2. Install proxmox as normal onto /dev/sda
3. Get internet access working on Proxmox if you haven't done so already
4. aptitude install mdadm initramfs-tools
5. Edit /etc/initramfs-tools/modules (I like the text editor nano, so you would run nano /etc/initramfs-tools/modules after logging in as root at the text console) and at the very bottom add a new line containing just the string raid5
6. run update-initramfs -u to update the boot process with the raid5 module for the kernel.
7. Reboot
8. Log in as root again. Run fdisk /dev/sda and then hit p to print your partitions. You will see that /dev/sda1 has a start cylinder of 1 and an ending cylinder of some other small number. In my case it ends at 66, and partition 2 starts at cylinder 67. So on all the other drives, I will start partition one at cylinder 67 and go to the end of the drive.
9. So run fdisk /dev/sdb and create a new (n) primary partition 1, starting at cylinder 67 and ending at the end of the drive. Change the type (t) to fd (Linux raid autodetect). Then hit w to save and quit.
10. Repeat for sdc, sdd, sde, and sdf (a quicker alternative using sfdisk is sketched after this write-up).
11. Create the software RAID 5 with 5 of the 6 disks for now; you tell mdadm that the sixth device is missing, so the array starts out degraded. Run the command mdadm --create /dev/md0 --level=5 --raid-devices=6 missing /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
12. add the raid 5 to the mdadm admin config file with mdadm --detail --scan >> /etc/mdadm/mdadm.conf
13. create the lvm on the raid 5 with pvcreate /dev/md0
14. extend the lvm to include the raid 5 with vgextend pve /dev/md0
15. You need to move the Proxmox data that resides on the LVM PV /dev/sda2 onto /dev/md0, so run pvmove /dev/sda2 /dev/md0. This took 8 hours for me, so go take a nap. It prints a % progress counter on the screen every 30 seconds or so.
16. Once that is complete, you will remove the lvm on /dev/sda2 by running vgreduce pve /dev/sda2
17. You now need to turn /dev/sda2 into a software RAID partition, so run fdisk /dev/sda, delete partition 2, create a new partition 2 (starting at cylinder 67 in my case), change the type to fd again, and write it. You will get a warning that the partition table is in use and that the kernel will read it on the next boot.
18. At this point you can either reboot for the new partition table to take effect, or aptitude install parted and then run partprobe to update the kernel's partition table without rebooting.
19. Now you need to add /dev/sda2 to the RAID 5. Run mdadm --add /dev/md0 /dev/sda2
20. Run watch -n 1 "cat /proc/mdstat" and wait a couple of hours while the RAID 5 rebuilds onto the newly added drive. You won't lose any data.
21. Now you need to grow the logical volume to use the rest of the space in the RAID 5. In my case I ran lvextend --size +3T /dev/mapper/pve-data and then kept running lvextend --size +100G /dev/mapper/pve-data until it gave me an error about not enough free space because less than 100G remained; you can then drop to increments smaller than 100G to fill up all of your disk space (a one-shot alternative is sketched after this write-up).
22. Now you need to resize the ext3 filesystem to use the new space. I had about 4 TB to expand onto, so the resize took about an hour. Just run resize2fs /dev/mapper/pve-data and it will grow the ext3 filesystem.
23. You are done. Now you can run df -h and you will see something like:

Code:
proxmox:/var/lib/vz/images# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   95G  989M   89G   2% /
tmpfs                 7.9G     0  7.9G   0% /lib/init/rw
udev                   10M  824K  9.2M   9% /dev
tmpfs                 7.9G     0  7.9G   0% /dev/shm
/dev/mapper/pve-data  4.4T  101G  4.3T   3% /var/lib/vz
/dev/sda1             504M   58M  422M  12% /boot

At this point you may want to reboot just to ensure that Proxmox comes up OK (it should). I didn't have to make any changes to GRUB. I hope someone else finds this information useful.
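
Two possible shortcuts for the steps above (sketches; I did it the long way): in step 10, once /dev/sdb is partitioned, sfdisk can replicate its partition table to the remaining identical drives, and in step 21, lvextend can grab all remaining free extents in one go instead of growing in 100G chunks:

Code:
# step 10 alternative: copy sdb's partition table to the other identical drives
sfdisk -d /dev/sdb | sfdisk /dev/sdc
sfdisk -d /dev/sdb | sfdisk /dev/sdd
sfdisk -d /dev/sdb | sfdisk /dev/sde
sfdisk -d /dev/sdb | sfdisk /dev/sdf

# step 21 alternative: extend the LV over all free extents, then grow the filesystem
lvextend -l +100%FREE /dev/mapper/pve-data
resize2fs /dev/mapper/pve-data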
 
Re: software raid1 seems to work ok

I was very brutal in my installation of Proxmox on a softraid: I installed Proxmox, then copied the root partition (adding the files from the boot partition accordingly) into an archive, then partitioned the drives myself, created the RAID, copied the Proxmox archive contents onto the partition I consider root (note: no LVM!), reinstalled GRUB (I needed the latest GRUB 1.97 available for Debian Lenny, which is able to boot from a RAID5 array), installed mdadm and rebuilt the initramfs (some steps might be omitted - this is not a howto, just an idea) - and everything worked just fine :)
 
I want to voice my concern of completely unsupported (and discouraged) Software RAID support in Proxmox - I am shocked!

The reasons stated seem absurd; I run a Linux consultancy & have deployed hundreds of Linux servers in production environments using both Software RAID & Hardware RAID for close to 10 years, and Linux Software RAID, by far, has proven consistently to be the most solid[1].

Many hardware RAID implementations are horrible... and in some situations, slower.

Sure - ESXi & XenServer require a hardware RAID card. Why do you have to follow this route? Differentiate yourselves… this gives Proxmox a HUGE advantage in the market, particularly for smaller implementations where people are going to be looking for alternatives to the commercial virtualisation giants.

I really can't see there being a huge support overhead in supporting this, either; LVM2 is certainly compatible. You create your LVM devices over the RAID md virtual devices, as per usual, no different. It works great, and has for many, many years.
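
To make that concrete, here is a minimal sketch of the layering with hypothetical device names: mirror the second partitions of two disks and put an LVM volume group on top of the md device:

Code:
# RAID1 over two partitions, LVM stacked on the resulting md device
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md1
vgcreate pve /dev/md1
lvcreate -L 50G -n vm-101-disk-1 pve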

Reading this thread & the developers response… we will probably not go ahead with Proxmox until this is supported[2]. At the absolute least, even if the installer doesn't have explicit support for installing on Software RAID - it would be nice if there was an agreed-on method documented in the wiki to deploy on Software RAID (even if unofficially "supported") so those of us using it know any updates will not break/corrupt our data… deploying it this way on a production environment currently seems too risky.

edit:

[1] We deploy software RAID in situations where we're running a RAID1 array or two; typical SME business server or 1RU colocated server style stuff, which I'm sure fits a LOT of the Proxmox use cases. It has consistently proven itself to be a very solid solution, and replacing degraded drives is simple and easy to do. We have had data-corruption nightmares on many hardware RAID cards due to faulty controller/drive firmware, etc.

[2] My shock is more attitude based; the fact you guys are outright against supporting it for what I feel are very loose arguments. If the problem is development/support time... I'd be happy to throw development time at this, and I'm sure others would too. I really really want this supported. Do you take patches? Do you have a public repository (I can't find one anywhere - you guys are open source, right?)
 
No arguments? It looks like you did not read our postings. Here are two points for you again.

- Performance, performance and performance. Almost all virtualization hosts lack IO performance, and only hardware RAID can deliver the best IO. As we need local storage for OpenVZ, fast local storage is essential for a worry-free system.
- Protection against power loss. Almost all software RAID users enable the hard-drive cache (otherwise you really get horrible performance with softraid), but the hard-drive cache is not protected by a BBU, so in the case of a power failure you will lose that cache.
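
As a side note, the drive write cache being discussed here can be inspected and, at a performance cost, turned off per drive; a sketch, assuming a SATA drive at /dev/sda:

Code:
hdparm -W /dev/sda     # show whether the drive's write cache is enabled
hdparm -W0 /dev/sda    # disable the write cache (slower, but safer on power loss)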

the future:
- SSDs with integrated cache protection and high reliability, like the Intel SSD 320 (SSD drives use RAID controllers inside the package)
- New distributed storage system like sheepdog

Conclusion: if you go for a commercial support subscription with us, you cannot run software RAID. In all other cases you can configure your host however you need it, fully flexible, so there is no need to be shocked.
 
Thanks for replying. I read the posts; I just don't agree with them:

- Software RAID is faster than many Hardware RAID cards: there are many Linux professionals here with extensive production experience saying this.
- The drive cache argument is valid, but I'll bet anything a substantial % of proxmox VE users are using cheaper RAID cards without battery backup units, so this is no different. And although I agree with the fact that it can happen, I'm yet to see it (have seen it with HW RAID & no batt backup though!).

Regarding your last point; what I am concerned about is an upgrade of Proxmox VE (via apt-get etc) causing data corruption, or having a kernel update with no Software RAID support, etc, leaving a production system unusable. Are you saying this won't occur? I'm OK if your commercial support doesn't cover Software RAID: as long as I know things won't break.

Also, again, do you accept patches / is there a public repo for 1.x available? I read some proxmox features under Software RAID don't work correctly; I'd be happy to hack on this.

edit: clarified some points
 
I agree. And I would like to add that one can get rid of the drive cache problem using DRBD.
 
+1 for Tom.
I have used softraid on ProLiant servers and will never do it again. Problems did occur, and never on our HW RAID servers.
So there is no point in supporting SW RAID on production servers.

Sent from Android using tapatalk.
 
After 15+ years of experience with a wide variety of HW RAID setups, we are now using almost exclusively SW RAID 10 for new systems. Drives are big enough and cheap enough these days that RAID 5/6 (whether HW or SW) is just not worth the risks. A six-drive RAID 10 provides quite impressive I/O performance.
 
