- German - HowTo: Proxmox VE 3.0 with Software-RAID

Thank you!

(I wish the PMX installer didn't disable the advanced disk setup/partitioning during install... it would make things like this a lot less painful.)
 
The Proxmox VE ISO installer makes life easier for the majority; just think of the LVM partitioning, calculating the free space in the volume group, etc.

A beginner with Proxmox VE is not able to do this manually.

For experts, there is already a great way to install: the Debian installer; see http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy
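
To make that concrete: a rough sketch of the LVM work the ISO installer automates on a single disk. This is not the installer's actual logic; the device name and sizes below are placeholders, and PVE 3.x used ext3 by default.

Code:
# assuming /dev/sda2 is the partition reserved for LVM (placeholder!)
pvcreate /dev/sda2
vgcreate pve /dev/sda2

# carve out the usual Proxmox LVs; the sizes here are arbitrary examples
lvcreate -n swap -L 4G pve
lvcreate -n root -L 10G pve
# leave some free space in the VG for snapshots, give most of the rest to data
lvcreate -n data -l 80%FREE pve

mkswap /dev/pve/swap
mkfs.ext3 /dev/pve/root
mkfs.ext3 /dev/pve/data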
 
Nice post.

Is this still valid?

Code:
root pve -wi-ao 3.72g

Will only 4 GB for root work without any problems?

Thanks.
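
In case it helps: a minimal way to check how tight root is and to grow it from free space in the volume group, assuming the default pve VG and that the VG actually has free extents:

Code:
# show LV sizes and remaining free space in the VG
lvs pve
vgs pve

# grow root by 4G and resize the ext3 filesystem online
lvextend -L +4G /dev/pve/root
resize2fs /dev/pve/root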
 
FYI, this has been broken with 3.2, which now uses GPT by default.

Yes, it's a pain in the @$$ to work around. I ended up converting GPT back to MBR and doing it by hand... not fun. The guide needs to be updated.
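
For reference, the GPT-to-MBR conversion can be done with gdisk's recovery menu; it is interactive and easy to get wrong, so treat this outline as a sketch and have backups:

Code:
# gdisk can rewrite a GPT disk with an MBR partition table:
gdisk /dev/sda
#   r   -> recovery and transformation options
#   g   -> convert GPT into MBR and exit
#   w   -> write the new table, confirm with y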
 
Reading in the v3.2 roadmap that the "ISO installer now always uses GPT", I thought: OK, anyway I have to install it via Debian on my root server.
Reading the installation wiki, I learn "Software RAID is not supported" and "at least two NICs".
So I can trash my dreams of using an up-to-date Proxmox, for instance on a Hetzner root server with an i7, 32 GB RAM, 2x2 TB, but only one NIC, MBR and software RAID?
 
No, you can still use the latest packages on Hetzner hardware as well.
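
The Debian route mentioned above boils down to something like this on a plain wheezy install; see the wiki page linked earlier for the full, authoritative steps (the repository line and meta-package name below are the ones documented for PVE 3.x):

Code:
# add the PVE 3.x no-subscription repository and the Proxmox signing key
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
    >> /etc/apt/sources.list.d/pve.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve-2.6.32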
 
FYI, this has been broken with 3.2, which now uses GPT by default.

I can confirm this no longer works with GPT disks, and I'm not sure which steps need to be replaced.
The script that Davide Lucchesi wrote based on the guide doesn't work either, because it contains MBR-specific steps rather than GPT ones :\
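
I haven't reworked the whole script, but the MBR-specific parts (the sfdisk table cloning and the sfdisk -c ... fd type changes) would presumably need a GPT equivalent along these lines, using sgdisk; untested, and /dev/sdb is a placeholder for each secondary disk:

Code:
# copy sda's GPT partition table to sdb, then give sdb fresh GUIDs
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb

# mark partitions 1 and 2 as Linux RAID (fd00 replaces the MBR type fd)
sgdisk -t 1:fd00 /dev/sdb
sgdisk -t 2:fd00 /dev/sdb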
 
The Proxmox VE ISO installer makes life easier for the majority; just think of the LVM partitioning, calculating the free space in the volume group, etc.

A beginner with Proxmox VE is not able to do this manually.

For experts, there is already a great way to install: the Debian installer; see http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy

First of all: thumbs up to Dominic for writing the PVE-on-software-RAID HowTo and to Tom for adding a wiki page on this.

When reading the English translation of the HowTo (I can read German, but English is second nature :) it occurred to me that:
- activating PVE on software RAID is not an easy task;
- it might be useful to me, since I discovered my mobos have a cheap "SmartRAID" on board;
- when that is activated in the BIOS, Proxmox will see the RAID volume but also (still) the underlying physical members;
- hardware RAID will not be a safe way for me to go.
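
If you want to check whether such an onboard controller is real hardware RAID or fakeraid (where the members stay visible, as described above), something like this can tell you; dmraid is the old tool for BIOS/fakeraid metadata:

Code:
# with fakeraid, the individual member disks remain visible here
lsblk

# list any BIOS/fakeraid metadata dmraid recognizes on the disks
dmraid -r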
 
Let's now turn from RAID to LVM:

When I recently tried to install PVE 3.x, I was unhappily surprised to see my carefully prepared partitioning and formatting scheme get messed up. Instead of complaining, I started reading up on LVM (about which I knew nothing). I concluded that adding LVM to Proxmox was not a bad idea. I agree with DLasher, however, that making it an option would have been more elegant. On second thought, I even think LVM is a better way to go than PVE-on-RAID.
But in weighing LVM against RAID, there is one drawback regarding storage consumption that I want to bring up:

When building a RAID set, one takes a huge storage overhead penalty at first. The penalty will not increase (in absolute terms), however, when one adds PVs to the RAID set later. At all times RAID will cost you one PV of storage overhead. In relative terms the overhead diminishes according to the formula 1/(N-1) for N volumes in your RAID set.
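
A throwaway arithmetic check of that 1/(N-1) figure (one volume lost, expressed relative to the N-1 usable ones):

Code:
# relative RAID overhead: one volume lost out of N, vs. the N-1 usable ones
for N in 2 3 4 8 16; do
    awk -v n="$N" 'BEGIN { printf "N=%2d  overhead = %5.1f%%\n", n, 100/(n-1) }'
done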
 
On the other hand, storage overhead for an LVM VG (volume group) might be much lower initially (say M as the overhead percentage for a single PV, with M << 1.0, that is, quite low). However, as I understand it, LVM metadata is duplicated to every PV in the VG, and M will increase for every PV added. So in absolute terms LVM storage overhead grows quadratically with N (the number of PVs in the VG). Fortunately, the overhead only grows linearly with N on a per-PV basis when adding PVs to a VG.
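
Rather than estimating M abstractly, LVM will report its actual per-PV metadata overhead; the field names below are standard pvs/vgs report columns, and on typical setups the metadata area is on the order of a megabyte per PV:

Code:
# metadata area size and count on each physical volume
pvs -o pv_name,vg_name,pv_mda_size,pv_mda_count

# the same, aggregated per volume group
vgs -o vg_name,pv_count,vg_mda_size,vg_mda_count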

Tom (or any other reader), would you be willing to comment on this, or correct me if I'm wrong?!

Steijn
 
I found I had stored an older version of Proxmox 3.0, one that works with the script Davide Lucchesi wrote, so I installed it on my system and was thus able to set up the software RAID 1 array without difficulty.
Because it looks like the script is no longer available, here it is:

Code:
#!/bin/sh
#
# pve_software_raid v1.0 - 2013/07/01
# Author: Davide Lucchesi <davide.lucchesi@ams-ix.net>
#
# Configure software raid in a fresh installed Proxmox VE 3.x system
#

levels="0 1 4 5 6 10"
disks=`ls /dev/sd[a-z] | wc -l`

usage() {
    echo "Usage: $0 [raid_level]"
    echo ""
    echo "Supported raid levels: $levels"
    echo ""
    echo "If not specified, raid_level will be guessed depending on the available disks."
    exit 1
}

# basic checks to see if the script can run safely
#
if [ `id -u` != "0" ]; then
    echo "ERROR: you need super user privileges to run this script."
    echo " ";
    usage
fi

if [ ! -e /sbin/mdadm ]; then
    echo "ERROR: please execute the following command before running this script:"
    echo "\tapt-get update && apt-get dist-upgrade && apt-get install mdadm"
    echo " "
    usage
fi

# check user parameters
#
if [ $# -gt 1 ]; then
    usage
fi

if [ ! -z "$@" ]; then
    if [ $1 = '-h' -o $1 = '--help' ]; then
        usage
    fi
fi

if [ $# -eq 1 ]; then
    res=`echo "$levels" | grep $1`
    if [ -z "$res" ]; then
        usage
    else
        level=$1
    fi
else
    # guess suitable raid level
    #
    if [ $disks -eq 2 ]; then
        # 2 disks: mirror
        level=1
    else
        val=`expr $disks % 2`
        if [ $val -eq 1 ]; then
            # odd number of disks: raid5
            level=5
        else
            # even number of disks: raid10 (performances!!!)
            level=10
        fi
    fi
fi

if [ -e /dev/md0 ]; then
#
# Done with first reboot, move original partitions to the raidsets
#
    grub-install /dev/sda --recheck
    for i in /dev/sd[b-z] ; do
        grub-install $i
    done
    grub-install /dev/md0
    update-grub
    update-initramfs -u

    # add the old boot partition to the boot raidset
    sfdisk -c /dev/sda 1 fd
    mdadm --add /dev/md0 /dev/sda1

    # migrate the LVM setup to the data raidset
    pvcreate /dev/md1
    vgextend pve /dev/md1
    pvmove /dev/sda2 /dev/md1
    vgreduce pve /dev/sda2
    pvremove /dev/sda2
    sfdisk -c /dev/sda 2 fd
    mdadm --add /dev/md1 /dev/sda2

    echo "Rebuilding raidsets, it can take a while: check /proc/mdstat for details."
    echo "Please wait until the process is complete, then reboot and check how it works."
    exit 0
else
#
# First execution, creating raidsets
#
    echo "Creating boot raid1 and data raid$level setups on top of $disks disks..."
    echo " "
    echo -n "Confirm? (y/N) "
    read conf
    if [ "$conf" != "y" ]; then
        echo "Giving up as requested."
        exit 1
    fi
    for i in `ls /dev/sd[a-z] | grep -v sda`; do
        sfdisk -d /dev/sda | sfdisk -f $i
        sfdisk -c $i 1 fd
        sfdisk -c $i 2 fd
    done
    
    boot=""
    data=""
    for i in `ls /dev/sd[a-z] | grep -v sda`; do
        boot="$boot $i"1
        data="$data $i"2
    done
    mdadm --create -l 1 -n $disks /dev/md0 missing $boot
    mdadm --create -l $level -n $disks /dev/md1 missing $data
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    
    # Configuring system to boot from the raid1 partitions
    #
    mkfs.ext3 /dev/md0
    mkdir -p /mnt/md0
    mount /dev/md0 /mnt/md0
    rsync -vua /boot/ /mnt/md0/
    umount /mnt/md0
    rmdir /mnt/md0
    umount /boot
    sed -i "s:UUID.*boot:/dev/md0 /boot:" /etc/fstab
    mount /boot
    echo '#' >> /etc/default/grub
    echo '# software raid' >> /etc/default/grub  
    echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub  
    echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub  
    echo raid1 >> /etc/modules 
    echo raid1 >> /etc/initramfs-tools/modules 

    echo "Initial raid setup completed, please reboot and type once more"
    echo "\t$0 $@"
    exit 0
fi
 
Yeah, I thought that too; as a longtime user of ZFS I was pretty amazed by this approach.
But so many things don't work on ZFS (live migration and on-the-fly backups, for example), and ZFS needs RAM, a lot of it, so we decided to separate storage and VMs: the VM server has its RAM for the VMs, and the storage box has its RAM for ZFS.

To speed things up, I am currently creating a RAID 1 out of 2x1 TB SSDs, for SQL servers and such things, and the rest is hooked through ZFS over iSCSI or NFS into the server...
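
For that SSD mirror, the mdadm side is straightforward; a sketch, with /dev/sdX and /dev/sdY standing in for the two SSDs and "ssd" as a made-up name for a separate fast VG:

Code:
# mirror the two SSDs and remember the array across reboots
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# put LVM on top so the fast storage can be handed to the SQL VMs
pvcreate /dev/md2
vgcreate ssd /dev/md2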

But I got it to work with this howto (be patient, there are two typos in there, regarding the copy target of /boot and the pvmove of sda3):

http://wiki.pratznschutz.com/index....nux_Software_RAID#F.C3.BCge_sda2_zu_md0_hinzu
 
It's very unfortunate that Proxmox doesn't support software RAID out of the box.

I found some instructions on how to convert Proxmox 3.3 (i.e. after the GPT change) to software RAID:

http://www.helpadmin.pro/how-to/how-to-proxmox/44/

I haven't tried it yet, but it looks like it should work, except for the obvious update to jessie:

echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" >> /etc/apt/sources.list.d/proxmox.list
 
