Pros and Cons of software RAID

gijsbert

Active Member
Oct 13, 2008
I know that the "proxmox advice" is to use a good hardware RAID controller with a BBU. We would, however, prefer a software RAID-1 configuration. The latest Intel SSDs (e.g. the DC S3500 series) come with data protection in case of a power loss, so a BBU is not really necessary anymore. Can anyone tell me why a hardware RAID with BBU still has an advantage over software RAID (ignoring the very small CPU overhead of software RAID-1)?

Kind regards, Gijsbert
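
For reference, a software RAID-1 like the one described here would typically be built with mdadm; a minimal sketch, with example device names (/dev/sda, /dev/sdb) that are not from this thread:

Code:
# Create a two-disk mirror and watch the initial sync
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
cat /proc/mdstat

# Persist the array so it assembles at boot (Debian/Proxmox paths shown)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u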
 
Hi,
the "data protection" is perhaps a nice thing, but it does not really help with the write cache that software RAID keeps in host memory (stripe_cache_size). You could disable the write cache for md devices, but then you lose speed...

Udo
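
For reference, the md write cache Udo mentions is a sysfs tunable (stripe_cache_size exists for RAID 4/5/6 arrays), and the drives' own volatile write caches can be switched off with hdparm; a sketch with example device names:

Code:
# Inspect and enlarge the md stripe cache (RAID 4/5/6 only; value in pages)
cat /sys/block/md0/md/stripe_cache_size
echo 8192 > /sys/block/md0/md/stripe_cache_size

# Disable the volatile on-disk write cache (safer on power loss, but slower)
hdparm -W0 /dev/sda
hdparm -W0 /dev/sdb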
 

I have installed Proxmox every way you can imagine:
software RAID-1 with 2 drives, RAID 10 with 4 drives, hardware RAID with 4 drives plus 2 SSDs in RAID-1 as a flash cache, single drive without RAID but ZFS across 4 drives, single drive with NFS under XFS, etc. etc. etc...
If you ask me today what the best way is to a trouble-free, powerful, no-headaches Proxmox setup, the answer is: hardware RAID, with at least 8 disks, preferably SAS disks.
The more disks you throw at a hardware RAID array, the more powerful Proxmox becomes.
The hardware RAID can be a cheap one with 128 MB of memory and no BBU etc.; it does not matter.
I have used very powerful RAID cards with tons of cache and a BBU as well as 128 MB Adaptecs without one, and I see no difference.
IO DELAY is the thing that matters most. Throw tons of RAM at a server, plus a decent CPU, plus 8 disks in cheap hardware RAID 10 -- this is the best configuration ever...

Note: I have yet to install a system with more than 8 disks in cheap hardware RAID 10. I am going to throw 16, or better 32, WD Reds in RAID 10 at my next setup. Let's see how it goes..
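
IO delay is easy to watch from the shell if you want to compare setups yourself; a sketch using standard tools (iostat comes from the sysstat package):

Code:
# "wa" in the CPU line is the share of time spent waiting on disk I/O
top -b -n1 | head -n5

# Per-device latency and utilisation, refreshed every 2 seconds
iostat -x 2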
 

Thanks for the feedback, Shukko. Since you have installed Proxmox in different ways, one further question for you. We would like to use the 4-node Supermicro chassis with an integrated LSI 2208 SAS controller, see: http://www.supermicro.nl/products/system/2u/2027/sys-2027tr-h72rf.cfm. This chassis has the option to install a BBU, but the SAS controller does not have any memory on board. Do you think the lack of memory will have a big impact on performance?
 

Memory is good.
Here is more info about it:

http://serverfault.com/questions/450242/what-is-the-memory-module-on-a-raid-card-needed-for

The memory is used for read and write cache, which improves the performance of the storage. The basic rule when it comes to cache is: buy as much as you can afford. The more cache you have, the better the disks will perform, as data can be written to the memory while the disks aren't yet in the correct position.
You'll also want to make sure that the RAID card has a battery built on to protect the data in the write cache if there is a power outage.

So this explains everything, I guess.
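
On a MegaRAID chip like the LSI 2208, the controller's cache memory and cache policy can be inspected from the host with LSI's MegaCli utility, if installed (the binary name and install path vary by distribution); a quick sketch:

Code:
# Controller cache memory size and BBU presence
megacli -AdpAllInfo -aALL | grep -iE 'memory size|bbu'

# Cache policy per logical drive (WriteBack is only safe with a BBU)
megacli -LDGetProp -Cache -LAll -aALL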


 
This is my latest server.

Opteron 8-core CPU
Adaptec 6805E, 256 MB cache
8x Seagate 7200 RPM disks in RAID 10
No BBU, but write/read caches active
256K stripe size
64 GB RAM

pveperf
CPU BOGOMIPS: 32002.08
REGEX/SECOND: 856289
HD SIZE: 19.69 GB (/dev/mapper/pve-root)
BUFFERED READS: 552.91 MB/sec
AVERAGE SEEK TIME: 6.43 ms
FSYNCS/SECOND: 2412.62

The server is completely idle at the moment.

some other tests:

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.79119 s, 283 MB/s

dd if=/dev/zero of=test bs=1024k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 42.5562 s, 404 MB/s

ioping -c10 .
4096 bytes from . (ext3 /dev/mapper/pve-root): request=1 time=0.1 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=2 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=3 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=4 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=5 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=6 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=7 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=8 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=9 time=0.2 ms
4096 bytes from . (ext3 /dev/mapper/pve-root): request=10 time=0.2 ms

--- . (ext3 /dev/mapper/pve-root) ioping statistics ---
10 requests completed in 9002.8 ms, 5470 iops, 21.4 mb/s
min/avg/max/mdev = 0.1/0.2/0.2/0.0 ms


ioping -RD .

--- . (ext3 /dev/mapper/pve-root) ioping statistics ---
13897 requests completed in 3000.1 ms, 6205 iops, 24.2 mb/s
min/avg/max/mdev = 0.1/0.2/24.7/0.5 ms

ioping -R .

--- . (ext3 /dev/mapper/pve-root) ioping statistics ---
9679 requests completed in 3030.0 ms, 3897 iops, 15.2 mb/s
min/avg/max/mdev = 0.0/0.3/390.7/4.6 ms
 
We were using Proxmox VE over mdadm RAID10 a couple of years back. I would not have recommended it for production purposes back then, as the PVE 1.x kernels were somewhat unstable and we lost entire arrays several times due to kernel panics... but the stability situation is better now, the current kernels are not going titsup so often, so it's probably worth a try. It was also lower performing compared to dedicated hardware controllers.

BBU is overrated for most use cases, as a decent journaling filesystem (like ext3 or ext4) with sensible mount options (ordered data and write barriers) is going to protect against the once-a-year power failure, especially if you are in a data center where power is UPS protected. (I'm not sure what happens to the write cache of a RAID card during a hard reset, which is so much more likely than a complete power loss.) So yes, you may lose some of your write cache if your entire server loses power, but as you are most likely protected on both the filesystem and the application level (like a transactional database over a journaling filesystem), I can't see how a BBU is so indispensable. It's a buzzword that people without real knowledge cling to, so they have a false sense of security. (Of course if you work in a hospital or a bank, your mileage may vary.)
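
For reference, the mount options mentioned above would look like this in /etc/fstab (a sketch; data=ordered is the ext3/ext4 default and the device path is just an example):

Code:
# ordered journaling data mode with write barriers enabled
/dev/mapper/pve-root  /  ext4  defaults,data=ordered,barrier=1  0  1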

So my advice is: get a simple hardware RAID controller (like the above-mentioned Adaptec 6405E / 6805E) for your spinning disks, as it will likely be faster and more reliable than mdraid, not to mention a much easier installation / upgrade of the system (at least until the Proxmox installer includes an md setup/import function). For solid state drives I would not bother with RAID, TBFH: as they have no moving parts, they are much less likely to fail than hard drives... but they should work with your Adaptec card as well if you insist on mirroring.

Oh, and don't forget: periodically check your RAID array's consistency, and back up often to physically different servers. A couple of daily and weekly backups kept in different places protect against much bigger disasters than any kind of RAID.
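
On md arrays, a consistency check can be kicked off by hand (hardware controllers offer an equivalent in their own CLI); a sketch, with /dev/md0 as an example array:

Code:
# Start a check and follow its progress
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat

# After it finishes, the mismatch count should be 0
cat /sys/block/md0/md/mismatch_cnt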
 
Just to be clear: using an SSD, the recommendation then is no RAID?
I need a solution, as my Supermicro blade system cannot physically house a RAID card. I need some type of solution that gives me minimal downtime in a crash scenario. I will of course have HA available; however, the hosts themselves will be worked pretty hard. I hate to transfer that much stress to a new host for a prolonged period.
Please advise.
Currently, I have a single 250 GB drive with 2 additional 250 GB drives in a 4-bay blade system. Also, I am using OmniOS NFS for my main storage for all VMs, CTs, backups, and ISOs.
 
You are free to use my backup script
Code:
#!/bin/bash
#
# backup.sh -- rsync a filesystem tree to a local path or a remote host,
# excluding a configurable list of directories. Does not cross
# filesystem boundaries.

# Default directories to exclude from the backup
LIST="mnt lost+found media proc sys tmp"

function help {
    echo "backup.sh source destination [exclude list] [-v]"
    echo
    echo -e "\tSource           Path"
    echo -e "\tDestination      Path or [user@]host:[/path]"
    echo -e "\tExclude list     Space separated list of folders to exclude 'foo bar'"
    echo -e "\tDefault list     $LIST"
    echo -e "\t-v               Be verbose"
    echo
    echo -e "Script does not cross filesystem boundaries"
}

[ $# -lt 1 ] && echo "Missing source" && help && exit 1
[ $# -eq 1 ] && [ "$1" = '-h' ] && help && exit 0
[ $# -lt 2 ] && echo "Missing destination" && help && exit 1
[ $# -gt 4 ] && echo "Too many arguments" && help && exit 1

SRC=$1
DEST=$2

# An optional third argument replaces the default exclude list
if [ $# -ge 3 ] && [ -n "$3" ]; then
    LIST="$3"
fi

# -a archive, -r recursive, -X xattrs, -S sparse files,
# -x stay on one filesystem, -h human-readable numbers
ARGS="-arXSxh"
if [ $# -eq 4 ]; then
    if [ "$4" = '-v' ]; then
        ARGS="$ARGS --progress"
    else
        echo "$4: Bad argument" && help && exit 1
    fi
fi

# Build the exclude options as an array so each pattern stays a single argument
EXCLUDE=()
for DIR in $LIST; do
    EXCLUDE+=(--exclude "$DIR")
done

echo "backing up $SRC to $DEST"
echo "using ${EXCLUDE[*]}"

rsync $ARGS --stats "${EXCLUDE[@]}" "$SRC" "$DEST"

exit 0
 
I assume your root (/) is on sdb1, so to back up sdb1 you would simply enter / for the source. The backup script works on the filesystem level, so it has no knowledge of disks. The script has no constants that need to be changed.

A default Proxmox install looks like this:

sda1: /
sda5: /var/lib/vz
hostname: pve1

So to back up the system: backup.sh / /mnt/pve/some_nfs_share/pve1

Since the backup does not cross filesystem boundaries, the files in /var/lib/vz will not be backed up.
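
To run this nightly, a cron entry is enough; a sketch, assuming the script is saved as /root/backup.sh and using the example share above:

Code:
# /etc/cron.d/backup -- nightly system backup at 02:00
0 2 * * * root /root/backup.sh / /mnt/pve/some_nfs_share/pve1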
 
