In my opinion, the single biggest missing feature in Proxmox VE is easy array creation and installation onto Linux software RAID.
There are many reasons:
1. HW RAID is best for reliability, but do we really need that?
HW RAID was created for ultimate reliability: if you have a card with a RAID processor and BBU-backed cache, it will ensure array and filesystem consistency in the event of a power failure or kernel panic. But Linux is very stable (we haven't seen a kernel panic for years now, apart from the recent 2.6.32-6 issue), and servers are colocated in data centers with UPS, so power loss is rare. Additionally, Proxmox VE supports snapshot backups and high-availability cluster mirroring, so with all of these turned on you could even run single-disk nodes without fear of data loss.
2. HW RAID is a black box to the Linux IO subsystem, killing performance
Proxmox VE (Linux 2.6.32) uses the CFQ IO scheduler by default, which in theory supports IO priorities. In reality, however, it performs so badly on HW RAID controllers that many users (including us) have switched to the DEADLINE or NOOP schedulers, and we see much better performance under high IO. The reason is simple: with a HW RAID controller there is an IO queue on the card that knows the disk layout but nothing about the kernel queue, and CFQ in the kernel, which has no idea how the blocks are laid out physically. The two queues essentially work against each other, reordering requests twice. CFQ, on the other hand, fully understands Linux SW RAID, so it can optimize IO effectively when it knows the disk layout.
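For anyone who wants to try this, the scheduler can be switched per device at runtime through sysfs. Here is a minimal Python sketch (the device name sda is my own example, not anything PVE-specific, and you need root):

```python
# Minimal sketch: switch /dev/sda to the DEADLINE IO scheduler at runtime.
# "sda" is an example device name; adjust for your disks, and run as root.
SCHED = "/sys/block/sda/queue/scheduler"

with open(SCHED) as f:
    # The active scheduler is shown in brackets, e.g. "noop [cfq] deadline".
    print("schedulers:", f.read().strip())

with open(SCHED, "w") as f:
    f.write("deadline")  # takes effect immediately; lost on reboot
```

The change is not persistent; to keep it across reboots you would typically pass elevator=deadline on the kernel command line instead.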
3. HW RAID is inflexible, incompatible and has no performance advantage
If your expensive RAID controller dies, you usually have to find the exact same model, because arrays are not portable between manufacturers (and sometimes not even within the same company). With software RAID, if your motherboard dies you can buy a different model and Linux will recognize your array without any problems (see the sketch at the end of this point).
Also, you can't easily configure SSD caching (or partitioning) for a HW RAID array unless you buy the highest-priced cards, while it's much easier in software.
RAID controllers don't get frequent firmware updates or much community discussion, while common motherboard chipsets do.
Last but not least: only the newest HW RAID controllers can keep pace with SATA 6 Gb/s speeds, so it's entirely possible that a 2-3 year old RAID controller is slower than SW RAID on a modern commodity PC.
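To make the portability point concrete, here is a hedged sketch using mdadm, wrapped in Python purely for illustration (the device names and RAID level are my own examples): the md superblock is written to the member disks themselves, which is why any Linux machine can reassemble the array.

```python
# Sketch of SW RAID portability with mdadm. /dev/sdb1 and /dev/sdc1 are
# example devices; only run this as root, on disks you are free to wipe.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# On the original machine: create a two-disk RAID1 array.
run(["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
     "/dev/sdb1", "/dev/sdc1"])

# On a replacement machine (different motherboard, different controller):
# the array metadata lives on the disks, so a scan finds and assembles it.
run(["mdadm", "--assemble", "--scan"])
```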
4. HW RAID is expensive
A decent controller that supports parity-based arrays can cost as much as a single uniprocessor server, or should I say another node for your cluster. Which would you choose if SW RAID were an option? I would rather spend the money on a new node. (Google and Facebook do the same: they use cheap nodes and ensure data consistency in software; they don't rely on expensive proprietary hardware.)
I hope I explained myself clearly: I think HW RAID is an aging, expensive technology that is becoming irrelevant for the open-source cloud.
Hopefully the Proxmox devs will agree and enable us to create SW RAID arrays in PVE 2.0.