Install on Soft Raid

Software RAID is a perfectly acceptable solution and not a bad idea at all.

Just one example: you don't have a battery for the RAM, so there is a very high risk of losing data when something unusual happens. A HW RAID controller comes with a battery for its cache.

- Dietmar
 

just to add my 2 cents:

hard disks also include cache memory which is not protected by battery backup, so you need to disable the hard drive's write cache.

therefore, if you have soft RAID, you have no hard drive write cache at all - bad performance.

if you have hardware RAID you can use the battery-protected cache on the controller (do not forget to disable the cache on the hard disks).

consumer hard disks have the cache enabled, server disks sometimes too. so if you want a stable and fast system, please double-check the cache settings.

I assume a lot of server problems out there (not only on Proxmox VE, also in the Windows world) are caused by unprotected disk cache memory and power loss.
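
To make that concrete, a rough sketch of how to check and disable the drive write cache on Linux (device names are only examples; hdparm works for SATA/IDE disks, sdparm for SCSI/SAS, and the setting is usually not persistent across reboots unless you also configure something like /etc/hdparm.conf):

    # show the current write-cache setting of the drive
    hdparm -W /dev/sda

    # turn the volatile write cache off
    hdparm -W 0 /dev/sda

    # for SCSI/SAS disks, clear the WCE (write cache enable) bit instead
    sdparm --clear=WCE /dev/sda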
 
You don't have a battery for the RAM, so there is a very high risk of losing data when something unusual happens. A HW RAID controller comes with a battery for its cache.

You don't have a battery on a cheap HW controller either. If you just use RAID 1 (which is enough in many cases where expensive hardware is not available) you can forget this HW-controller-battery stuff anyway, in my opinion. We are talking about a disaster here, right? But RAID is no real backup, isn't it? So even if things break down, and even if you have a well-working HW RAID controller, you should have some tape backup to restore your machine. :)

Greetings,
user100
 
You don't have a battery on a cheap HW controller either. If you just use RAID 1 (which is enough in many cases where expensive hardware is not available)

I am not suggesting a hardware RAID controller without cache. Cache plus battery backup is essential and the only setup we can recommend.

you can forget this HW-controller-battery stuff anyway, in my opinion. We are talking about a disaster here, right? But RAID is no real backup, isn't it?

RAID has nothing to do with backup. Do not compare apples with oranges.

So even if things break down, and even if you have a well-working HW RAID controller, you should have some tape backup to restore your machine. :)

Greetings,
user100

You got it. RAID AND tape backup are essential.
 

Come on guys. With such arguments you could tell the kernel maintainers to wipe SW RAID out of the Linux kernel entirely. Nobody said SW RAID is much better than HW RAID. But SW RAID is a good thing in many cases (not in every case). We have used both software RAID and hardware RAID solutions for years now. And I like SW RAID. So I put Proxmox on SW RAID 1 manually too and would not come whining to you if it's not working, okay? ;)

Even on a file server where all users may write (that's not always the case) software RAID can work well. On a file server attached to a 100 Mbit LAN where just a few may write, the write cache would not make your LAN bandwidth any higher. On a web server, development server, print server, ... in a slow LAN or on the Internet (!) SW RAID is fast enough in many cases. And independent of that, somebody may risk enabling the write cache even on SW RAID because the server is not that critical (there may be others where it matters a little bit more).

Greetings,
user100
 
RAID has nothing to do with backup.

Yes, I know. So you are not on the safe side if you use HW RAID (with battery) either. It's just a little bit more secure in some cases and can cause trouble in other cases. For example, if you want to use your old server a little bit longer, your controller dies, and you have to spend a lot of money to get a "new" one. Buying a simple HD is normally not so difficult. But anyway, you can migrate to a newer server easily. It's just an example.

Greetings,
user100
 

this discussion makes no sense to me; you can install whatever you want.
 
Just one example: you don't have a battery for the RAM, so there is a very high risk of losing data when something unusual happens. A HW RAID controller comes with a battery for its cache.

Not all HW RAID controllers come with a battery.

Also, in case of a power failure, the risk of losing data is the same if you use:
- no RAID at all
- Linux software RAID
- hardware RAID with no battery backup

Also, disabling the write cache on your disks just eliminates that risk, at a slight cost in performance if you write a lot (not sure if virtualization is a good idea for very I/O-bound tasks anyway).
 
I don't understand why it's a bad idea.
I know how to manage a soft RAID with mdadm ...

I just want to know how to enable it.
How does OVH set up Proxmox in RAID on their dedicated servers?

If not, what type of filesystem do I need to use on my /var/lib/vz if I put just this partition on my RAID?

I guess there are some misunderstandings about using software RAID here.

Anyway, if the installer doesn't offer that option, you do it "low level" - this method works for all distros and all setups ;)

Do you want to use RAID for everything, or just for /var/lib/vz?

If just for /var/lib/vz, then:
- stop everything accessing /var/lib/vz
- create RAID, make a filesystem
- mount /var/lib/vz on your new /dev/mdX device, copy old data, change /etc/fstab entries

If you want RAID for everything, you have to do the above, but boot from a live CD to do it.
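
As a rough sketch (assuming two spare partitions /dev/sdb1 and /dev/sdc1 as examples, and ext3 as the filesystem), the /var/lib/vz variant could look like this:

    # stop containers/VMs and anything else writing to /var/lib/vz first

    # create a RAID1 array from the two partitions and put a filesystem on it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mkfs.ext3 /dev/md0

    # copy the existing data onto the new array
    mount /dev/md0 /mnt
    cp -a /var/lib/vz/. /mnt/
    umount /mnt

    # mount the array on /var/lib/vz and make it permanent
    mount /dev/md0 /var/lib/vz
    echo '/dev/md0 /var/lib/vz ext3 defaults 0 2' >> /etc/fstab

    # record the array so it is assembled at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf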
 
I just got this working tonight... is it ok if I post "use at your own risk" instructions, or would the mods rather I not post it?

I'm new here, which is why I'm asking :D
 
I've been gathering info on RAID, as I need to get a new NAS that'll accommodate different types of data, including databases and user homes (they have to be on the same machine, budget restrictions). First and foremost, RAID is old, obsolete and should die. Still, there are few other solutions and RAID is the most common by far, so we have to live with it.

Now, in regards to sw vs hw RAID, the argument isn't a simple one.
It's not a matter of "sw RAID is inappropriate for a production environment". If you ever lost a controller, had to wait 2 weeks to get a new one and hunt down older firmware because the more recent one doesn't recognize your array, you'll be wishing you had sw RAID, I can promise you.
It's not only performance either, because a good hw RAID can give most sw implementations a licking... but not always.
It's not cache + batteries either (oh, also, batteries die), because lack of cache is a real problem with RAID 5 for instance... not so much for, say, RAID 10. Also, if you say stuff like sw RAID isn't fit for production and use something like RAID 5... but I digress (and bashing RAID 5 is beating a dead horse).

It is, mostly, a matter of cost. True, hw RAID can be better for some applications, just as sw RAID is better for others. There are a huge number of discussions on the matter online as to which applications should use which implementation. Still, when I am installing a system on a machine that has no RAID card, I like to be able to rely on "good" old mdadm. For instance, on my test server. When I use Proxmox on such a machine, I'm gonna have to waste a couple of hours just to get a RAID setup, copy the info to it, resize LVMs and filesystems, etc., and I'm gonna end up with a config that you recommend against. I recognize the risks you presented, I evaluated them and accepted them... just like I did when I chose Proxmox in the first place. The difference is, I had way more trouble than I should have, just because you chose to impose the no-sw-RAID rule. It's your prerogative, I know, and Proxmox saved me quite some work later on, but why shouldn't it also have given me the liberty to choose for myself whether or not I wanted sw RAID? I don't want it out of spite, I need it because I have no other alternative.

So, please, consider including this on your standard install. Label it an "expert" install option if you have to, but please, make this wonderful tool even better by including something many will surely find useful.

In the meantime, I'll be in the corner fiddling with Solaris and ZFS. Now there's a decent RAID-like implementation :D
 
as soon as we get the flexible storage model, you can add additional storage for your VMs (remote and local) in an easy way.
 
I know, it's one of the features I'm anxious to try (and I've been holding off doing it "by hand" because I wanna test this). Still, it doesn't solve the problem of not having sw RAID for the operating system, for instance, or, for a more particular case, it still makes me spend some extra time trying to get RAID going on a couple of old servers I manage that don't have hw RAID (and btw, having sw RAID on them has saved my butt a couple of times).

I'm glad to see Proxmox coming along, and when I get more free time (too swamped right now), I'll gladly contribute again (hopefully with code instead of just translations). Still, please consider including sw RAID in the install. Make a poll or something, just to see how the community feels about it.
 
If you ever lost a controller, had to wait 2 weeks to get a new one and hunt down older firmware because the more recent one doesn't recognize your array, you'll be wishing you had sw RAID, I can promise you.

I understand you. :D


Still, when I am installing a system on a machine that has no RAID card, I like to be able to rely on "good" old mdadm. For instance, on my test server.

Me too. Even if I have a machine with a "pseudo-hw-RAID controller" I prefer sw RAID with mdadm. Only if there is a real hw RAID would I use it.


I recognize the risks you presented, I evaluated them and accepted them...

And there is still a risk in using hw RAID controllers. They have some advantages in some cases, but it's hardware that can die. You are not the only one who knows a story about firmware, so that's not just theory. It does not matter if you have 5 healthy, well-working hard drives in a server - all of them would stop functioning (hard) if they were attached to that broken controller. So in that case it would be better to have a software RAID with one broken hard drive (and a fully charged UPS) than a server with one lonely broken hw RAID controller. Of course you can plug in more than one controller and additional hard drives too - but that all costs money. A broken hard drive is defective hardware and that's not really good, but in comparison to a lonely broken hw RAID controller, chances are good that you can still work with that server.
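
Just to illustrate what that looks like in practice with mdadm (the device names are only examples):

    # check the state of the arrays
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # kick the dying disk out of the array
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1

    # after swapping in a replacement disk and partitioning it like the
    # surviving one, add it back; the mirror rebuilds while the server runs
    mdadm /dev/md0 --add /dev/sdb1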


So, please, consider including this on your standard install. Label it an "expert" install option if you have to, but please, make this wonderful tool even better by including something many will surely find useful.

Yes, please Proxmox team, be nice and give us sw RAID! It's possible to set up sw RAID manually, but if you have already installed some Linux distribution with sw RAID in the past, that issue feels a little bit... uhm... curious.


Greetings,
user100
 
Bounty

Just thought I'd weigh in on this issue because it hasn't been completely beaten to death yet.

I think Proxmox VE is almost perfect. Fast, light and non-proprietary. The migration feature is incredible (haven't tried it yet as I only have one VE server running). What would make it perfect, and I'm speaking for myself only, is the inclusion of softraid support.

I'm a fan of softraid, I use it on all of my servers. I've had much better luck with performance and recovery than with hardware solutions. IMHO the more hardware, the more points of failure.

As one of the Proxmox developers mentioned, they heard lots of people bitching about the lack of softraid but no one was willing to pony up and kick in a little green.

Well, I'm willing. Please respond to this thread if you're willing to contribute to a bounty for softraid support. Proxmox, please give us a reasonable goal that would make it worth your while. And let's see what happens.
 
I fully agree with lborkey: Proxmox VE is almost perfect, and the inclusion of software RAID would make Proxmox VE the perfect solution. From a development point of view, however, I can understand that Proxmox will not support software RAID. So first of all, Proxmox: great work!!!

I think most people would like to see a software RAID 1 solution as some kind of "basic protection". For other RAID levels (and performance) they should buy a hardware RAID card. By the way, the 3ware 9650 and 9690 series work with Proxmox :D

Why do I need a software RAID 1 solution: we manage > 100 servers and we would like to add an OpenVZ virtualization layer on each server (1 container per server). With this setup it's much easier in the future to do migrations and upgrades. All these servers currently run CentOS 5 with software RAID 1. Adding a good hardware RAID controller + BBU to each server would cost approx. 50,000 euro and unfortunately that's not an option :( That's the reason (for us) why software RAID 1 would be perfect.

I'd like to stick with Proxmox because the graphical interface is very well done, but without a software RAID 1 setup it's a no-go. Does anyone know a web-based management tool for OpenVZ which does support software RAID 1?

Gijsbert
 

just install a basic Debian Lenny (with whatever configuration you want) and Proxmox VE on top with apt-get. We will publish a guide for this soon, as it's very easy since V1.2.
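
Until that guide is published, the apt-based flow presumably looks roughly like the sketch below; the exact repository line and package name are unverified assumptions here, so treat the official guide as authoritative once it is out:

    # add the Proxmox VE repository (this line is an assumption; verify it)
    echo "deb http://download.proxmox.com/debian lenny pve" >> /etc/apt/sources.list

    # import the repository key as described in the official documentation,
    # then update and install the Proxmox VE packages on the Debian base
    apt-get update
    apt-get install proxmox-ve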
 
