Install on Soft Raid

Martin just published a new wiki page about installing Proxmox VE on Debian Lenny:

http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny

Thanks a lot, that works perfectly on top of a basic Debian Lenny install.

I still have problems installing over an LVM-based system, but that should be easy to solve.

Perhaps you could give suggestions as to how to partition the system before installing PVE (I know this is not a supported install)?

Anyway, thanks again, that's a great help!
 
I'm just confused about a few things, because I don't feel enlightened enough about the whole "Proxmox not supporting SW RAID" issue... people keep bringing up the performance problem but then argue about RAID 5. That seems akin to comparing the top speeds of different tractors. OK, so the hardware one is faster... but it's still slow as hell, especially when degraded.

Incorrect. This might indicate a problem with your controller, or perhaps simply cheap hardware. There are plenty of RAID controllers that handle rebuilds without slowing down.

Also, it's true that power loss isn't the only failure a server may suffer that could cause data corruption, but even if HW RAID is better at minimizing that, a decent UPS, redundant PSUs on different phases, a good backup policy, etc., can make the extra protection HW RAID gives meaningless. RAID is not backup. RAID is not redundancy (for services, I mean). RAID is nowhere near perfect. If you read up on ZFS, you can see the ton of design flaws inherent to RAID's design and specifications. It's a CHEAP (or it should be) way to make sure that the loss of a disk doesn't bring a server down or destroy data. The key part, for me, is the cheapness. With €200 I can buy, for instance, a pair of decent 500GB SATA disks. With that, I can use (assuming a small server) RAID 10 instead of RAID 5. Faster and more reliable for sure. Cheap too. Makes the "inexpensive" part of RAID somewhat more accurate.
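
To put rough numbers on that trade-off (purely illustrative, a back-of-the-envelope sketch assuming a small 4 x 500GB array, not a benchmark):

```python
# Back-of-the-envelope comparison of RAID 5 vs RAID 10 on four 500GB
# disks (hypothetical numbers, only to illustrate the capacity vs
# fault-tolerance trade-off mentioned above).

disks = 4
size_gb = 500  # assumed per-disk capacity

# Usable capacity
raid5_usable = (disks - 1) * size_gb    # one disk's worth goes to parity
raid10_usable = (disks // 2) * size_gb  # half the disks are mirror copies

# Tolerated disk failures
raid5_failures = 1                      # any single disk, never two
raid10_failures = disks // 2            # best case: one per mirror pair

print(f"RAID 5 : {raid5_usable} GB usable, survives {raid5_failures} failure")
print(f"RAID 10: {raid10_usable} GB usable, survives up to "
      f"{raid10_failures} failures (one per mirror pair)")
```

So RAID 10 gives up some capacity in exchange for simpler reads and, with a bit of luck, a second survivable failure; that's the trade I'm talking about.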

This is fairly misleading. Sun has never advertised ZFS as a hardware RAID replacement. Most of their complaints about RAID have been about the RAID 5 write hole, which is a non-issue on some controllers. They still advocate hardware RAID in production servers... along with ZFS.

If I can have HW RAID, I'll have it. If I can't, because I have no budget available or because I have something like 100 servers and the loss of one is just an inconvenience, I'll choose SW. At least if an HDD fails, the server doesn't. Better than the alternative. In either case, the difference in performance is not relevant.

But please, feel free to ignore this reply. I'm sure hw raid is the only answer that actually makes sense.

Perhaps if you have something that's important enough for 100 servers, then it's important enough for hardware RAID. If your budget doesn't allow it, then I'd take fewer servers in exchange for more with hardware RAID in them. If, on the other hand, they're unimportant and merely an inconvenience when they go down, as you say, then perhaps you don't need RAID in them at all.
 
There are controllers that handle rebuilds without slowing down... any further, I guess. You still have to read from all of the disks to get the info you need, right? You need to calculate the parity (XOR) of all the disks to get the missing info, or am I wrong? My knowledge of RAID is limited, I know, but I think this is how it works.
You've got 5 disks, 4 of them usable and 1 for parity, like so:
a: 1
b: 2
c: 3
d: 4
p: 1+2+3+4 = 10
(take + as XOR or something, I'm keeping it simple)
Let's say b fails. To read something from b, you'd calculate p - (a+c+d), so you'd have to read from all of the disks. In actual RAID 5, since you use XOR, you just calculate the XOR again, but that's beside the point. This seems to have some influence on performance, no? Are there controllers that bypass this issue?
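
Just to make the XOR version of that toy example concrete (a throwaway sketch using the same made-up block values as above, nothing like a real stripe-level layout):

```python
# Toy RAID 5-style parity reconstruction with XOR (illustrative only;
# real arrays work on fixed-size stripes and rotate the parity block).

a, b, c, d = 1, 2, 3, 4     # data "blocks" on four data disks
p = a ^ b ^ c ^ d           # parity block = XOR of all data blocks

# Say disk b fails: its block is rebuilt by XOR-ing the parity with
# every *surviving* data block, which is exactly why a degraded read
# has to touch all the remaining disks in the stripe.
b_rebuilt = p ^ a ^ c ^ d
assert b_rebuilt == b
print("recovered block from failed disk:", b_rebuilt)
```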

ZFS was not announced as a HW RAID replacement, no. There are even use cases (on Thumpers, with 48 HDDs) that use HW RAID + ZFS. Still, you can do a pretty good job of replacing SW RAID with ZFS, methinks. I've yet to find a decent performance comparison between SW RAID, HW RAID and RAIDZ, for instance, but from what I've seen, it's not bad. I'm not sure, but I think their 7000 series doesn't use HW RAID, only RAIDZ and ZFS mirroring, right?

On the 100-server scenario, I agree it's a bit extreme and you're right either way. But let me just run this scenario: I have 100 servers (hypothetically). They each use up about 20GB of HDD space, commodity hardware. I get a pair of 80GB SATA drives for peanuts. I set up RAID 1 on top of that, ensuring a bigger MTTF. Even if the downing of a server is a minor nuisance, it's a nuisance nonetheless. With this solution, I get a bit more data security and fewer potential nuisances for about what, €20/server? I might be interested in that.

Also, whether HW or SW RAID is better is a whole 'nother ballgame.

(BTW, can you please give me info on decent RAID controllers and the additional advantages they might have?)
 
Hi,

Brief response - I haven't been following this thread closely (busy week).

- I think the main reason for ProxVE not supporting SW RAID as an out-of-the-box config is that it creates the potential for added complexity and for more user support issues. The ProxVE team is not a large one (at least that is my understanding) and they do not have unlimited resources to throw at this project.

So it becomes a matter of putting together a product (ProxVE) in a config which meets their own needs. (Remember - ProxVE as a virtualization platform IS put together by a company to meet its own needs, as the first priority; providing an amazing open-source virtualization platform to the community is a secondary issue - a very nice 'fringe benefit' for sure, but first and foremost it is there for another specific reason. And that specific reason does not include support for each and every user-requested feature in the entire community at large.)

At least this is my interpretation. Folks from ProxVE may disagree with me if they feel I am misunderstanding things. But certainly, as someone who runs a small I.T.-related business myself, I am *very* aware of the fact that there are only 24 hours in a day and 7 days in a week, and that at the end of the day, if you need to get paid occasionally for doing work, it is nice for that to actually happen now and then to make life run smoothly. You can't spend all of your time doing "good things" just "for fun" because they are good ideas / of interest / etc.


The fact that the newer (1.2 and on) releases of ProxVE can be installed on top of a stock Debian install as a starting point - rather than using the ProxVE install media to set up a "virtualization appliance" on bare metal as the starting point - will likely mean that 'fairly soon' someone who is motivated will figure out precisely how to install ProxVE on top of a software RAID configuration, then document their work and make that doc available to the community at large. Of course it will be an 'unsupported config', so not a major support liability concern for the ProxVE development team, which is a good thing IMHO.


In terms of HW vs SW RAID (as religious or philosophical topics) - those are best discussed elsewhere :-). For sure, I would always advocate having cold spare hardware on hand if you have gear in production and concerns about the viability of the gear, or depending on how critical your deployment is (i.e., in case of HW failure of the RAID controller on a production system). Typically, my experience is that folks who are so concerned about econo-box servers are ultimately less concerned about 99.9999% uptime; although this isn't to say that you can't have good-uptime gear, nor use 'good operating practices', when deploying more cost-effective hardware. It is more that the mindset of "save money at all costs" often does not go hand in hand with "best practices / prudent planning in advance / documented procedures / etc.".

Anyhow. I'm surely rambling now, so will stop writing. But I hope this helps clarify slightly what I meant on this topic :-)



Tim
 
I have 3 servers with hardware raid, but I can understand that people like to be able to install it on software raid.

If you have 2 servers (+ a backup server) with software RAID and a disk in the first server breaks down, you get an email from mdadm (or another tool) and you can migrate (live migration) the VMs to the other server. Fix the disk problem and migrate the VMs back again.
(VMs might get a bit slow if you now have a lot of VMs from both servers running on one, but no downtime.)
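
Just to illustrate what that mdadm notification is keyed off (a minimal sketch only; in practice you simply set MAILADDR in /etc/mdadm/mdadm.conf and let mdadm --monitor send the mail):

```python
# Minimal sketch: flag degraded md arrays by parsing /proc/mdstat.
# A healthy status block looks like "[UU]", a degraded one like "[U_]".
import re

def degraded_arrays(mdstat_path="/proc/mdstat"):
    degraded, current = [], None
    with open(mdstat_path) as f:
        for line in f:
            name = re.match(r"^(md\d+)\s*:", line)
            if name:
                current = name.group(1)
            # member-status lines end with something like "[2/2] [UU]"
            status = re.search(r"\[([U_]+)\]\s*$", line.rstrip())
            if status and "_" in status.group(1) and current:
                degraded.append(current)
    return degraded

if __name__ == "__main__":
    bad = degraded_arrays()
    if bad:
        print("Degraded arrays - migrate the VMs off this node first:", bad)
    else:
        print("All arrays healthy.")
```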

Most people who use software RAID do not have a large server farm but are small users. For the price they have to pay for a good hardware RAID card, they can almost buy a cheap second server, which gives them more security and uptime than the hardware RAID card, I think.

That's my thought about this subject :).
 
Sorry to steer this conversation away from the main issue a little bit, but there are a couple of interesting benchmarks of RAIDZ and mirroring on ZFS vs HW RAID that may be relevant to what was discussed, in terms of performance:

http://milek.blogspot.com/2006/08/hw-raid-vs-zfs-software-raid.html
http://milek.blogspot.com/2006/08/hw-raid-vs-zfs-software-raid-part-ii.html
http://cmynhier.blogspot.com/2006/05/zfs-io-reordering-benchmark.html
http://cmynhier.blogspot.com/2006/05/zfs-benchmarking.html
 
Hi,

Those benchmarks on HW RAID vs SW (ZFS) are a bit old, no? The HW RAID controller being discussed is 'quite dated' (Sun 3510).

Again, to reiterate: I think the SW vs HW RAID issue is not about performance, it is about management decisions.

ZFS is very interesting - there is no disputing that. (As an aside, I'm still a bit paranoid about it, given some of the well-documented 'total data loss' incidents with it in the past few years.)

If you really want to use ZFS with ProxVE, you can.

- set up a 'storage appliance' running OpenSolaris with ZFS filesystems for data/export
- export these via the protocol of your choice (iSCSI, NFS, even AoE if you want)
- mount the data store on your ProxVE host (manual config of course, initiator/client install and config 'by hand') and you are laughing

Of course, all your data goes through a pipe now (10gigether, gigether, trunked gigether, whatever) so likely you lose some performance/throughput..
but you have high capacity affordable bulk storage (*affordable = hardware costs, not counting time involved to set it up :-)

Tim
 
I know, I can't seem to find a more recent benchmark... I liked the one that compares filesystems, though. I can't actually replicate the tests due to hardware and/or time constraints, so I must rely on what the Internet provides :D Also, I had actually considered using such a setup, but just for testing. If I get around to it (way too busy atm), I'll post up info. Also, the ZFS recovery issues and total data losses that you mentioned are, along with the can't-shrink-a-pool thing, my only beefs with ZFS. I can't actually find recovery guides, which is something I find uncomfortable.

In any case, I only pointed to ZFS's RAIDZ as proof that a SW implementation can be good enough, performance-wise, to be used in production environments. Of course there may be far better HW controllers, but they come at a cost. In any case, I still maintain that if I can't have a HW card, I'll take SW RAID 1 over a single disk any time. Also, and I don't have the benchmarks to prove it, I'm willing to bet that a HW RAID 5 is, in terms of performance, overall slower than a SW RAID 10. Also probably more prone to failures. If I think about it in terms of pricing, I may be more inclined to buy more disks than to buy a HW card (although I go for HW RAID whenever possible - best practices and all, but the economy isn't helping atm), depending on the usage.

Basically, I just think SW RAID on a bare-metal Proxmox install would be nice :D
 
Hi Neoscoprio,

just a few footnotes on your comments, since I wonder if there is still any ambiguity in your interpretation of my prior posts:

- I don't think for an instant that simple performance or reliability are significant reasons to choose HW RAID over SW RAID. (Possibly 'breadth of features' - such as "I need a RAID volume spanning 16 drives" or "I need background scrubbing / scanning for bad blocks on my RAID volume" - but that is a different issue.)

- Based on my past experience, I will not ever give a client the option to deploy a production server without some form of RAID disk redundancy, period. I won't work on a project if the budget is so tight that there is no money for this. Software RAID is cheap and easy and effective. In cases where SW RAID isn't easily supported (such as ProxVE), hardware RAID alternatives exist which are only marginally more expensive than SW RAID.

- I agree that having SW RAID support in the ProxVE bare-metal install would be 'nice', but we already know this isn't going to happen. The ProxVE development team is very clear about its preferences, and it is ultimately its decision. At some point, someone will clearly document the manual install approach for ProxVE onto a SW RAID + LVM Debian bare install, and then people who choose to go this route may do so. But it will introduce more 'issues' for maintenance in the longer term, I think.


so. just trying to clarify.


Tim
 
OK, since reliability and performance were the main reasons mentioned in the thread, I thought it was one of your arguing points. I agree with you on the extra features.

I like your policy and fully agree. Still, one note on the whole "marginally more expensive": it depends on the margin. There are "HW" cards out there that are the RAID equivalent of the old AC'97 soft modems. Those are far worse than a plain SW install.

I respect the dev team's decision, I just wanted to contribute to a healthy discussion on the matter. I like Proxmox a lot and would like to see it grow. Since I know it is aimed at mid-level users (and the interface doesn't really require advanced skills from the admin, kudos for that), I thought that a simple way to have SW RAID could help. I have both HW and SW RAID on several servers, so I don't actually need the team to implement this (although it'd save me some work). It's just that I think it'd help as a selling point...

In any case, I much enjoyed this discussion with you, especially where it relates to ZFS, something I'm just learning about, so, if you want, we can carry on this discussion via PM or something.
 
Can you have a Proxmox system like this:

1 SATA disk = OS drive

Software RAID drives = drives for the VEs?

Thanks,

I would like to know if this is possible, but using hardware RAID 1 instead.

I am using an INTEL RAID CARD for my system.

I ask because currently it doesn't seem like all the drives are being picked up successfully.

Any help will be greatly appreciated.
 
If all the drives are not being picked up correctly, then I think your RAID card is software RAID, not hardware RAID.

So check if your card is hardware RAID.

Is it on the motherboard? (software raid)
 
Our kernel does not use the update-initramfs tools and is thus incompatible with that (each kernel update will break your modification).

Is that still the case with the 1.3 version, or are we now able to use the Proxmox kernel with SW RAID without problems on kernel updates?
Thanks
 
Is that still the case with the 1.3 version, or are we now able to use the Proxmox kernel with SW RAID without problems on kernel updates?
Thanks

I have software RAID working under the latest version of Proxmox, just to get two drives to show up as one volume.

The Intel RAID on that machine of mine seems to be host/BIOS RAID, not hardware.
 
I have software RAID working under the latest version of Proxmox, just to get two drives to show up as one volume.

The Intel RAID on that machine of mine seems to be host/BIOS RAID, not hardware.
So you have BIOS RAID (dmraid) working with Debian and with the Proxmox 1.3 kernel, is that true? Do you use dmraid, or an Intel driver? I think the Intel drivers are only for RHEL or Novell.
Thanks
 
So you have BIOS RAID (dmraid) working with Debian and with the Proxmox 1.3 kernel, is that true? Do you use dmraid, or an Intel driver? I think the Intel drivers are only for RHEL or Novell.
Thanks

I did not use the Intel RAID.

I used Debian software RAID. If you want me to check how I set it up, I can do that later; I just need to head off to work right now.
 
There is no Debian software RAID ;-) The question is whether you use mdadm or the device-mapper (dmraid) software RAID implementation.
Dietmar, my question was:
"Is that still the case with the 1.3 version, or are we now able to use the Proxmox kernel with SW RAID without problems on kernel updates?"
I would be grateful if you could answer it.
Thanks
 
Hardware RAID is better, but I agree with drdebian that software RAID isn't always a bad idea. In most cases it's better than no RAID at all. A decent RAID controller will set you back about 500 bucks, and that's not always an option. It does make life a lot easier, though.
A cheaper alternative could be to boot your system from something that doesn't break as often as a hard disk, like a CompactFlash card or a small SSD, and then set up software RAID for the hard disks.
If the computer is on a UPS, a dedicated battery for the cache isn't necessary, and the cache could also live in RAM.
 
