Install on Soft Raid

I get e-mail notification (/etc/mdadm.conf). With /proc/mdstat I see which one is the faulty disk. The only thing I need to know is where the servers are located in the server room, but that's a matter of administration.
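
The notification mentioned here is a one-line setting in /etc/mdadm.conf (the address below is a placeholder):

```shell
# /etc/mdadm.conf - mdadm's monitor mails alerts about degraded arrays
# to the address given by the MAILADDR keyword (placeholder address):
MAILADDR admin@example.com

# To verify the mail path works, a test alert can be sent per array:
# mdadm --monitor --scan --test --oneshot
```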

yes, email works fine with mdadm (in fact much better than on most RAID controllers), and with /proc/mdstat you see which one is faulty, but how do you map that to the physical drive?

opening the case to watch the activity lights on the disks, or running dd to find the disk, is too much risk for me, and also a task which can never be delegated to basic IT staff.
 
yes, email works fine with mdadm (in fact much better than on most RAID controllers), and with /proc/mdstat you see which one is faulty, but how do you map that to the physical drive?

opening the case to watch the activity lights on the disks, or running dd to find the disk, is too much risk for me, and also a task which can never be delegated to basic IT staff.

How do you do the above if your hardware controller doesn't present any good/faulty LED to the outside?
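
For the "which one is faulty" half of the question, at least, no LED is needed: a failed member shows up in /proc/mdstat with an (F) flag, which is easy to pull out with a script. A minimal sketch (the degraded-array output is embedded as sample text here so it can be run without a real failure; on a live system you would read /proc/mdstat directly):

```shell
#!/bin/sh
# Find the failed member of an md array.
# On a real system: mdstat=$(cat /proc/mdstat)
# Sample output of a degraded RAID1 embedded for illustration:
mdstat='md0 : active raid1 sdb1[1](F) sda1[0]
      488254464 blocks [2/1] [U_]'

# Failed members carry an "(F)" flag after their slot number.
failed=$(printf '%s\n' "$mdstat" | grep -o '[a-z]*[0-9]*\[[0-9]*\](F)' | cut -d '[' -f 1)
echo "failed member: $failed"   # -> failed member: sdb1
```

Mapping that kernel name to a physical slot is the part that still needs per-machine bookkeeping, as discussed below.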
 
If a lone hardware RAID controller burns out, then it's not just "not much more" ;) - in that case the hardware RAID is much worse than a software RAID. If you know what you are doing, software RAID can be a good thing. We have a backup of important servers on tape, but I have never needed it to reconstruct a whole server.

You should definitely try to reconstruct the whole server once ;)
You'll be amazed that these tapes fail only when you really need data from them...
"I thought we had backup, damn!" ;)


Mostly I needed the backup when some user deleted data by mistake (and there RAID does not really help anyway).

Yes, RAID is not backup.
For some people it's the same thing (they usually learn after their first major data loss, though).
 
I get e-mail notification (/etc/mdadm.conf). With /proc/mdstat I see which one is the faulty disk. The only thing I need to know is where the servers are located in the server room, but that's a matter of administration.

Yes, /proc/mdstat is fine (and shows you the state while the RAID is rebuilding). These days (SATA/SAS) maybe somebody is happy to own some (cheap) hotplug slots too (and has the idea to write some sda, sdb stuff on a sticker)... :D

Greetings,
user100
 
yes, email works fine with mdadm (in fact much better than on most RAID controllers), and with /proc/mdstat you see which one is faulty, but how do you map that to the physical drive?

opening the case to watch the activity lights on the disks, or running dd to find the disk, is too much risk for me, and also a task which can never be delegated to basic IT staff.

In my case:

mainboard_sata0 --> sda
mainboard_sata1 --> sdb

/proc/mdstat tells me which disk. It's that simple, as long as the above is true :D
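
When the port-to-sdX mapping is less obvious than above, the kernel can tell you which controller port a disk hangs off, and the drive's serial number can be matched against its printed label. A command sketch (output is machine-specific, and smartctl comes from the smartmontools package; run once and write the result on those stickers):

```shell
# Symlinks under by-path encode the controller port each disk sits on
# (e.g. ...-ata-1 -> sda), which usually matches the mainboard numbering:
ls -l /dev/disk/by-path/

# Read the serial number of the suspect disk and compare it with the
# label printed on the physical drive before pulling it:
smartctl -i /dev/sdb | grep -i serial
```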
 
You should definitely try to reconstruct the whole server once ;)
You'll be amazed that these tapes fail only when you really need data from them...

You mean all of them? ;) - Why should I test this when it just won't work when I actually need it? But anyway, when the hardware RAID controller fails (yes, we use hardware RAID too) and the tapes are all crap, you get the chance to start with a brand new, ehhrm... fast and "clean" system, right? :D
 
I think people keep forgetting that the I in RAID means Inexpensive (not Independent, as is common to find on the tubes). A HW RAID controller + battery is not inexpensive. For small servers/test machines with only something like 4-7 disks, it's actually quite a percentage of the overall cost of storage.

Also, as for the HDD cache: true, it should be disabled. It's also true that in any decent datacenter you have two electric phases (independent supply lines, I don't know the proper term in English), UPS (possibly redundant), and backup generators (the generator of the datacenter I'm moving to has an autonomy of around one week at 50% load... really!), which translates into quite a low probability of power failures. Most servers also have redundant power supplies. And in RAID 10 the cache is still relevant, of course, but not as much as in RAID 5, so it can be disabled without a big kick in the shins in terms of performance.

Also, I like shiny LEDs as much as the next guy, but it also depends on cost, like everything mentioned. It's a tradeoff. I just argue for SW RAID as a possibility for Proxmox, not as a law.
 
If not, the operating system gets feedback that data is written while it is still in the cache (not protected by a BBU) - in the case of a power failure you lose data, and in the worst scenario you cannot start any of your virtual machines on that host.

Hardware RAID:
You need to enable the cache on the hardware RAID controller and DISABLE the cache on the hard disks. That way you still get fast performance thanks to the fast controller cache, and you are not at any risk of losing data, as this cache is protected by the BBU.

Am I wrong?
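
The "DISABLE the cache on the hard disks" step can be done from Linux itself; for SATA drives hdparm's -W flag toggles the volatile on-disk write cache. A sketch, assuming /dev/sda and /dev/sdb are the array members (SAS drives would use sdparm instead, and the setting may need to be reapplied at boot):

```shell
# Turn off the volatile write cache on both RAID members
# (-W0 = write cache off; -W1 would re-enable it):
hdparm -W0 /dev/sda
hdparm -W0 /dev/sdb

# Verify the current setting:
hdparm -W /dev/sda
```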

There seems to be an assumption here that there is no other way of protecting against a power failure than to have a hardware RAID with a BBU. If a server is attached to an uninterruptible power system and has been configured to perform an orderly shutdown when the UPS switches to battery, then would not a software RAID system have just as much opportunity to take care of any cached writes in the process?
 
There seems to be an assumption here that there is no other way of protecting against a power failure than to have a hardware RAID with a BBU.

Your assumption that a power loss is the only possible failure is wrong. There are many different error scenarios.
 
just install a basic Debian Lenny (with whatever configuration you want) and Proxmox VE on top with apt-get. We will publish a guide for this soon, as it's very easy since V1.2.

great, I'd love to see this quick guide with a recommended disk partitioning and detailed instructions to install Proxmox VE, because right now I can't install it on a rented server with no CD access.

Thanks in advance
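
For reference, the CD-less install described above boils down to a few apt steps; the repository line and key URL below are my assumption of the usual form and should be double-checked against the official guide once it is published:

```shell
# Add the Proxmox VE repository for Lenny (verify the exact line
# against the official documentation):
echo "deb http://download.proxmox.com/debian lenny pve" >> /etc/apt/sources.list

# Import the repository signing key, then install the meta package:
wget -O - http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update
apt-get install proxmox-ve
```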
 
Quick question... if I install a lenny and then apt-get install proxmox, I'll still have to modify the module lists for the proxmox kernel, right?
 
To sum up, hardware RAID does not protect you much more than software RAID with default Proxmox VE settings (unless I'm mistaken somewhere). And, RAID is no backup replacement ;)

Right. Because software RAID works great in the face of things like RAID 5 write holes. What about boot-time data protection, or write-back caching?

There's a whole slew of performance issues that are well known for exacerbating write hole issues under high loads.

Anyway, I might agree that software RAID is okay in a pinch. However to claim they have the same reliability is simply false.
 
To sum up, hardware RAID does not protect you much more than software RAID with default Proxmox VE settings (unless I'm mistaken somewhere).

This is wrong, because you do not consider OpenVZ VMs. And as stated before, we will change the defaults for KVM.

- Dietmar
 
Has someone succeeded in installing proxmox over a basic lenny, using apt-get?

What I'm (unsuccessfully until now) trying to do is lenny + raid1 + LVM + proxmox.
 
Just to add to the chorus, originally I was keen on SW raid to be supported in ProxVE, but I understand the reasons for not having the feature there - and support the development team for their decision :-)

This topic has been discussed extensively in the forums already, so not much benefit to re-re-re visit it. Search the forums.

You can get decent econobox hardware RAID for $200 Cdn or less (Areca SATA true HW RAID controller), so the cost argument against HW RAID is not entirely 'true'...

Anyhow. Just a comment. But just to point out, the discussion has happened a number of times already, and based on prior discussion, I doubt there will be any change to SW raid support in ProxVE anytime soon.


Tim Chipman
Fortech I.T. Solutions
http://FortechITSolutions.ca
 
Just to add to the chorus, originally I was keen on SW raid to be supported in ProxVE, but I understand the reasons for not having the feature there - and support the development team for their decision :-)

This topic has been discussed extensively in the forums already, so not much benefit to re-re-re visit it. Search the forums.

well, I might have misunderstood, but I'd swear that since the last version it should be possible to do an apt-get install of the Proxmox system; perhaps you could review this thread to be sure?

You can get decent econobox hardware raid for $200 Cdn or less (Areca SATA raid true HW controller) so the cost argumet against HW raid is not eitirely 'true' ...
please, there is no cost reason behind my question. I already tried Proxmox on a test server using the recommended way (HW RAID and CD install) and it worked great. That's why I'd like to use it on a *hosted* server, which doesn't (and never will) have HW RAID, and offers no access to the CD tray.

Anyhow. Just a comment. But just to point out, the discussion has happened a number of times already, and based on prior discussion, I doubt there will be any change to SW raid support in ProxVE anytime soon.
ok, let me rephrase: I'd like to install proxmox over a basic Lenny and LVM, using *no* CD, and not talking about SW Raid.

Is that better now, or should I open another topic for that, since I understand I'm probably in a wrong one, now that I don't talk about SW Raid?
 
I understand the reasons for not having the feature there

If you understand the reasons, can you please tell me? I still don't know a Proxmox-specific reason. I read some earlier discussions too and still don't know. In my opinion, the arguments regarding "why not SW RAID" sound a little more like a generic "SW vs HW RAID" discussion. Should SW RAID be dropped from GNU/Linux entirely because you don't get an LED on the front of your server, or because a power loss (without UPS) or a defective power supply in the server can cause trouble?
Yes, it's true SW RAID can be worse in some cases. But it's also true that HW RAID can be worse in some cases, and it costs additional money. If your "$200 Cdn or less" controller dies, the server stops (crashes hard) unless you have another "$200 Cdn or less" controller with additional ???$ hard drives in your server. If that server is down and you get another "$200 Cdn or less" controller, it's possible you get into trouble regarding firmware versions. So in both cases you are not on a 100% safe side and should make your periodic backups.


Greetings,
user100
 
Has someone succeeded in installing proxmox over a basic lenny, using apt-get?

What I'm (unsuccessfully until now) trying to do is lenny + raid1 + LVM + proxmox.

yes, that's pretty easy. Just add our repo to your sources.list, add the repo key, and run apt-get install proxmox-ve.

Where do you have a problem? What's the issue?

(If you want to use LVM snapshots you need to configure LVM properly.)
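
"Configure LVM properly" here mainly means leaving unallocated space in the volume group, because each snapshot needs its own extent allocation. A sketch, assuming the default pve volume group and a data logical volume (names may differ on a hand-rolled Lenny install):

```shell
# Check how much free space the volume group has left for snapshots:
vgdisplay pve | grep -i free

# A snapshot of a guest volume can then be created for a backup run
# and removed afterwards:
lvcreate --snapshot --size 1G --name vzsnap /dev/pve/data
lvremove -f /dev/pve/vzsnap
```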
 
I'm just confused about a few things, because I don't feel enlightened enough about the whole "proxmox not supporting SW RAID"... people keep bringing up the performance problem but then argue about RAID 5. That seems akin to comparing the top speed of different tractors. OK, so the HW one is faster... but it's still slow as hell, especially when degraded.

Also, it's true that power loss isn't the only failure a server may suffer that could cause data corruption, but even if HW RAID is better at minimizing that, a decent UPS, redundant PSUs on different phases, a good backup policy, etc., can make the extra protection HW RAID gives meaningless. RAID is not backup. RAID is not redundancy (for services, I mean). RAID is nowhere near perfect. If you read up on ZFS, you can see the ton of design flaws inherent in RAID's design and specifications. It's a CHEAP (or it should be) way to make sure that the loss of a disk doesn't bring a server down or destroy data. The key part, for me, is cheap. With €200 I can buy, for instance, a pair of decent 500GB SATA disks. With that I can use (assuming a small server) RAID 10 instead of RAID 5. Faster and more reliable for sure. Cheap too. Makes the Inexpensive part of RAID somewhat more accurate.

If I can have HW RAID, I'll have it. If I can't, because I have no budget available or have something like 100 servers and the loss of one is just an inconvenience, I'll choose SW. At least if an HDD fails, the server doesn't. Better than the alternative. In either case, the difference in performance is not relevant.

But please, feel free to ignore this reply. I'm sure hw raid is the only answer that actually makes sense.
 