mdadm

Alessandro 123

May 22, 2016
596
19
18
36
Is it totally impossible to add support for mdadm in PVE?

You don't have to cover mdadm configurations with your tech support, only allow the use of a software RAID other than ZFS. Something like allowing the creation of a RAID-6 with LVM on top of it.

It would be very, very, very nice in the case of limited hardware and non-critical data (given the missing bit-rot protection).
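For context, a rough sketch of the setup being requested, done by hand on a plain Debian system (the device names /dev/sdb..sde and the volume-group name are placeholders, nothing PVE-specific):

```shell
# Create a 4-disk RAID-6 array (placeholder device names; adjust to your hardware):
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put LVM on top of the array:
pvcreate /dev/md0
vgcreate vgdata /dev/md0
lvcreate -n data -l 100%FREE vgdata

# Record the array so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

This is exactly the kind of two-step (array, then LVM) that the PVE installer performs today with ZFS or LVM alone, just with an md device underneath.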
 
Alessandro 123
That's unclear. Should I manually install PVE without using the provided ISO?
That's exactly what I would like to avoid; the bare-metal installer is very, very useful.

Or can I create the RAID array from a Debian installer, then reboot into the PVE ISO and start the installation on the previously created array?
 

tom

Proxmox Staff Member
Staff member
Aug 29, 2006
13,898
462
103
What is unclear? If you want to use mdraid, you need to install manually (not via our ISO installer), and you will be running an unsupported config.

Unsupported means that we cannot help in case of issues. So if you have questions regarding mdraid, this forum is probably the wrong place.
There are many reasons explained in our community why mdraid is not a supported configuration.
 

tom

Proxmox Staff Member
Sure, that's why I've asked you to add support for mdadm in the installer without supporting it in case of issues.
This is nonsense (offering users unsupported setups in our product installer).
 
Alessandro 123
This is nonsense (offering users unsupported setups in our product installer).
That's a point of view, since you offer support only to subscribers, not to everyone.

Thus, for non-subscribers, having mdadm in the installer would be fine.
If you want support, you have to buy a license and NOT use mdadm.

Many other vendors act this way:
tech support covers only certain configurations; everything else is unsupported.

For example, Dell won't support unbranded hardware, but you can use your own hardware in a Dell server; you simply won't get support for it.

HP and every other vendor do the same.

If you buy a Red Hat support package, they will support only defined software in a defined configuration, not all packages from all repositories.

If you have a Xen license, Citrix will support only their software; if you install any additional software, that won't be supported.

If I buy a PVE license, you'll support me only on official PVE software. What if I add a media player? Would you support that? I don't think so.

It's the same with mdadm: you don't have to support it, only add the ability to use an mdadm RAID directly from the installer, nothing more.
 

LnxBil

Famous Member
Feb 21, 2015
4,439
442
103
Germany
This is nonsense (offering users unsupported setups in our product installer).
This is KISS at heart, and I like it.

@Alessandro 123: Two years ago I would have jumped right in and backed your claim, but time has passed and, in my opinion, ZFS has superseded mdadm in every way.

If I recall correctly, if you'd like to run VMware on unsupported hardware, you need to patch the installer or download a patched ISO to get it to work (e.g. for an unsupported disk controller). Would this also be an option for you? Simply adding mdadm to the package list and applying a small patch to the installer itself to add mdadm selection support (without even setting it up, like you said)?
On the other hand, installing PVE 4 on top of Jessie is not that hard and can also be done via preseed, so you can build your own install ISOs (or even PXE) based on the official PVE installer or even directly from the official apt repositories.
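A minimal sketch of that last route, assuming the PVE 4.x / Jessie repository layout of the era (repository URL and key location as documented back then; verify against the current Proxmox docs before use):

```shell
# On a freshly installed Debian Jessie system (PVE 4.x era):
# add the no-subscription Proxmox VE repository and its signing key,
# then pull in the proxmox-ve metapackage.
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
wget -qO - http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update
apt-get install proxmox-ve
```

Since this runs on an already-booted Debian, the disks can carry any mdadm layout you created during the Debian install.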
 
Alessandro 123
ZFS is better, right, but in some environments it is absolute overkill and a waste of resources.

I don't know VMware, but on Xen you can install additional packages (on a running system, not in the installer) and still get tech support for the "supported" part of the system.

Yes, I'm asking for something simple: the ability to create an mdadm array from the installer, nothing more. Just like with ZFS: choose the level, choose the disks, and you are ready to go.

Any other customization can be made after the install.
Currently it is also impossible to open a shell from the installer (as you can with a standard Debian one), so it is impossible to manually create the RAID array.

The only way is to manually install PVE from packages on a Debian system.
This is error-prone, and I really, really love the PVE bare-metal installer: just insert the CD and you are ready to go.
 
Alessandro 123
Also, keep in mind that ZFS is still not Linux-native and is not developed by the kernel developers; it's an external project.

Yes, the Ubuntu kernel has native support, but ZFS still remains an external project.

If you don't need all the features of ZFS, mdadm is a good and stable alternative, supported by all distributions; in case of disaster you'll be able to access an MD array from almost any Linux machine.
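As a sketch of that disaster-recovery path (hypothetical device names; the tools ship with essentially every distribution):

```shell
# Disks moved to another Linux box? Scan for and assemble any arrays found:
mdadm --assemble --scan

# Or assemble explicitly from the member disks (order does not matter):
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd

# Inspect the md metadata on any single member disk:
mdadm --examine /dev/sdb

# Check array state:
cat /proc/mdstat
```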

Also, keep in mind that MD is probably the best software RAID out there; nothing else can compete with its stability and flexibility. From a RAID point of view, the ONLY advantage of ZFS is bit-rot protection. MD is much more flexible in everything else (I'm still talking about RAID, not filesystems).

Please don't demonize MD.
If you want to demonize something, demonize the hardware RAID controllers.

Any hw RAID controller is worse than mdadm.

Why support hw RAID, which lacks tons of features and is much more unreliable, with massive vendor lock-in, while not supporting mdadm?

mdadm lacks bit-rot protection, but so does hw RAID, and hw RAID is supported (even with closed-source firmware).
This is really, really nonsense. Supporting a closed firmware and not the de-facto standard software RAID on Linux is diabolic :)
 
May 22, 2016
596
19
18
36
(100% of the catastrophic failures with data loss that I've had were directly related to hw RAID controllers, and I've always used high-end enterprise controllers. I've never lost a single bit with mdadm, and a few days ago I checked a 9-year-old server with its RAID in perfect shape. None of my hw controllers lasted that long without issues, most of them catastrophic.)
 

LnxBil

Also, keep in mind that MD is probably the best software RAID out there; nothing else can compete with its stability and flexibility. From a RAID point of view, the ONLY advantage of ZFS is bit-rot protection.
Probably it is, or was. The checksumming in ZFS is a BIG advantage if you like consistency.

I'd also throw in:
"time to recovery" (and initialization time) is much better with ZFS, because only used data is replicated.
Everyone who has synchronized an EMPTY disk (in any non-RAID-0 configuration, with mdadm or hw) will love this.

Currently it is also impossible to open a shell from the installer
That is a problem, indeed. I'd also like to have at least one console for things like wiping LVM signatures or otherwise working around the KISS installer.

Please don't demonize MD.
That wasn't my intention. MD has served me very well too, yet I have to admit that it has been replaced almost everywhere by ZFS, even on "low-end" devices, due to its superior features (though mostly the filesystem features, not the RAID features). I even ran a Raspberry Pi 1 with ZFS for over half a year (until the SD card died, which was anticipated eventually).
 

LnxBil

Still, what about a self-created installer image? I haven't looked into the topic, but the installer is (hopefully) also open source and can be adapted and tweaked to do exactly what you want.

I've created altered/tweaked/preseeded/kickstarted/autoyasted install images for all major Linux distributions over the last 10 years, so it should not be that hard to do with Proxmox VE.
 

alexskysilk

Well-Known Member
Oct 16, 2015
601
62
48
Chatsworth, CA
www.skysilk.com
That's a point of view, since you offer support only to subscribers, not to everyone.

Thus, for non-subscribers, having mdadm in the installer would be fine.
Of course it would be OK; the devs aren't telling you not to do it, they're telling you they don't wish to support it. They have reasons for this, enough that they addressed that very request on their wiki. If it is sufficiently important to you to have this baked into the installer, you can certainly make your own and offer it back to the community.

Open source doesn't mean governed by whim.
 

tom

Proxmox Staff Member
ZFS is better, right, but in some environments it is absolute overkill and a waste of resources.

I don't know VMware, but on Xen you can install additional packages (on a running system, not in the installer) and still get tech support for the "supported" part of the system.
...
Same here. It's just that all mdraid-related issues are unsupported.
 
Alessandro 123
Only saying that "mdadm" is not supported is not useful.

PVE officially supports hardware RAID but refuses to add even "minimal" support for mdadm. IMHO this is total nonsense in 2017.

Let's compare both systems:

HW-RAID:
pros:
- none
cons:
- huge vendor lock-in
- SPOF with potential data loss
- you *must* use the same hw controller in case of failure
- most of the time you *must* use the same controller firmware in case of card replacement
- most of the time, the broken controller is unsupported on newer systems, so you can't safely move your disks in case of failure
- you are forced to use a closed-source, badly developed firmware
- if your controller starts to write garbage (it happens), you'll end up with a messed-up RAID and data loss
- you are using a very dumb SoC that adds extra hw components to the server and a huge thermal increase (our card is rated at 87°C)
- a recent firmware bug in LSI controllers caused data loss: during a resync, the controller synced the *NEW* disks (full of zeros) onto the *OLD* disks, destroying existing data, instead of vice versa (old disk to new disk). (Yes, it's true; there is a Dell advisory about this.)

MDADM:
cons:
- none
pros:
- 100% open source, developed by Linux kernel developers, mainline
- heavily tested
- incomparable flexibility
- no vendor lock-in
- 100% supported by *ANY* Linux distribution/version in *ANY* configuration (since many years ago)
- in case of catastrophic failure, you only have to move the disks (in any order) to any other server
- you can create "strange" configurations that sometimes save your day
- no SPOF (only the server itself is a SPOF, but that's the same for any other hardware component)
- performance comparable to hw RAID, in 2017
- ability to use an SSD as a write-back cache (something like the ZIL in ZFS, in recent mdadm versions)
- no need for an additional SoC or hardware; less hardware means fewer failures
- zero data corruption caused by mdadm: it is heavily developed and used worldwide, and critical bugs are detected and fixed quickly. I've been following the linux-raid ML for some years and have NEVER seen a single corruption caused by mdadm itself
- you don't need ECC as with ZFS, which allows usage on non-enterprise hardware too (ZFS without ECC is unsafe)
- you can grow or shrink by one disk without losing redundancy during the whole resync phase
- you can reshape any RAID level to any other RAID level without losing redundancy during the whole reshape phase
- you can replace a disk (if you have a spare slot available) without losing redundancy (in other words, you don't have to remove a disk first, causing a degraded array during the whole phase; not all HW controllers can do this)
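The last three points map onto concrete mdadm commands (a sketch with placeholder device names; depending on the transition, a reshape may additionally need a --backup-file):

```shell
# Grow a RAID-5/6 by one disk while staying fully redundant throughout:
mdadm --manage /dev/md0 --add /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5

# Reshape between RAID levels (e.g. RAID-5 -> RAID-6) without losing redundancy:
mdadm --grow /dev/md0 --level=6 --raid-devices=6

# Replace a disk in place: the replacement is rebuilt *before* the old
# disk is removed, so the array is never degraded:
mdadm --manage /dev/md0 --add /dev/sdg
mdadm --manage /dev/md0 --replace /dev/sdd
```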


So, if you are supporting hw RAID, please don't tell me that mdadm is not supported because "it's obsolete" and has no bit-rot protection. hw RAID is just as obsolete and has no bit-rot protection either. If bit-rot is the main reason for not supporting mdadm, please also drop support for hw RAID. It's 2017; current servers are thousands of times faster than the puny SoC used in RAID controllers, and the performance bottleneck is a very, very old issue.

I'm not asking for official tech support, only for the ability to install PVE on mdadm+LVM directly from the installer. That's currently impossible: there is no console and no mdadm feature enabled.

You are free not to provide paid support (or support in the forum) for questions regarding mdadm, but please add it to the installer.

And how can you provide official support for HW RAID? There are tens of HW RAID cards out there, most with different firmwares and different bugs; if you support HW RAID, you have to support all of them. That's nonsense. mdadm is "one": there is only one "mdadm" and only one version (the one provided with the PVE kernel), so it is much, much easier to support.
 

tom

Proxmox Staff Member
Please name at least one drawback of mdadm that is not present in any hw card.
Sure. Almost all mdraid users are using the hard drive's write cache. If you have a power failure, this cache is lost and, if you are unlucky, you lose important data (unless you have enterprise-class SSDs with power-loss protection).

You will not get any information from mdraid about this data loss, so you think mdraid is cool, but in reality you lose data.
Of course, you can disable this cache, but then your performance is totally gone.
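For reference, the volatile drive write cache being discussed can be inspected and toggled per member disk with hdparm (illustrative ATA device name; SAS/SCSI disks use sdparm or smartctl instead):

```shell
# Show the drive's current write-cache setting:
hdparm -W /dev/sda

# Disable the write cache (safer on power loss, slower writes):
hdparm -W0 /dev/sda

# Re-enable it:
hdparm -W1 /dev/sda
```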

These are the reasons why RAID controllers have write-cache protection (BBU) and enterprise-class SSDs have power-loss protection. If everyone followed your postings, all of this would be nonsense ...
 
Alessandro 123
Sure. Almost all mdraid users are using the hard drive's write cache. If you have a power failure, this cache is lost and, if you are unlucky, you lose important data (unless you have enterprise-class SSDs with power-loss protection).
Even with ZFS, if you don't use a ZIL stored on an SSD with power-loss protection.
Even with low-end HW RAID (which you are supporting) with write-back cache forced on (as most users do, because without it, performance is a pain).

Anyway, this issue is solved in kernel 4.4 by using a write journal (a feature contributed by Facebook, AFAIK).
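The journal feature referred to here landed as mdadm's --write-journal option (mdadm 3.4 / kernel 4.4+), which closes the RAID-5/6 write hole by logging writes to a dedicated device first. A sketch with hypothetical device names:

```shell
# Dedicate an SSD partition (ideally with power-loss protection)
# as a write journal when creating a RAID-5 array:
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      --write-journal /dev/nvme0n1p1 /dev/sdb /dev/sdc /dev/sdd
```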

You will not get any information from mdraid about this data loss, so you think mdraid is cool, but in reality you lose data.
Of course, you can disable this cache, but then your performance is totally gone.
This is the same for any hw RAID that you are supporting. In addition, any hw RAID puts your data at far greater risk than mdadm, because you are relying on a closed-source firmware, where bugs are discovered only after failures, and so on.

These are the reasons why RAID controllers have write-cache protection (BBU) and enterprise-class SSDs have power-loss protection. If everyone followed your postings, all of this would be nonsense ...
That's false.
If you use cheap hardware, that is not an mdadm failure but your failure in hardware planning.

What you have described also applies to hw RAID, yet hw RAID is supported and mdadm is not. This is nonsense.
hw RAID exposes data to far more risk than mdadm. What if the hw card fails? What if you have to move the disks to a different server where support for that hw card has been dropped (as Dell/HP do with every generation change)?

I have many servers with the PERC H700, which is unsupported, untested, and unusable on any subsequent hardware. As Dell tends to change the platform every 2 years, every 2 years you'll hit this issue. If you put an H700 into an R730, you won't get any support from Dell.
Also, my H700 is not compatible (due to a different, proprietary form factor) with any recent Dell server. Same vendor, but incompatible hardware.

Forcing users to use hw RAID exposes them to more risk than using mdadm.

And, anyway, users should be free to make their own decisions. But you can't say that with mdadm you'll lose data and with hw RAID you won't.
I've been working with servers and PCs since about 2001. I have never lost a single bit on a server with mdadm (even after unclean shutdowns, which are very rare thanks to UPSs). On the other hand, I've lost about 2 PB of data (on multiple servers, multiple times) due to hw RAID issues: a bad chip, bad firmware, a broken card, a card that started writing garbage, a depleted battery not detected by the RAID card and thus with write-back still enabled, an unclean shutdown that totally destroyed the RAID configuration, an unclean shutdown that destroyed some data even with a new battery (less than 3 months old), and so on.
 
