1. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    In addition: 9 users out of 10 will not update RAID firmware. "mdadm" is effectively self-updating: when you update the kernel, mdadm is updated along with it. And kernel updates are much more common than firmware upgrades (a firmware upgrade most of the time requires a boot CD or similar, and most of the time Debian is not supported, so you can't upgrade from PVE; you have to reboot into the upgrade CD, and so on. Or you have to use the Dell update from the BIOS, which 9 times out of 10 does not connect properly to their FTP server).
     
  2. Ashley

    Ashley Member

    Joined:
    Jun 28, 2016
    Messages:
    267
    Likes Received:
    14

    At no point are they stopping anyone from using it. It might only be a few lines of code, but if Proxmox took that approach with every feature they won't support but people want, it would just be a huge mess. Proxmox provides the option to install on top of a custom Debian install.

    Installing via the packages is as close to the ISO as you're going to get, and upgrades etc. are identical. Unless they slapped a big "not supported" terms-and-conditions notice that had to be accepted every time you used software RAID in the installer, there will always be someone who uses it and then wants support. There are plenty of businesses out there that make it very difficult to use a custom setup; for example, Cisco software / VMs will do hard checks against the hardware they run on. Of all the software I have had experience deploying, I would say Proxmox is very open.
     
  3. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    I agree. But the motivations given for the lack of mdadm support are very debatable: they are supporting a much worse system.
     
  4. Ashley

    Ashley Member

    Joined:
    Jun 28, 2016
    Messages:
    267
    Likes Received:
    14
    What is the system they support which you would say is much worse than mdadm?
     
  5. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    As written above: any hardware RAID. There are multiple reasons why, in 2017, there is almost no point in sticking with a hardware RAID card.

    mdadm would be a good compromise, not the definitive solution, but a compromise. Every part of mdadm is better than any hardware card.
     
  6. Rhinox

    Rhinox Active Member

    Joined:
    Sep 28, 2016
    Messages:
    272
    Likes Received:
    35
    You can't mean this seriously...
     
  7. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    Yes, totally serious. I'm replacing all HW RAID with ZFS or mdadm in every system that I manage; I'll never go back. (Obviously, you have to use a good HBA.)
     
  8. Ashley

    Ashley Member

    Joined:
    Jun 28, 2016
    Messages:
    267
    Likes Received:
    14
    I think you're forgetting that a hardware RAID card requires absolutely no support in the Proxmox installer, as the installer just treats it as if you were installing directly to a single disk. What you're asking is for them to build a system to install and configure mdadm, which again is something they decided not to support, while still giving you all the power to do it manually.

    Also, regarding your statement about using an HBA: half of your "cons" for a hardware RAID card would then apply exactly the same to an HBA card.
     
  9. wbumiller

    wbumiller Proxmox Staff Member
    Staff Member

    Joined:
    Jun 23, 2015
    Messages:
    643
    Likes Received:
    82
    The implementation passes pointers to userspace data to the underlying storages individually when using O_DIRECT (cache=none in qemu), causing the write job for each disk to read the buffer from userspace independently. This allows any unprivileged user to cause it to write different data to each disk for the same write request, simply by racing some writes into the buffer from a second thread; reading it back immediately puts the RAID into a degraded state.

    https://bugzilla.kernel.org/show_bug.cgi?id=99171
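
    To make the failure mode concrete, here is a minimal sketch of that kind of race (my own illustration, not code from the bug report; the /dev/md0 target, block size, and iteration count are made up): one thread keeps rewriting a buffer while the main thread submits the same buffer with O_DIRECT. On an md mirror, each leg may read the userspace buffer at a different moment, so the two copies can end up with different contents.

```c
/* Sketch of the O_DIRECT buffer race described above.
 * Assumptions: /dev/md0 is an md RAID1 device used purely for testing,
 * 4096-byte alignment is sufficient. Each submitted write reads the
 * userspace buffer independently per mirror leg, so flipping the buffer
 * concurrently can leave the legs with different data. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE 4096

static volatile int stop;
static char *buf;                 /* shared, deliberately racy */

static void *scribbler(void *arg)
{
    (void)arg;
    uint8_t v = 0;
    while (!stop)                 /* keep changing the data in flight */
        memset(buf, v++, BUF_SIZE);
    return NULL;
}

int main(void)
{
    /* O_DIRECT needs an aligned buffer */
    if (posix_memalign((void **)&buf, 4096, BUF_SIZE))
        return 1;
    memset(buf, 0, BUF_SIZE);

    int fd = open("/dev/md0", O_WRONLY | O_DIRECT);   /* hypothetical test target */
    if (fd < 0)
        return 1;

    pthread_t t;
    pthread_create(&t, NULL, scribbler, NULL);

    for (int i = 0; i < 100000; i++)      /* race many writes */
        pwrite(fd, buf, BUF_SIZE, 0);

    stop = 1;
    pthread_join(t, NULL);
    close(fd);
    /* afterwards, "echo check > /sys/block/md0/md/sync_action" would
     * typically report mismatch_cnt > 0 on such a setup */
    return 0;
}
```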
     
  10. Rhinox

    Rhinox Active Member

    Joined:
    Sep 28, 2016
    Messages:
    272
    Likes Received:
    35
    Really? You said "every" (sic!) part of mdadm is better than hw-raid. Ok, so let's discuss a few things:

    System cpu-load (especially during array-reconstruction): lower with mdadm, or with hw-raid?
    Or OS-support: better with mdadm, or hw-raid?
    Array reconstruction: easier on mdadm, or hw-raid?
    Boot-loader support: easier with mdadm, or hw-raid?
    Arrays: easier done with partitions (mdadm) or with the whole disk (hw-raid)?
    System cpu-load: does it impact more mdadm, or hw-raid?

    Etc, etc. I could think of many reasons when hw-raid is better (and many reasons when sw-raid is better). I have been using both hw-raid as well as mdadm/fakeraid/zfs for many years, but I do not dare to say one of them is definitely better in every aspect...
     
  11. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,456
    Likes Received:
    310
    Sure, you can make your own decisions - simply install using the Debian installer and you get mdadm support if you want it.

    As developers, we are also free to make our own decisions (based on our experience) ...
     
  12. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    CPU load during reconstruction: negligible with modern hardware. I've rebuilt a 20TB RAID-6 in our backup server during peak hours and during 6 heavy, parallel rsync backups. I'm running the same rsync every night, and there was absolutely no delay: each backup took about 9 hours during the whole rebuild, exactly the same time they take without a rebuild.

    Today's backup (with no rebuild) took 9 hours and 11 minutes.

    What about ZFS? It's the same. But ZFS is supported...

    OS support: absolutely better with mdadm, which is kernel-native. HW RAID requires drivers.

    Array reconstruction: very similar; mdadm is able to start a rebuild automatically, just remove and plug in the disk.

    Boot-loader support: it's the same.

    Partitions vs. whole disks: you can (and should) use whole disks as RAID members, even with mdadm.

    CPU load: as written above, negligible.

    OK, but these "drawbacks" are the same with ZFS. But ZFS is supported. PVE already supports a software RAID: the one inside ZFS.
    But ZFS on small hardware is not recommended (it's an enterprise filesystem developed for high-end hardware). Using ZFS on low-end hardware, without ECC, without tons of memory, without a ZIL (used as a writeback cache), is not recommended. In these cases mdadm is far better, as you'd lose all the cool things (like bit-rot protection) that come with ZFS anyway. You can't have bit-rot protection without ECC memory. You don't have any writeback cache or power-loss protection without a ZIL on SSD.

    When all of this is missing, mdadm is better. All of this is always missing on low-end devices, and in these cases you are almost forced to use HW RAID, since mdadm is not supported.
     
  13. Rhinox

    Rhinox Active Member

    Joined:
    Sep 28, 2016
    Messages:
    272
    Likes Received:
    35
    Yes, for RAID 1/0/10. But did you ever try reconstruction of a RAID 5/6 array with ~20 drives?

    AFAIK, mdadm is supported only by Linux. HW RAID is supported by any major OS. In either case you need a driver (either as part of the Linux kernel sources, on the OS installation disk, or as an extra software package).

    Only if you have enough CPU power. Try to reconstruct an array on a heavily loaded DB server with no core reserved for mdadm, and look at what it does to your I/O...

    No, and you know it very well. For lilo/grub to work with mdadm, you need special configuration. And you need some tweak in the BIOS (to be able to boot from the 2nd disk if the 1st fails). A hw-raid array behaves like a normal single disk; no special precautions are necessary.

    Come on, you cannot compare mdadm with ZFS! Yes, there are some drawbacks common to mdadm and ZFS, but in the case of ZFS they are more than generously compensated by benefits mdadm cannot provide. *IF* I'm using software RAID these days, it's ZFS. But there are still cases where I prefer a good hw-raid...
     
  14. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    Not 20 drives, but 12, yes: 12x2TB RAID-6, as written above.
     
  15. Rhinox

    Rhinox Active Member

    Joined:
    Sep 28, 2016
    Messages:
    272
    Likes Received:
    35
    Did it generate "negligible" load? I doubt it. Calculating parity with that many drives takes some CPU time. And what's even worse, if you do not have a CPU core reserved exclusively for mdadm, other apps will be competing for CPU time and slowing down I/O. Not a problem for a lightly/moderately loaded server with plenty of CPU time to spare, but a big problem for a production server with heavy load...
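
    For a sense of where that CPU time goes, here is a minimal sketch (my own illustration, with made-up chunk size and drive count) of the RAID-5 style parity that the md layer has to compute for every stripe it writes or rebuilds: a byte-wise XOR across all data chunks. RAID-6 additionally computes a second Galois-field "Q" syndrome, so the work grows with both the number of data drives and the amount of data.

```c
/* Minimal sketch of RAID-5 style parity: P = D0 ^ D1 ^ ... ^ D(n-1).
 * Chunk size and drive count are made up; the point is only that the
 * work grows with (data drives) x (chunk size), which is where the CPU
 * time during a rebuild goes. RAID-6 adds a second Galois-field "Q"
 * syndrome on top, and ZFS computes block checksums as well. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CHUNK        65536   /* hypothetical 64 KiB chunk               */
#define DATA_DRIVES  10      /* e.g. a 12-drive RAID-6: 10 data + P, Q  */

static uint8_t chunks[DATA_DRIVES][CHUNK];   /* one chunk per data drive */
static uint8_t parity[CHUNK];                /* the P block              */

static void xor_parity(void)
{
    memset(parity, 0, CHUNK);
    for (int d = 0; d < DATA_DRIVES; d++)    /* every data chunk ...  */
        for (int i = 0; i < CHUNK; i++)      /* ... every byte of it  */
            parity[i] ^= chunks[d][i];
}

int main(void)
{
    memset(chunks, 0xA5, sizeof(chunks));    /* dummy data */
    xor_parity();
    printf("P[0] = 0x%02x\n", parity[0]);    /* 0xA5 XORed ten times -> 0x00 */
    return 0;
}
```

    In practice the kernel vectorizes these loops (SSE/AVX where available), which is why the overhead is often modest on modern CPUs, though it still competes with everything else for cycles during a rebuild.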
     
  16. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19
    Doesn't ZFS have to recalculate parity too?
    It's the same, except ZFS also has to calculate checksums, so it is heavier than mdadm in this regard.

    It's the same. Not everyone is interested in (or can safely use) ZFS features, so mdadm could be a lighter alternative.
     
  17. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,523
    Likes Received:
    402
    Again, and this is my last post here in this thread: please use mdraid if it fits your use case; it's easy for you to configure, so just do it.

    But accept that we do not add it to our installer, as it does not fit for us, for obvious reasons.
     
  18. alexskysilk

    alexskysilk Active Member

    Joined:
    Oct 16, 2015
    Messages:
    559
    Likes Received:
    59
    wow.

    just wow.

    It's one thing to stubbornly continue to repeat the same mantra, regardless of its technical or any other merit; it's another to loudly insist that others accept your point of view. I accept that you consider yourself an authority on storage, but unless you can convince others (more specifically, the devs of this particular software package, who have proven their qualifications by providing you a product you are using and not the other way around) of your bona fides and why they are superior, I see no reason to pay any attention to you.

    I will summarize your argument thusly: I like mdadm and so you (Proxmox devs) must support it. The devs (in a great show of patience) explained that you are free to deploy it if you must, but mdadm is not production-quality software, and therefore not within the scope of the supported product.

    Why is any further discussion even necessary?
     
  19. Rhinox

    Rhinox Active Member

    Joined:
    Sep 28, 2016
    Messages:
    272
    Likes Received:
    35
    I did not compare mdadm vs ZFS, but mdadm vs hw-raid, because your statement was: "Every part of mdadm is better than any hardware card."
    That's simply not true. Both mdadm and hw-raid (and ZFS raid) have advantages and disadvantages. Period.
     
  20. Alessandro 123

    Joined:
    May 22, 2016
    Messages:
    594
    Likes Received:
    19