Thoughts please: pull out the RAID controller and use onboard SATA for ZFS?

Discussion in 'Proxmox VE: Installation and configuration' started by GuiltyNL, Jul 13, 2018.

  1. GuiltyNL

    GuiltyNL Member
    Proxmox VE Subscriber

    Joined:
    Jun 28, 2018
    Messages:
    33
    Likes Received:
    3
    Hi guys,

    We are running Intel servers with server boards (Intel S2600WTTR) that have a RAID plugin card (Intel RMS3HC080). Currently running in HW RAID 10.

    On top of that we are currently running ZFS in RAID0 ('one disk'). However, I suspect some locking issues and I want to reinstall one of our servers to test.

    I can do two things:

    1) Convert the disks to JBOD and offer them to ZFS, but I'm not sure whether the Intel (LSI) controller passes all the disk information through to ZFS.
    2) Ditch the RAID plugin card and attach the disks directly to the onboard SATA controller, which offers more than enough ports (10, and we have 8 disks).

    The latter gives ZFS the purest view of the disks, but it also removes the off-loading capacity of the RAID plugin card.
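    One quick way to evaluate option 1, assuming smartmontools is installed, is to check whether the OS sees the real disks behind the controller. With many LSI-based cards the disks are hidden behind the RAID layer and smartctl needs the vendor-specific `-d megaraid,N` flag (the device number below is illustrative):

```shell
# Do the block devices expose real model and serial numbers to the OS?
lsblk -o NAME,MODEL,SERIAL,SIZE

# SMART data directly (works for onboard SATA or true passthrough)
smartctl -a /dev/sda

# SMART data through an LSI/MegaRAID controller; N is the
# controller-side device id, not the Linux device name
smartctl -a -d megaraid,0 /dev/sda
```

    If serials and SMART data only show up via the `-d megaraid` workaround, the controller is not giving ZFS a fully transparent view of the disks.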

    Any thoughts?
     
  2. WhiteStarEOF

    WhiteStarEOF Member

    Joined:
    Mar 6, 2012
    Messages:
    82
    Likes Received:
    8
    I've used a PERC 6 series in the past to present multiple single-disk arrays to Proxmox, and then used ZFS to bring those disks together. The performance was beyond abysmal regardless of the zpool configuration.

    I'd say go with option 2. RAID controllers have nothing to offer except headaches.
     
    GuiltyNL likes this.
  3. alexskysilk

    alexskysilk Active Member
    Proxmox VE Subscriber

    Joined:
    Oct 16, 2015
    Messages:
    425
    Likes Received:
    47
    It all depends on your needs. There is a reason that both of these options exist, and it's not because one is always better than the other.

    Why ZFS? ZFS integrates the file system and the volume manager, which means parity is available at the file level and not just at the block level. When data integrity is paramount there is no substitute. It also has integrated snapshots, native compression, and with sufficient (read: a LOT of) RAM it can deduplicate.
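    As a sketch of those features (the pool and dataset names here are made up):

```shell
# Enable lz4 compression on a dataset
zfs set compression=lz4 tank/vmdata

# Take and list snapshots
zfs snapshot tank/vmdata@before-upgrade
zfs list -t snapshot

# Roll back if something goes wrong
zfs rollback tank/vmdata@before-upgrade
```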

    Why not ZFS? ZFS is RAM HUNGRY. Without sufficient RAM it is a dog. It also suffers from unrepairable fragmentation, which means that as the pool fills up it gets slower, and the only remedy is to remove the data and rebuild the dataset.

    Why RAID? RAID is fast and easy; in cases where CoW features are not beneficial it's faster, lighter on system resources, and coupled with LVM it still provides many of the same benefits.

    Why not RAID? It does NOT provide the same level of fault detection and mitigation as ZFS does; RAID is unaware of UREs (unrecoverable read errors) and bitrot, nor is it aware of file system corruption. It also doesn't have all the other benefits of ZFS such as snapshots, send/receive, compression, etc.
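    This is exactly what a periodic scrub exercises: ZFS re-reads every block, verifies checksums, and repairs from redundancy where it can, which hardware RAID alone cannot do (pool name is illustrative):

```shell
# Verify all data against checksums; silently repairs from mirror/parity copies
zpool scrub tank

# The READ/WRITE/CKSUM columns show detected and corrected errors per device
zpool status -v tank
```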

    I have various deployments with either RAID/LVM or ZFS as best suits the application, and they all work adequately.
     
  4. GuiltyNL

    GuiltyNL Member
    Proxmox VE Subscriber

    Joined:
    Jun 28, 2018
    Messages:
    33
    Likes Received:
    3
    Thanks for pointing that out, but that was not what I asked. :D
     
  5. alexskysilk

    alexskysilk Active Member
    Proxmox VE Subscriber

    Joined:
    Oct 16, 2015
    Messages:
    425
    Likes Received:
    47
    I guess when you asked for thoughts, I misunderstood. Good luck.
     
  6. GuiltyNL

    GuiltyNL Member
    Proxmox VE Subscriber

    Joined:
    Jun 28, 2018
    Messages:
    33
    Likes Received:
    3
    Yes, thoughts on using ZFS via the RAID card with JBOD versus using ZFS without the RAID card, plugging the disks directly into the SATA controller of our server board. ;)
     