SSD configuration for new Proxmox install?

Discussion in 'Proxmox VE: Installation and configuration' started by rmflint, May 12, 2019.

  1. rmflint

    rmflint New Member

    Joined:
    Apr 12, 2018
    Messages:
    19
    Likes Received:
    0
    I am trying to plan my hardware configuration and zfs setup on a Dell R620 which has only 8 sas drive bays. The main goal is to maximize bulk storage while still maintaining an efficient VM server. I have plenty of ram and plan to max out as my budget permits. This is a home lab environment and intended to be a learning environment for zfs and virtualization.

    My initial design would be the following:

    - Host os installed on small Intel SSD (50 GB) with SATA 2 interface
    - VM storage on separate intel 800 GB PCIe 3 SSD
    - bulk data storage on 8 sas HDDs (probably raid 10)

    I am hoping for advice on how best to utilize my SSDs and HDDs.

    For example: host SSD in RAID 0 (cloned to a backup drive for recovery), VM SSD in RAID 0 backed up via normal ZFS snapshots, etc.

    Note: I will also be building a separate FreeNAS backup server.
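    For what it's worth, the snapshot-based backup to a separate FreeNAS box usually looks something like this. This is only a sketch; the pool name `vmpool`, the dataset name, and the host `backupnas` are placeholders, not anything from your setup:

    ```shell
    # Take a point-in-time snapshot of a VM's dataset
    # ('vmpool/vm-100-disk-0' and 'backupnas' are hypothetical names)
    zfs snapshot vmpool/vm-100-disk-0@nightly-2019-05-12

    # Stream the snapshot to the FreeNAS backup server over SSH
    # (-u on the receiving side leaves the dataset unmounted)
    zfs send vmpool/vm-100-disk-0@nightly-2019-05-12 | \
        ssh root@backupnas zfs receive -u tank/backups/vm-100-disk-0
    ```

    Subsequent backups can use incremental sends (`zfs send -i old@snap new@snap`) so only changed blocks cross the wire.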
     
  2. 6uellerbpanda

    6uellerbpanda Member
    Proxmox Subscriber

    Joined:
    Sep 15, 2015
    Messages:
    46
    Likes Received:
    5
    It basically depends on your workload.

    For your VM storage the workload will probably be random I/O, so mirrored vdevs.

    For your bulk storage:
    if it's sequential I/O, RAIDZ1. If it's a mix of sequential and random I/O, choose whichever layout fits your expected workload.
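    For reference, the two layouts described above would be created roughly like this. Device names (`sda` through `sdh`) are placeholders; in practice you'd use stable `/dev/disk/by-id/` paths:

    ```shell
    # VM storage: striped mirrors (RAID10-like), good for random I/O
    # ashift=12 assumes 4K-sector drives
    zpool create -o ashift=12 vmpool mirror sda sdb mirror sdc sdd

    # Bulk storage: single RAIDZ1 vdev, good for mostly sequential I/O
    zpool create -o ashift=12 bulkpool raidz1 sde sdf sdg sdh
    ```

    More mirror pairs in the stripe means more IOPS; RAIDZ1 trades IOPS for usable capacity.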
     
  3. rmflint

    rmflint New Member

    Joined:
    Apr 12, 2018
    Messages:
    19
    Likes Received:
    0
    The main workload will be for streaming video (probably Plex server). I would also like to experiment with various flavors of Linux, Windows and different server types. I would also like to install MySQL since I am a database developer.

    I really don’t want to purchase a bigger and slower SSD to mirror the 800 GB PCIe SSD. I am limited on drive options which is why I was planning to install the 50 GB SATA SSD into the optical drive bay for only the Proxmox host OS.

    If I install a second 800 GB SSD into one of the eight hot-swap bays, I would lose one of the disks for bulk storage. I don't have any extra PCIe slots available for an additional PCIe SSD. Another reason not to do this: wouldn't the mirror be limited to the interface speed of the slower path if the mirrored SSD is going through the backplane and the older HBA (H710 in IT mode)?

    Is RAID 0 a bad choice for both the host and VM SSDs? I figured using ZFS on both would be a better option than ext4 in order to benefit from ZFS features.

    I don’t see restoring the VMs as too painful if the disk dies since I will be backing it up. If the host os SSD dies I figured I would just swap it out with the clone. This is a home lab environment so restoring failed disks should be a good learning experience!

    Is Proxmox and separate VMs on individual raid 0 disks a bad idea?
     
    #3 rmflint, May 13, 2019
    Last edited: May 13, 2019
  4. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,804
    Likes Received:
    348
    The painful part is not having a working machine until the drive is replaced (hardware and software wise). With RAID1, you can work as if nothing happened, with RAID0 you have nothing.
     
  5. rmflint

    rmflint New Member

    Joined:
    Apr 12, 2018
    Messages:
    19
    Likes Received:
    0
    Would 2 Dell 500 GB Constellation SAS drives, mirrored, be sufficient for the OS and VM storage?

    This would leave me with 6 drives for bulk storage.

    Any suggestions on what to do with the single SATA 2 bay and my 80 GB Intel DC S3500 SSD? Maybe an L2ARC or SLOG drive?
     
  6. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,804
    Likes Received:
    348
    What is 'bulk storage' if it is not used for VMs (mirrored SAS)?

    slog could work according to this list: http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/
     
  7. rmflint

    rmflint New Member

    Joined:
    Apr 12, 2018
    Messages:
    19
    Likes Received:
    0
    I figured it would be better to store my OS and VM-specific data on a two-way mirror of smaller SAS drives so that rebuilds, restores and overall performance of the array would be faster. Eventually I would like to replace the SAS hard drives with SSDs.

    I have 6 x 6TB SATA hard drives for the remaining bays which I was referring to as bulk storage for large video files.

    After doing more research, I plan to use the single SATA 2 bay and the 80 GB SSD as a SLOG device for the 6 x 6 TB pool.
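    Attaching an SSD as a SLOG to an existing pool is a one-liner. A sketch, assuming the bulk pool is named `bulkpool` and the S3500 shows up as the placeholder device `sdX`:

    ```shell
    # Add the 80 GB SSD as a separate log (SLOG) device
    # ('bulkpool' and 'sdX' are hypothetical names; use /dev/disk/by-id)
    zpool add bulkpool log sdX

    # Verify the log vdev appears in the pool layout
    zpool status bulkpool
    ```

    One caveat: a SLOG only accelerates synchronous writes (NFS, databases, some VM workloads); an asynchronous streaming workload like Plex may see little benefit from it.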

    Does running the os and vms on a 2x500GB hdd mirror make sense?

    Would it make a significant difference if I just bit the bullet and purchased 2 s3700 SATA SSDs for the os and vms?
     
  8. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,804
    Likes Received:
    348
    Makes sense with respect to what? It's still not enough information. Let's look at an example for a few video users:
    RAIDed (>0) storage for video files only makes sense if you want to deliver video files faster than the speed of a single disk. A single disk saturates a 1 GbE connection, so unless you're planning on 10 GbE, it does not make sense to use a RAIDed disk setup for delivering video files. VM data, on the other hand, is read and written randomly, and that performance greatly improves with faster disks or just more spindles.

    Therefore, back to a suggestion:
    If this were my system with these limitations: I would suggest running all 8 disks as one pool, with OS, VMs and bulk data on the same storage, to maximise performance in a system where you have a limited and small number of disks. You can then also use the SSD as a SLOG if you like.

    For the actual ZFS layout, I suggest a RAID10-like setup with striped mirrors for best performance, or a RAID50-like setup with striped RAIDZ1 vdevs (or RAIDZ2 if you need it).
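    For completeness, here is roughly what those two 8-disk layouts would look like at creation time. Pool and device names are placeholders; real deployments should use `/dev/disk/by-id/` paths:

    ```shell
    # RAID10-like: four 2-way mirrors striped together
    # (best random I/O, 50% usable capacity, any one disk per mirror may fail)
    zpool create -o ashift=12 tank \
        mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh

    # RAID50-like alternative: two striped 4-disk RAIDZ1 vdevs
    # (more usable capacity, fewer IOPS, one disk per vdev may fail)
    # zpool create -o ashift=12 tank \
    #     raidz1 sda sdb sdc sdd raidz1 sde sdf sdg sdh
    ```

    With either layout you can then carve out datasets for the OS, VMs and bulk video storage from the same pool.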
     