Server Design for Proxmox w/ ZFS

Discussion in 'Proxmox VE: Installation and configuration' started by Steve88, Apr 12, 2019.

  1. Steve88

    Steve88 New Member

    Joined:
    Apr 12, 2019
    Messages:
    4
    Likes Received:
    0
    Hi,

    I am currently configuring a server with the goal of running Proxmox for several Windows 10 VMs and a Linux Server. The total amount of disk space should be approx. 4 TB.

    My current favorite design would be RAID-Z2 with 6 x 1 TB SAS HDDs for the VMs. Now I am struggling a bit with ZFS caching and the Proxmox installation. My initial idea was to install Proxmox on a DOM, use 1 SSD for L2ARC, and 2 SSDs in RAID 1 for the ZIL. Then I heard that it is probably a bad idea to install Proxmox on a DOM for endurance reasons. Would it be possible to install it on the mirrored SSDs in a separate partition (on top of Debian in this case), and would there be performance implications? Would a separate SSD be the better solution? Or is there any other good option I am not considering right now?
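    (For illustration, a sketch of that layout in zpool terms; the pool name and device names below are placeholders, and by-id paths should be used in practice.)

    Code:
    # hypothetical pool and device names, for illustration only
    # 6 x 1 TB SAS HDDs as a RAID-Z2 data vdev
    zpool create tank raidz2 sda sdb sdc sdd sde sdf
    # 2 SSDs as a mirrored SLOG (the separate ZIL device)
    zpool add tank log mirror sdg sdh
    # 1 SSD as L2ARC read cache
    zpool add tank cache sdi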

    Many thanks in advance!
    Cheers
     
    #1 Steve88, Apr 12, 2019
    Last edited: Apr 12, 2019
  2. Romsch

    Romsch Member
    Proxmox Subscriber

    Joined:
    Feb 14, 2019
    Messages:
    58
    Likes Received:
    2
    Hi, do you really mean 4 TB of RAM?
    What do you think about Ceph in a three-node cluster? That would be perfect, and it is a fast high-availability solution for all VMs in the cluster.
    Best regards,
    roman
     
  3. Steve88

    Steve88 New Member

    Joined:
    Apr 12, 2019
    Messages:
    4
    Likes Received:
    0
    Hi Roman, no, sorry I meant disk space (just corrected my post). The amount of available RAM should be 128 GB in total.

    Actually, I have little clue about Ceph, but since you suggested it, I will take a look at it.

    Anyway, are there any suggestions for a good ZFS design?
     
  4. Romsch

    Romsch Member
    Proxmox Subscriber

    Joined:
    Feb 14, 2019
    Messages:
    58
    Likes Received:
    2
    Hi Steve,
    it depends on the given requirements.
    If Ceph comes into question as the storage solution, there must be at least three nodes (servers).
    As of the new version 5.4 it is possible to configure Ceph completely via the web interface. Ceph has almost only advantages; there should be no RAID underneath the Ceph disks - only the Proxmox operating system disks may use RAID.
    If you "only" have one server for Proxmox, ZFS with SSDs works very fast on a local node.
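    (For reference, a rough sketch of the Ceph setup flow with the pveceph tool, assuming PVE 5.x command names; newer releases renamed some of these, e.g. "pveceph mon create". The network subnet and device name are placeholders.)

    Code:
    # on every node: install the Ceph packages
    pveceph install
    # once, on the first node: initialise with the cluster network (example subnet)
    pveceph init --network 10.10.10.0/24
    # on every node: create a monitor
    pveceph createmon
    # on every node: turn raw disks into OSDs (no hardware RAID underneath)
    pveceph createosd /dev/sdb
    # once: create a pool, then add it as RBD storage in the GUI
    pveceph createpool vm-pool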

    Best regards, roman
     
  5. Steve88

    Steve88 New Member

    Joined:
    Apr 12, 2019
    Messages:
    4
    Likes Received:
    0
    Many thanks for the information! Then I guess I will stay with ZFS as I will only have this one server.

    So for ZFS, what do you think about reserving 32 GB for the ARC and a 128 GB mirrored ZIL (SLOG), since many sync writes are expected? Proxmox would go on a separate SSD. Would an L2ARC make sense in this config?
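    (For reference, a minimal sketch of how a 32 GB ARC limit is usually set, via the zfs_arc_max module parameter; the value is in bytes.)

    Code:
    # /etc/modprobe.d/zfs.conf - cap the ARC at 32 GiB (value in bytes)
    options zfs zfs_arc_max=34359738368
    # apply to the running system without a reboot
    echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
    # when booting from ZFS, refresh the initramfs so the limit applies early
    update-initramfs -u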
     
  6. alexskysilk

    alexskysilk Active Member

    Joined:
    Oct 16, 2015
    Messages:
    556
    Likes Received:
    58
    You don't need that much ZIL; consider under-provisioning your SSDs so they at least live longer. Have a look here: https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/

    With current versions of ZFS, SLOG failure is not fatal; you may consider not bothering with the mirror, depending on your tolerances.
    Not usually, although some use cases can see a benefit depending on the amount of RAM in the system. In a nutshell, when using L2ARC, a fraction of the L2ARC size is required in RAM for its metadata, so the end result could be slower (or not meaningfully faster) than just leaving that RAM to the ARC. Search the web for benchmarks most resembling your load type, but generally I'd say don't bother.
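    (To illustrate the sizing point: a SLOG only has to buffer a few seconds of sync writes, so a small partition is plenty; pool name and device paths below are placeholders.)

    Code:
    # a small partition (e.g. ~16 GB) is enough; leave the rest of the SSD
    # unpartitioned so the controller has spare cells (under-provisioning)
    # single SLOG device - loss is non-fatal on current ZFS versions
    zpool add tank log /dev/disk/by-id/ata-SSD1-part1
    # or mirrored, if you want the extra safety margin
    zpool add tank log mirror /dev/disk/by-id/ata-SSD1-part1 /dev/disk/by-id/ata-SSD2-part1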
     
    HE_Cole likes this.
  7. Steve88

    Steve88 New Member

    Joined:
    Apr 12, 2019
    Messages:
    4
    Likes Received:
    0
    Many thanks for the link and the information! I guess I'll go for a small SSD for the ZIL (SLOG) and some additional RAM for the ARC instead of an L2ARC.

    Thank you so much for your help! :)
     
  8. LnxBil

    LnxBil Well-Known Member

    Joined:
    Feb 21, 2015
    Messages:
    3,696
    Likes Received:
    331
    Why don't you just install it on your main ZFS pool? Those few GB don't need extra hardware.
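    (For context: the Proxmox ISO installer can create that pool itself if you pick ZFS RAID-Z2 over the six disks; the root file system then lives in a few datasets on the same pool. Dataset names below are the installer's defaults, shown for illustration.)

    Code:
    zfs list -o name,mountpoint
    # rpool/ROOT/pve-1   /            <- the Proxmox OS, only a few GB
    # rpool/data         /rpool/data  <- default storage for VM disks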
     