Homelab, software RAID with SSD-cache?

Discussion in 'Proxmox VE: Installation and configuration' started by w00t, Sep 29, 2013.

  1. w00t

    w00t New Member

    Joined:
    Sep 23, 2013
    Messages:
    3
    Likes Received:
    0
    Hi,

     As the title says, I'm planning to use Proxmox in my homelab and wondering if anyone has done something like this before. I've been using Proxmox a lot, but my current setup is an ESXi box on an i5-3470 with VT-d, running passthrough to a FreeNAS VM with ZFS (mirrored + SSD cache) and exporting that back to the ESXi host via iSCSI. Since ZFS is quite RAM-demanding and I'm maxing out the Gbit switch, I would like to move away from this setup. NIC teaming and a new switch might be the solution if Proxmox can't handle it.

     Scenario: Proxmox with 2x mechanical drives and one SSD.

     Is it possible to set up local storage, software RAID1/0 with an SSD cache, to store the VMs? I'm quite a novice when it comes to Linux server administration as a whole. An answer like "just go with mdadm and flashcache and then move the VM storage" won't work magic on my side, but it is of course a perfectly acceptable answer. I'm happy to learn, but I'll probably need some help down the road.

    Please share your knowledge and start the discussion :)!
     
    #1 w00t, Sep 29, 2013
    Last edited: Sep 29, 2013
  2. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    501
    Likes Received:
    43
    Software RAID
     We used Proxmox VE over mdadm RAID10 a couple of years back. The way to a config like this is to use the Debian ISO installer (matched to your desired Proxmox version), create the RAID array and the LVM volumes over it, install the minimal Debian system, then install the Proxmox packages at the end. I would not have recommended it for production purposes, as the PVE 1.x/2.x kernels were somewhat unstable and we lost entire arrays several times to kernel panics... but I guess it's better now; the current kernel can be called stable.
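     Just to sketch the mdadm + LVM part of that recipe (not our old setup verbatim; the device names, RAID level and sizes below are only examples to adjust to your own disks):
     Code:
     # four-disk RAID10 array with LVM on top
     mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
     pvcreate /dev/md0
     vgcreate pve /dev/md0
     # one LV for the root filesystem, one for VM storage (sizes are arbitrary here)
     lvcreate -L 20G -n root pve
     lvcreate -L 200G -n data pve
     For a two-disk homelab box you would use --level=1 --raid-devices=2 instead.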

    Software SSD caching
     Regarding flashcache/bcache: I have been a big supporter of adding SSD block caching to Proxmox, but so far the OpenVZ kernels don't include the necessary patches, and the Proxmox team did not deem this important enough to allocate developer time to it. According to several sources the next RHEL7 kernel (which the OpenVZ kernels are routinely based on) will include bcache, and after it's released (probably during the next 6 months) it will eventually become the base for the Proxmox kernel, so hopefully we can have software-based SSD caching in 2014.
    http://forum.proxmox.com/threads/14023-Flashcache-on-Proxmox-3-x

    Right now, you can probably compile a Proxmox kernel with flashcache included
    http://florianjensen.com/2013/01/02/adding-flashcache-to-proxmox-and-lvm/
    http://nomaddeleeuw.nl/blog/1-ict/6-flashcache
     but its performance is much worse than bcache's, so it's probably not worth the effort (though more testing wouldn't hurt):
    http://www.accelcloud.com/2012/04/18/linux-flashcache-and-bcache-performance-testing/
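     If you do build the module, attaching the cache with the flashcache userspace tool looks roughly like this (device names are just examples; -p back means writeback mode):
     Code:
     # writeback flashcache named "cachedev": SSD partition /dev/sdc1 in front of /dev/md0
     flashcache_create -p back cachedev /dev/sdc1 /dev/md0
     # the cached device then appears as /dev/mapper/cachedev
     You would then point LVM (or the filesystem) at /dev/mapper/cachedev instead of the raw array.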

     You can also have SSD caching with ZFS, where you can put the L2ARC and ZIL on an SSD, improving performance tremendously:
    http://serverfault.com/questions/43...-have-one-large-ssd-for-both-or-two-smaller-s
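     For reference, with an existing pool that is just one command per device (pool name and partitions below are examples):
     Code:
     # dedicated ZIL (SLOG) on one SSD partition, L2ARC read cache on another
     zpool add tank log /dev/sdc1
     zpool add tank cache /dev/sdc2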
     
    #2 gkovacs, Sep 30, 2013
    Last edited: Oct 3, 2013
  3. FastLaneJB

    FastLaneJB Member

    Joined:
    Feb 3, 2012
    Messages:
    79
    Likes Received:
    6
  4. p3x-749

    p3x-749 Member

    Joined:
    Jan 19, 2010
    Messages:
    103
    Likes Received:
    0

    +1 for this option.
     As you already know your way around ZFS and tuning it for VM storage, this sounds appropriate, doesn't it?
     I'd start with the Debian Wheezy install of Proxmox and - before adding PVE - install ZFS-on-Linux and apply it, together with LVM, to the storage array as required by PVE.
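     For the original two-disk-plus-SSD scenario, the ZFS side would be something like this once ZFS-on-Linux is installed (pool and device names are only examples):
     Code:
     # mirrored pool on the two mechanical drives, SSD partition as L2ARC
     zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
     zpool add tank cache /dev/sdd1
     zfs create tank/vmstorage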
     
  5. FastLaneJB

    FastLaneJB Member

    Joined:
    Feb 3, 2012
    Messages:
    79
    Likes Received:
    6
     Isn't the issue with ZFS that, without heavy tuning, it'll gobble up all the RAM you want to use for your VMs, and that without quite a bit of RAM its performance drops heavily?
     
  6. p3x-749

    p3x-749 Member

    Joined:
    Jan 19, 2010
    Messages:
    103
    Likes Received:
    0
     AFAIK it will use the RAM as cache... all the RAM there is, but I've never seen it cause the system to swap; I've also never seen ZFS prevent another application from starting with a "not enough memory" error.
     Well, the only thing that works better with ZFS than RAM is... even more RAM :D ... but if you are not using the dedup feature, a system with 8-16 GB of RAM for ZFS will perform very well.
     What you should use with ZFS, though, is ECC memory.
     
  7. FastLaneJB

    FastLaneJB Member

    Joined:
    Feb 3, 2012
    Messages:
    79
    Likes Received:
    6
     I'm sure I saw a thread suggesting this isn't the case with Proxmox and ZFS. It puts pressure on RAM, so the guests get their balloons inflated and ZFS takes that RAM as well. So they had to tune ZFS to limit the amount of RAM it gobbles up (see the sketch below).
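     If I remember right, the tweak was capping the ARC via a module option, something along these lines (the 4 GB value is just an example):
     Code:
     # /etc/modprobe.d/zfs.conf - limit the ZFS ARC to 4 GB (value in bytes)
     options zfs zfs_arc_max=4294967296
     followed by an update-initramfs -u and a reboot.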

     Also, if I remember right, some of the disk write-back policies will no longer work. Oh, and containers have issues running off it.

     However, I agree that ZFS is great, but it's still not as simple a drop-in as flashcache. It just depends on whether they've fixed the performance in their latest release.
     
  8. gkovacs

    gkovacs Active Member

    Joined:
    Dec 22, 2008
    Messages:
    501
    Likes Received:
    43
     I am currently testing the flashcache module with the latest Proxmox kernel, and will work on creating meaningful benchmarks like this:
    http://www.accelcloud.com/2012/04/18/linux-flashcache-and-bcache-performance-testing/

     It will take some time though: I'm still in the process of building the test server and planning out the benchmark, as I will try to include tests that represent the workloads that various virtual machines create. My testing ideas (beyond the basic IOZone, Bonnie++, etc.) include MySQL and other DB benchmarks like Facebook's LinkBench / Dbench, and also running tests while vzdump is running in the background. Ideas welcome.
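     For the basic runs I'm thinking of invocations roughly like these (paths and sizes are placeholders, not final benchmark parameters):
     Code:
     # IOZone: sequential write/read plus random read/write, 4 GB file, 64k records
     iozone -s 4g -r 64k -i 0 -i 1 -i 2 -f /mnt/test/iozone.tmp
     # Bonnie++: 8 GB working set in the test directory
     bonnie++ -d /mnt/test -s 8G -u root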

    If the new version of flashcache proves useful we will start using it in production, because waiting for an RHEL7-based OpenVZ kernel is going to be the favorite pastime of 2014, quite possibly 2015 as well.
     
    #8 gkovacs, Oct 14, 2013
    Last edited: Oct 14, 2013
  9. FastLaneJB

    FastLaneJB Member

    Joined:
    Feb 3, 2012
    Messages:
    79
    Likes Received:
    6
     That's great, I look forward to seeing your performance results. I've got a server with HDDs and an SSD; however, the SSD is in use, so I'd need to migrate everything off it first.
     
  10. p3x-749

    p3x-749 Member

    Joined:
    Jan 19, 2010
    Messages:
    103
    Likes Received:
    0
    ...cannot comment on this, as I am not using PVE+ZFS in a production environment and/or with heavy load.

     You might have a good point about OpenVZ containers and ZFS not being supported.

     I also like ZFS and have found it very reliable, even in its Linux implementation.
     But I can follow your argument regarding the complexity of integrating it directly on the PVE host.
     Maybe running a ZFS storage/filer with iSCSI is another option... you could even run it in a KVM guest and pass the controller/HBA through with VT-d... I used this for quite a while with an ESXi-based build with good success.
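     On Proxmox 3.x the passthrough part would be something like this in the guest config (the PCI address is an example, and the host needs IOMMU support with intel_iommu=on enabled):
     Code:
     # /etc/pve/qemu-server/<vmid>.conf - hand the HBA at 01:00.0 to the ZFS filer VM
     hostpci0: 01:00.0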
     