Homelab, software RAID with SSD-cache?

w00t

New Member
Sep 23, 2013
Hi,

As the title says, I'm planning to use Proxmox in my homelab and wondering if anyone has done something like this before. I've been using Proxmox a lot, but my current setup is an ESXi box on an i5-3470 with VT-d, passing a controller through to a FreeNAS VM with ZFS (mirrored + SSD cache) and serving storage back to the ESXi host via iSCSI. Since ZFS is quite RAM-demanding and I'm maxing out the Gbit switch, I would like to move away from this setup. NIC teaming and a new switch might be the solution if Proxmox can't handle it.

Scenario: Proxmox with two mechanical drives and one SSD.

Is it possible to set up local storage as software RAID 1/0 with an SSD cache to store the VMs? I'm quite a novice when it comes to Linux server administration as a whole. An answer like "just go with mdadm and flashcache and then move the VM storage" won't work magic on my side, but is of course a perfectly acceptable answer. I'm happy to learn, but will probably need some help down the road.

Please share your knowledge and start the discussion :)!
 

Software RAID
We were using Proxmox VE over mdadm RAID10 a couple of years back. The way to a config like this is to use the Debian ISO installer (matched to your desired Proxmox version), create the RAID array and LVM volumes over it, install the Debian minimal system, then install the Proxmox packages at the end. I would not have recommended it for production purposes, as the PVE 1.x/2.x kernels were kind of unstable and we lost entire arrays several times due to kernel panics... but I guess it's better now; the current kernel can be called stable.
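
If it helps, the array/LVM steps look roughly like this. This is only a sketch: /dev/sda1 and /dev/sdb1 are placeholder names for partitions on the two mechanical drives, and with only two disks you would build RAID1 rather than RAID10. The volume group and LV names just mimic what the stock PVE installer creates.

    # create a RAID1 array from the two drives (RAID10 would need four devices)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # put LVM on top of the array, as the PVE packages expect LVM volumes
    pvcreate /dev/md0
    vgcreate pve /dev/md0
    lvcreate -L 20G -n root pve     # Debian root filesystem
    lvcreate -L 200G -n data pve    # VM storage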

Software SSD caching
Regarding flashcache/bcache: I have been a big supporter of adding SSD block caching to Proxmox, but so far the OpenVZ kernels don't include the necessary patches, and the Proxmox team did not deem this important enough to allocate developer time to it. According to several sources, the next RHEL7 kernel (which the OpenVZ kernels are routinely based on) will include bcache, and after it's released (probably during the next 6 months) it will eventually become the base for the Proxmox kernel, so hopefully we can have software-based SSD caching in 2014.
http://forum.proxmox.com/threads/14023-Flashcache-on-Proxmox-3-x

Right now, you can probably compile a Proxmox kernel with flashcache included:
http://florianjensen.com/2013/01/02/adding-flashcache-to-proxmox-and-lvm/
http://nomaddeleeuw.nl/blog/1-ict/6-flashcache
but its performance is much worse than bcache's, so it's probably not worth the effort (though more testing wouldn't hurt):
http://www.accelcloud.com/2012/04/18/linux-flashcache-and-bcache-performance-testing/
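
For what it's worth, once the module and the flashcache utilities are built, attaching a cache is a single command. A sketch, with /dev/sdc1 standing in for the SSD partition and /dev/md0 for the backing array:

    # create a write-back ("back") cache device named "cachedev"
    flashcache_create -p back cachedev /dev/sdc1 /dev/md0
    # the cached volume then appears as /dev/mapper/cachedev and can be
    # used like any block device, e.g. as an LVM physical volume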

You can also have SSD caching with ZFS, where you can put the L2ARC and ZIL on an SSD, improving performance tremendously:
http://serverfault.com/questions/43...-have-one-large-ssd-for-both-or-two-smaller-s
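
For reference, adding both to an existing pool is a sketch like this, assuming a pool named "tank" and two partitions on the SSD (a small one for the ZIL, a larger one for the L2ARC):

    zpool add tank log /dev/sdc1     # dedicated intent log (ZIL / SLOG)
    zpool add tank cache /dev/sdc2   # L2ARC read cache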
 


+1 for this option.
As you already know your way around ZFS and tuning it for VM storage, this sounds appropriate, doesn't it?
I'd start with the Debian Wheezy install of Proxmox and, before adding PVE, install ZFS-on-Linux and lay out the storage array with LVM as required by PVE.
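
A rough sketch of that pool setup, matching the two-disk scenario above (device and pool names are placeholders):

    # mirrored pool over the two mechanical drives
    zpool create tank mirror /dev/sda /dev/sdb
    # a dataset for the VM images
    zfs create tank/vmstorage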
 

Isn't the issue with ZFS that, without heavy tuning, it'll gobble up all the RAM you want to use for your VMs, and that without quite a bit of RAM performance drops heavily?
 

AFAIK it will use the RAM as cache... all the RAM there is, but I've never seen it cause the system to swap. I've also never seen ZFS prevent another application from starting with a "not enough memory" issue.
Well, with ZFS the only thing that works better than RAM is... even more RAM :D ...but if you are not using the dedup feature, a system with 8-16 GB of RAM for ZFS will perform very well.
What you should use with ZFS is ECC memory.
 

I'm sure I saw a thread suggesting this isn't the case with Proxmox and ZFS: it puts pressure on RAM, so the guests get their balloons inflated and ZFS takes that RAM as well. So they had to tweak ZFS to limit the amount of RAM it gobbles up.
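
For reference, the usual tweak on ZFS-on-Linux is capping the ARC with a module parameter; a sketch, capping it at 4 GB:

    # /etc/modprobe.d/zfs.conf -- value is in bytes, takes effect on module load
    options zfs zfs_arc_max=4294967296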

Also, if I remember right, some of the disk write-back policies will no longer work. Oh, and containers have issues running off it.

However, I agree ZFS is great, but it's still not as simple a drop-in as Flashcache. It just depends whether they've fixed the performance in their latest release.
 
A new version of Flashcache came out a few days ago. Apparently it gives Facebook a nice performance bump, so maybe it helps when added to Proxmox?

https://m.facebook.com/notes/facebo...om-2010-to-2013-and-beyond/10151725297413920/

It seems to be the only option currently, until a newer kernel with bcache is available.

I am currently testing the flashcache module with the latest Proxmox kernel, and will work on creating meaningful benchmarks like this:
http://www.accelcloud.com/2012/04/18/linux-flashcache-and-bcache-performance-testing/

It will take some time though; I'm still in the process of building the test server and planning out the benchmarks, as I will try to include tests that represent the workloads various virtual machines create. My testing ideas (beyond the basic IOzone, Bonnie++, etc.) include MySQL and other DB benchmarks like Facebook's LinkBench / Dbench, and also running tests while vzdump runs in the background. Ideas welcome.
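
As one baseline idea, an IOzone run sized well past the SSD cache, so that both cache hits and misses show up in the numbers (a sketch; the test path is a placeholder):

    # -a: automatic mode, -g: max file size, -i 0/1/2: write, read, random I/O
    iozone -a -g 16g -i 0 -i 1 -i 2 -f /mnt/cachedev/testfile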

If the new version of flashcache proves useful, we will start using it in production, because waiting for an RHEL7-based OpenVZ kernel is going to be the favorite pastime of 2014, and quite possibly 2015 as well.
 
That's great, I look forward to seeing your performance results. I've got a server with an HDD and an SSD, but the SSD is in use, so I'd need to migrate everything off it first.
 

I cannot comment on the RAM/ballooning issue, as I am not using PVE+ZFS in a production environment or under heavy load.

You might have a good point about VZ containers and ZFS not being supported.

I also like ZFS and have found it very reliable, even in its Linux implementation.
But I can follow your arguments regarding the complexity of integrating it directly on the PVE host.
Maybe running a ZFS storage/filer with iSCSI is another option... you could even run it in a KVM guest and pass the controller/HBA through with VT-d. I used this for quite a while with an ESXi-based build, with good success.
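
On the PVE side, passing an HBA into a KVM guest is a one-liner once the IOMMU is enabled; a sketch, with VM ID 100 and PCI address 01:00.0 as placeholders:

    # pass the HBA at 01:00.0 through to guest 100
    qm set 100 -hostpci0 01:00.0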
 
