Flashcache on Proxmox 3.x

dlasher

Anyone have this working yet and want to share? (It would be great to see flashcache/bcache as an install option... they seem to be used a lot.)
 
Hi, I'm not sure it's available in the current Red Hat kernel.

I was using this guide ( http://florianjensen.com/2013/01/02/adding-flashcache-to-proxmox-and-lvm/ ), which has it working under 2.3, on the older kernel.

Tonight I finally read the flashcache readme, and it talks about explicitly naming the kernel header path. Once I did that, magic!

Code:
make KERNEL_TREE=/usr/src/linux-headers-2.6.32-20-pve -f Makefile.dkms boot_conf

and

Code:
make KERNEL_TREE=/usr/src/linux-headers-2.6.32-20-pve install

The rest of the doc seems to work fine. I'll be testing this weekend as I get the chance.
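For completeness, here is the whole sequence as I'd sketch it on a stock 3.x node. The header package name and the repository URL are my assumptions, so adjust both to your kernel version (uname -r) and wherever Facebook currently hosts the code:

Code:
# a minimal sketch, assuming pve-headers-$(uname -r) matches the running kernel
apt-get install pve-headers-$(uname -r) build-essential dkms git
# repo URL assumed; use whatever Facebook publishes
git clone https://github.com/facebook/flashcache.git && cd flashcache
# point the build at the PVE headers explicitly, as the readme says
make KERNEL_TREE=/usr/src/linux-headers-$(uname -r) -f Makefile.dkms boot_conf
make KERNEL_TREE=/usr/src/linux-headers-$(uname -r) install
modprobe flashcache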
 
I'm thinking about installing flashcache on a cheap server to ease the load on the disks. What performance gains are you seeing, if any at all? It is supposed to help most of the time, but from what I've read about these solutions, it can even hurt performance in many scenarios.
 
I would also like to see some kind of SSD-based caching implemented in Proxmox, as dozens of VMs on a single server can create huge IO problems, and it seems the technology is ready and freely available.

Currently, there are three options:

- flashcache: GPL, developed by Facebook, optimized for the 2.6.32 kernel

- bcache: GPL, just incorporated into the mainline 3.10 kernel

- VeloBit HyperCache: http://www.velobit.com/products/HyperCache/
apparently supports every kind of Linux kernel available; has a free "starter" edition with cache size maxed at 32GB


I have also stumbled upon a performance test comparing bcache and flashcache, and it seems bcache is vastly superior:
http://www.accelcloud.com/2012/04/18/linux-flashcache-and-bcache-performance-testing/
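For anyone wondering what bcache looks like in practice on a 3.10+ kernel, a minimal sketch. The device names are placeholders, and make-bcache reformats whatever you point it at:

Code:
# placeholders: /dev/sdb = SSD cache, /dev/sdc = HDD backing device (both get wiped!)
make-bcache -C /dev/sdb              # format the SSD as a cache device
make-bcache -B /dev/sdc              # format the HDD as a backing device
echo /dev/sdb > /sys/fs/bcache/register
echo /dev/sdc > /sys/fs/bcache/register
# attach the backing device to the cache set by its UUID (listed in /sys/fs/bcache/)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# the cached volume then shows up as /dev/bcache0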

I would gladly see the thoughts of the Proxmox devs on this.
 

I would gladly see the thoughts of the Proxmox devs on this.

Looks interesting, but we always recommend a hardware RAID controller anyways.

So if you want to use SSDs as cache devices on top of this, just use a controller supporting it, e.g. from Adaptec or LSI.

So far there are no plans to officially support bcache (or flashcache) in the 3.x releases.
 
Looks interesting, but we always recommend a hardware RAID controller anyways.

So if you want to use SSDs as cache devices on top of this, just use a controller supporting it, e.g. from Adaptec or LSI.

We are already using hardware RAID controllers. The problem with using controller-supported SSD caching is that you have to throw out your otherwise perfectly capable and fast controllers just to buy into a software "feature", not to mention it's only available on the most expensive models.

We currently use Adaptec 6805E, 8 port RAID10: 250 USD
Caching feature would require Adaptec 6805Q: 900 USD
Multiply that by the number of servers...

So far there are no plans to officially support bcache (or flashcache) in the 3.x releases.

The entire premise of Proxmox - if I understand correctly - is to create a powerful bundle of freely available yet robust technologies, like container and hardware virtualization, high availability clustering, etc. If I wanted to throw money at every problem I encounter, I could just invest in a VMware or Microsoft solution running on top of big iron.

I urge you to reconsider your position, and find a place for GPL'd SSD caching in your plans. Proxmox servers badly need all the IO performance they can get.

Maybe you should start a vote on that, to ask the community how they feel?
There is probably a feature on the roadmap that is less important to your users.
 
I imagine there's a good reason why Linus put bcache into the latest 3.10 kernels ;)

Here's another vote for soft cache support without RAID controllers.
 
I imagine there's a good reason why Linus put bcache into the latest 3.10 kernels ;)

Here's another vote for soft cache support without RAID controllers.

I assume yes, there are reasons. But no one uses the 3.10 kernel on production servers yet; as soon as these new technologies are proven, they will also appear in future Proxmox VE kernels. There are no plans to have it in the current 2.6.32 (3.x series), and AFAIK it is not even available for this kernel branch anyways.
 
RHEL 7 is scheduled to ship with kernel 3.10, so maybe Proxmox 4 or 5 will have bcache support, given that Proxmox is based on whatever kernel is used in the latest stable RHEL. It might even be backported by Red Hat to kernel 2.6.32 in their RHEL 6, since new features are still being added to RHEL 6 until mid-2014.

According to this: http://www.serverwatch.com/server-news/where-is-red-hat-enterprise-linux-7.html
it seems more likely that RHEL 7 will ship with 3.11 :D
 
RHEL 7 is scheduled to ship with kernel 3.10, so maybe Proxmox 4 or 5 will have bcache support, given that Proxmox is based on whatever kernel is used in the latest stable RHEL. It might even be backported by Red Hat to kernel 2.6.32 in their RHEL 6, since new features are still being added to RHEL 6 until mid-2014.

This might have been true before, but there is a very significant player you forgot about, apart from Red Hat and Proxmox: the OpenVZ project. They are the ones who build the kernels from the RHEL sources and include all the OpenVZ patches. The Proxmox project works with what they release.

Last year there were some testing/unstable status kernels newer than 2.6.32, but now there is not even a mention of them on the project's website, so I'm not too hopeful about us seeing 3.x kernels with OpenVZ integrated.

This is all I could find about the future of OpenVZ, and it's from last summer:
http://lwn.net/Articles/507997/

I guess instead of waiting for mainline bcache to trickle down, the more efficient route to block-level SSD caching in Proxmox would be to either:
- convince VeloBit to create a loadable kernel module of HyperCache for Proxmox (they have a free and a paid version), or
- do the same with Facebook's flashcache, provided it actually proves to be beneficial, because at the moment the benchmarks do not back it up (a setup sketch follows below).
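If it does prove beneficial, the setup side is at least trivial once the module is built. A hypothetical example based on flashcache's documented command-line tool, with placeholder device and cache names:

Code:
# -p back = writeback caching; 'thru' and 'around' are the safer modes
flashcache_create -p back pve_cache /dev/sdb /dev/sda3
# the cached volume appears as /dev/mapper/pve_cache and can back an LVM PV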

I'm not sure if bcache could be ported at all to 2.6.32.

I will contact VeloBit and ask them what they think of supporting Proxmox; if their solution works, they could even get some sales out of us.
 
We are already using hardware RAID controllers. The problem with using controller-supported SSD caching is that you have to throw out your otherwise perfectly capable and fast controllers just to buy into a software "feature", not to mention it's only available on the most expensive models.

We currently use Adaptec 6805E, 8 port RAID10: 250 USD
Caching feature would require Adaptec 6805Q: 900 USD
Multiply that by the number of servers...

I can understand that you need fast IO. But the 6805E is not that fast: only 128MB cache, and there is no cache protection - I would never use such a controller.

If you want performance and, of course, data protection, you need cache protection. If you additionally want SSD caching, the latest Adaptec 7Q series supports it with standard SSDs (e.g. Intel, Samsung):
http://www.adaptec.com/en-us/products/series/7q/
 
I can understand that you need fast IO. But the 6805E is not that fast: only 128MB cache, and there is no cache protection - I would never use such a controller.

If you want performance and, of course, data protection, you need cache protection. If you additionally want SSD caching, the latest Adaptec 7Q series supports it with standard SSDs (e.g. Intel, Samsung):
http://www.adaptec.com/en-us/products/series/7q/

Before deciding on the 6805E, we tested several different controllers and even md-raid with RAID10, and the ONLY REAL performance factor seemed to be the number and type of hard drives.

- regarding controller cache size: our environment is not write-heavy, so there is no real performance difference between a 128MB and a 512MB cache controller
- regarding battery cache protection: as PVE from 2.x on is reasonably stable, and we have UPS power and correctly mounted journaling filesystems, cache protection is not something I would pay a 4x price premium for... in fact we haven't had a SINGLE unscheduled reboot since 2.3, and even before that ext4 and fsck proved reliable.

As I wrote above, paying 4x as much for a RAID controller is not justifiable to us, because the ONLY feature we really need from it is SSD caching, which is already freely available in software form. We don't care about RAID5, battery backup or any other overpriced / overhyped marketing features.

Tom, you always seem to advocate throwing more money at hardware, which stands in stark contrast to the fact that you develop Proxmox, a collection of powerful free software technologies that enable reliable operation of virtual machines on commodity hardware. I respect your opinion, but see it differently.

I have written this many times before: just as DRBD, FUSE, corosync and other technologies improve reliability without a hardware investment, ZFS, software RAID and block-level SSD caching could do the same for IO performance. I'm sorry you don't see it that way, but you guys have never regarded IO performance as important - just think about the fact that it took you three years to turn away from the clusterfuck CFQ scheduler.
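(For reference, switching a disk away from CFQ is a one-liner; sda is a placeholder:)

Code:
cat /sys/block/sda/queue/scheduler          # prints e.g.: noop deadline [cfq]
echo deadline > /sys/block/sda/queue/scheduler
# or set it for all disks at boot with the elevator=deadline kernel parameter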
 
OmniOS with a zpool simply rocks for shared storage. My cheap home-built $750 SAN kicks ass :cool:

Last IO Summary for pool vMotion, fileserver.f, 60 s: 1637603 ops, 26858.811 ops/s, (2442/4885 r/w), 650.2mb/s, 257us cpu/op, 4.3ms latency

Code:
Pool: vMotion (928G), file size 1g

Initial write     453.9 MB/s
Re-write          505.2 MB/s
Read             1346.6 MB/s
Re-read          1283.5 MB/s
Reverse read      867.1 MB/s
Stride read       857.4 MB/s
Random read       679.6 MB/s
Mixed workload    496.2 MB/s
Random write       89.5 MB/s
Fwrite            563.3 MB/s
Fread            1158.5 MB/s
 
We're getting a little off topic, but could you show us a zpool status so we can see the array?
 
Code:
zpool status vMotion
  pool: vMotion
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Tue Jun 11 03:18:39 2013
config:

        NAME        STATE     READ WRITE CKSUM
        vMotion     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1d1    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0

errors: No known data errors
 
Those performance values with a single mirror? That doesn't add up for me. What is the sync property set to, and what drives are those - SSDs?

Anyway, on topic: it would be nice to be able to easily integrate bcache, but it's completely understandable why the Proxmox team doesn't want to support it. I wouldn't push it so hard, and there's still the possibility of building RAID arrays out of SSDs to serve the most demanding workloads. With adequate redundancy, even cheap consumer-grade drives might pay off.
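(The sync property is easy to check; vMotion is the pool name from above. sync=disabled inflates write benchmarks like these because it skips synchronous writes entirely, at the cost of losing the last few seconds of data on power failure:)

Code:
zfs get sync vMotion            # 'standard' honors sync requests; 'disabled' ignores them
zfs set sync=standard vMotion   # the safe setting for production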
 
Thank you, that would explain it - but it's completely useless for production.
 
