Using SSDs and HDDs

arduino

New Member
Feb 10, 2014
Our company recently left ESXi. It doesn't support many of the things I need.

I'm running a Supermicro X9DR3-F with 2x Xeon 2620s, 4x 1TB 2.5" HDDs (non-RAID) and 64 GB RAM. It works fine with Proxmox; the only issue I am having now is disk I/O.

I cannot seem to get my onboard RAID working. I am aware Proxmox does not support onboard RAID, but I did read somewhere that the Intel C606 chipset was going to be supported (maybe it already is?).

I just ordered an Adaptec 2405 card and 4x 120GB SSDs. The SSDs were not bought for Proxmox, but it turned out we have no need for them, and now I've got them sitting here.

My question is: how can I use these with Proxmox alongside the HDDs?

I know I can just use LVM, or RAID, etc. I was curious if there is a sort of "hybrid drive" setup I can accomplish with these.


I did set up the drives and put 20 Windows 7 machines on them. They run, but I am only getting 160 MB/s vs. the 450 MB/s they're rated for. I suspect this is either due to LVM (which I was using in this case) or the SAS controller causing me issues.
 
Hi,
LVM doesn't add much overhead. Are you using the SSDs directly, or as disks in a RAID volume? How do you measure?

BTW, the 450 MB/s is the manufacturer's top value (reached only under ideal conditions, e.g. continuous block reads).

Regarding SSDs and HDDs: I use different volumes on the RAID controller, e.g. 4x SAS as RAID-10 and 2x SSD as RAID-1, used as LVM volumes (in my case with DRBD between the RAID volume and LVM).
That way I can choose between SAS speed and SSD speed (e.g. SSDs for the DB); a rough sketch of such a layout is below.
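
As a rough sketch of that layout (the device names /dev/sdb for the HDD RAID-10 volume and /dev/sdc for the SSD RAID-1 volume are only placeholders for whatever your controller presents):

# one physical volume per RAID array
pvcreate /dev/sdb
pvcreate /dev/sdc

# separate volume groups, so each VM disk can be placed on HDD or SSD storage
vgcreate vg_sas /dev/sdb
vgcreate vg_ssd /dev/sdc

Then add both volume groups as LVM storage in Proxmox (via the GUI or /etc/pve/storage.cfg) and choose per VM disk which one to use.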

Udo
 
I have a few of those same Supermicro boards; they work well.

We have some set up with an Areca 1880 and 12x 7200 RPM disks, and others set up with an Areca 1882 and 22 SSDs.

In both cases we are using DRBD and LVM, just like Udo described.

If you are benchmarking inside the VM, there are some limitations in KVM that will limit random IO. KVM has a single IO thread, which prevents a single VM from utilizing all of the available IO. While one VM may be limited, multiple VMs can perform IO simultaneously, and their total IO can reach the native speed.

When benchmarking sequential IO I am able to saturate the PCIe bus to my RAID card from within the VM; a quick way to test this yourself is sketched below.
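
For reference, a simple way to measure this from inside a guest with fio (the file name, size and block sizes here are only examples; adjust them to your setup):

# sequential read throughput, 1 MiB blocks, direct IO (bypasses the page cache)
fio --name=seqread --filename=/root/fio-test --rw=read --bs=1M --size=4G --direct=1

# random 4k reads with a deeper queue, to see the effect of the single KVM IO thread
fio --name=randread --filename=/root/fio-test --rw=randread --bs=4k --size=4G --direct=1 --ioengine=libaio --iodepth=32

Run the same test in one VM and then in several VMs at once to see the difference between per-VM and aggregate IO.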

If IO performance is important, use a RAID card with a BBU (battery backup unit).
 
I know I can just use LVM, or RAID, etc. I was curious if there is a sort of "hybrid drive" setup I can accomplish with these.

If by "hybrid drive" you mean caching hot data on the SSD from the bigger HDD based array, and doing all of this in software, then there are currently two options for you, but both require extensive manual setup.
Later this year if we finally get a more recent kernel (3.1x hopefully) we will have several built-in SSD caching modules to choose from (namely EnhanceIO and bcache), and hopefully the Proxmox devs will integrate using these into setup.

1. ZFS
ZFS supports using SSDs as L2ARC (second-level read cache) and ZIL (write log / cache) devices, accelerating performance considerably. A rough command sketch follows after the links below.

Explanation of ZIL and L2ARC:
https://blogs.oracle.com/brendan/entry/test

Some installation help:
https://github.com/zfsonlinux/pkg-z...ian-GNU-Linux-to-a-Native-ZFS-Root-Filesystem
https://forum.proxmox.com/threads/17737-ZFS-based-small-scale-configuration
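
As a very rough sketch of such a pool (pool and device names are made up: the four HDDs as sdb-sde, and three of the SSDs as sdf, sdg and sdh):

# striped mirrors (RAID-10 style) across the four HDDs
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# one SSD as L2ARC (read cache), a mirrored SSD pair as log device (ZIL/SLOG)
zpool add tank cache /dev/sdf
zpool add tank log mirror /dev/sdg /dev/sdh

Cache and log devices can also be added to an existing pool later, so you can benchmark with and without them.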

2. Flashcache
Flashcache is SSD caching software developed by Facebook, and its recent versions have decent performance. The biggest problem with it, however, is that it is difficult to filter out vzdump's random reads during OpenVZ container backups, so you will likely end up overwriting most of your read cache on a daily basis (or however often you back up). This is of course no problem for KVM guests, as their large sequential reads can easily be excluded from the cache. A rough command sketch follows after the links below.

Flashcache's github:
https://github.com/facebook/flashcache/

Some installation info:
http://florianjensen.com/2013/01/02/adding-flashcache-to-proxmox-and-lvm/
http://forum.proxmox.com/threads/11388-VZDump-with-flashcache-on-top-of-LVM
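
As a rough illustration (device names are placeholders, and the exact options depend on the flashcache version you build; check its documentation):

# create a write-back cache device named "cachedev": SSD in front of an HDD-backed LVM volume
flashcache_create -p back cachedev /dev/sdf /dev/mapper/pve-data

# the cached device then appears as /dev/mapper/cachedev and is used in place of the original volume

The Florian Jensen article linked above walks through a similar setup on top of Proxmox's LVM layout.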

If you are able to experiment with any of these, please share your experiences and benchmarks!
 