Ceph and SSD Questions

Hello, thank you for any advice in advance.

We are planning to deploy a minimum of three HP servers (DL180, 25 SFF) running Ceph storage over 10GbE, which will not be far off what is outlined here: http://pve.proxmox.com/wiki/Ceph_Server .

We plan to have one pool on SAS drives for low-performance, high-capacity VMs and one pool of SSDs for smaller, high-performance VMs (a rough sketch of how we picture that is below).
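For illustration, this is roughly how we picture the two pools being exposed to Proxmox in /etc/pve/storage.cfg (the storage IDs, pool names, and monitor addresses below are placeholders, not our real ones):

    rbd: ceph-ssd
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool vm-ssd
        content images
        username admin

    rbd: ceph-sas
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool vm-sas
        content images
        username admin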

The big question is: would it be greatly beneficial for us to install Proxmox on RAID-1 SSDs rather than RAID-1 SAS drives?
Also, should we hardware-RAID (P410i) our drives before configuring them in Ceph, or set them up as individual drives in Ceph?

We think Ceph is just what we have been looking for, as a failover SAN/NAS is far more expensive...

Rich
 
Whether Proxmox is installed on SSD or SAS doesn't matter, since all your VMs will have their storage on Ceph.
Regarding hardware RAID: I would strongly recommend using hardware RAID for your Ceph storage nodes, as this will increase performance tremendously. Remember to buy a BBU for your RAID controllers.

Another thing you should consider is buying a NAS for your backups (a 4-disk QNAP or Synology in RAID 5 with dual 1GbE is sufficient). Putting all your eggs in one storage basket is not recommended.
 
The proxmox "operating system" does not benefit from SSDs. You only speed up the boot process, but with reboots once or twice a year that won't really matter. I.e. you can use the most basic 2x 40GB IDE drives you can find to put proxmox onto it (via RAID1).

SSDs for VM storage via Ceph do increase read/write rates significantly. Do note, however, that colocating SSDs and platter disks in the same servers requires intricate knowledge of Ceph and CRUSH map manipulation (the documentation for that is not very detailed or user-friendly quite yet). A rough sketch follows.
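As a hedged sketch: on newer Ceph releases (Luminous and later) device classes handle most of this; on the releases current when this thread was written you would have to decompile and edit the CRUSH map by hand instead. The rule and pool names below are made up:

    # CRUSH rules that only choose OSDs of one device class
    ceph osd crush rule create-replicated rule-ssd default host ssd
    ceph osd crush rule create-replicated rule-hdd default host hdd

    # Pin one pool to each rule
    ceph osd pool create vm-ssd 128 128 replicated rule-ssd
    ceph osd pool create vm-sas 128 128 replicated rule-hdd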

If those RAID controllers are battery-backed, you should expose all the disks to Ceph as single-disk RAID0 devices. That is basically JBOD, but using the controller's cache. Either flat-out JBOD (no caching) or the mentioned single-disk RAID0 configuration is recommended for Ceph.
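On a P410i that could look something like this with hpacucli (the controller slot and drive bay numbers are only examples; list yours first):

    # List the physical drives behind the controller
    hpacucli ctrl slot=0 pd all show

    # Create one single-disk RAID0 logical drive per physical disk
    hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
    hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0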
 
In addition to your question about Proxmox on RAID 1, I thought I'd add some info on Ceph.

We have been using Ceph in production for about 8 weeks.

Our goal is high availability first, not high disk I/O.

We use RAID 10 + a hot spare and create one OSD per server. It is an 'anti-cephalopod' setup.

The reason I use that is that when an OSD fails, data input becomes laggy. Using RAID 10 + a hot spare makes it a lot less likely that an OSD will fail.

So for us, RAID 10 + a hot spare + a 200GB Intel DC S3700 for the journal has been working great (a sketch of the OSD creation is below).
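Assuming the RAID 10 array shows up as /dev/sdb and the DC S3700 as /dev/sdc (device names here are assumptions), creating that one OSD per server with its journal on the SSD looks roughly like this with the filestore-era Proxmox tooling:

    # One OSD on the RAID 10 array, journal on the Intel SSD
    pveceph createosd /dev/sdb -journal_dev /dev/sdc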

For more info see this ceph-users mailing list thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg11661.html
 
Thank you, we already have a 16TB QNAP with a 2x 1Gbit LAG, which we use for media/ISOs and backups. We do have an issue at the moment with our current setup where VMs slow down or even pause altogether during backups, but this may be down to us just using 4x 1TB SATA drives in RAID 10 for VM storage.
 
> We do have an issue at the moment with our current setup where VMs slow down or even pause altogether during backups, but this may be down to us just using 4x 1TB SATA drives in RAID 10 for VM storage.
This is my setup and I have no slowdown during backups to my QNAP. My VM storage is in a ZFS pool running on OmniOS, though.
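If the backup itself is saturating your VM storage, capping vzdump's read bandwidth may also help; for example in /etc/vzdump.conf (the value is in KB/s, and the number here is only an example):

    bwlimit: 40000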
 
