First Time CEPH User!

wahmed

Hello all,
After some tests and trials I finally have a working CEPH block device cluster as shared storage for Proxmox. Two words: very impressive!
I still have some hardcore testing to do to see how it performs, but I have one question: can CEPH RBD also store backups and ISOs, or can it only be used as a block device, like iSCSI/LVM?
 
Hi, I'm going to test it too, probably today, with 3 VMs running Ubuntu Server 12.04 64-bit, just to understand how it works and how it is managed.
May I ask what hardware you are using, what performance you are getting, what kind of VMs you plan to use or have tested, and whether you hit any problems not explained in the wiki that you had to solve? (BTW, in that case you could edit the wiki ;P)
Thanks a lot!
 

I think I cheated a little with my setup. :) I used my existing Proxmox cluster to set up the CEPH monitors and MDS. The Proxmox cluster currently uses OmniOS+Napp-IT shared storage. I wanted a setup where I could see whether the server hosts are online or not.
My Proxmox cluster looks like this:
Node 1 = Intel i7-3770, 32GB RAM = Serves all existing 14 VMs
Node 2 = AMD Athlon X2, 8GB RAM = Serves CEPH admin VM (Ubuntu)
Node 3 = AMD Athlon X2, 8GB RAM = Serves CEPH Monitor 1 VM (Ubuntu)
Node 4 = AMD Athlon X2, 8GB RAM = Serves CEPH Monitor 2 VM (Ubuntu)
Node 5 = AMD Athlon X2, 8GB RAM = Serves CEPH Monitor 3 VM (Ubuntu)
Node 6 = AMD Athlon X2, 8GB RAM = Serves CEPH MDS VM (Ubuntu)
Node 7 = Intel i5-3570, 28GB RAM, 4x3TB HDD = Serves CEPH OSDs
Node 8 = AMD FX-4100, 28GB RAM, 5x3TB HDD = Serves CEPH OSDs

On nodes 7 and 8 I installed CEPH directly on the Proxmox Debian OS. This setup lets me use my existing cluster without buying another set of hardware, and it also lets me see which CEPH nodes might be offline. I have not tested performance yet; I am in the process of setting up some test VMs to see how it goes. If all the tests succeed, I will move the CEPH monitors from VMs onto actual Proxmox nodes and use the shared OmniOS storage for backups only.
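For anyone curious how a layout like this translates into configuration: a minimal old-style ceph.conf for this kind of cluster might look roughly like the sketch below. The hostnames and addresses are made up for illustration, and the exact options vary by CEPH version; only the section structure (monitors, MDS, OSDs) reflects my layout.
Code:
[global]
        auth supported = cephx

[mon.a]
        host = ceph-mon1
        mon addr = 192.168.1.11:6789

[mon.b]
        host = ceph-mon2
        mon addr = 192.168.1.12:6789

[mon.c]
        host = ceph-mon3
        mon addr = 192.168.1.13:6789

[mds.a]
        host = ceph-mds1

[osd.0]
        host = ceph-osd1

[osd.1]
        host = ceph-osd2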

I hope all this makes sense. If there is enough interest I will keep everybody updated, and maybe draw up a network diagram to show how it all came together and how it progresses.
 
My CEPH cluster is finally up and running, and all VMs have been migrated to CEPH from OmniOS+Napp-IT. I did our very first backup last night, and I must say I am impressed. I have been struggling with backup speed for the longest time, but last night the backup ran faster than ever before, using the full gigabit network. Since I cannot store backups on CEPH, my primary backup destination for now is the former OmniOS shared NFS storage.

CEPH is definitely not for the average home user, since more than one physical machine needs to be dedicated to the cluster, but for an organization it is as good as it gets. To test the resiliency of the CEPH cluster I yanked the power cord out of node 8 (see my earlier post for node assignments), and all my VMs were still running! I noticed a tiny bit of slowdown, but that is a trade-off I can easily live with. NFS storage solutions like FreeNAS and OmniOS+Napp-IT are good, but they do not provide any node redundancy out of the box.
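For anyone wanting to repeat that power-cord test: a few standard CEPH commands, run on any monitor or admin node, show how the cluster reacts while a node is down (exact output varies by version).
Code:
ceph health     # HEALTH_OK when all is well, HEALTH_WARN while degraded
ceph osd tree   # shows which OSDs/hosts are marked up or down
ceph -s         # one-shot cluster status (monitors, OSDs, PG states)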
I should add that setting up CEPH requires a great deal of studying the documentation. I made dozens of mistakes and have redone the CEPH cluster many times so far because of them, but I learned a lot. To learn the system I started out with virtual machines, so I did not have to spend money on dedicated hardware during the learning process. If anybody wants to learn CEPH, I recommend starting with a bunch of virtual machines and setting up a virtual CEPH cluster. Obviously I used my existing Proxmox VM cluster for that, and testing was 100% successful in the virtual environment.
 
Just wanted to update the community on the latest happenings in my CEPH experience.

CEPH continues to deliver excellent backups to the OmniOS NFS shared storage. I only notice a slowdown when a VM is writing data to the CEPH storage. My latest attempt at yanking the power cord from one of the CEPH storage hosts still left the system unbroken. :)
I would like to eliminate the OmniOS NFS shared storage, but only if I can figure out how to store backups on CEPH. Any ideas?
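One idea I have been reading about: RBD itself is a block device, so vzdump cannot write backup files to it directly, but CEPH FS is file-based. Something like the sketch below might work, assuming a running MDS; the monitor address, mount point, and key are placeholders.
Code:
# monitor address, mount point and admin key are placeholders
mount -t ceph 192.168.1.11:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>
# then add /mnt/cephfs as a "Directory" storage in Proxmox
# and enable the Backup content type on it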

Should I snapshot all the VMs on a daily basis and then just do full backups to the OmniOS NFS storage once a month?
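For the snapshot part, RBD supports named snapshots natively. From the documentation it would be something like this, where the pool and image names are just examples (Proxmox names its RBD images like vm-<vmid>-disk-<n>):
Code:
rbd snap create rbd/vm-100-disk-1@daily-1     # take a snapshot of the image
rbd snap ls rbd/vm-100-disk-1                 # list existing snapshots
rbd snap rollback rbd/vm-100-disk-1@daily-1   # roll the image back if needed

Keep in mind a snapshot of a running VM is only crash-consistent, so it is not a full replacement for proper backups.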

How can I test the actual R/W speed CEPH is providing to the VMs? I have heard of Bonnie++ but never tried it. Since RBD is block-based storage, how can I get real R/W numbers?
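From what I gather, there are two levels to measure: raw cluster throughput with rados bench (run on a CEPH node; the pool name "rbd" below is an assumption), and the speed a guest actually sees, measured inside a VM.
Code:
# cluster-level throughput
rados bench -p rbd 30 write --no-cleanup   # 30-second write test
rados bench -p rbd 30 seq                  # sequential read test on the same objects
rados -p rbd cleanup                       # remove the benchmark objects afterwards

# guest-level: inside a VM whose disk lives on RBD
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct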

I have installed htop on both CEPH OSD hosts to monitor how much CPU and memory they are using, so I do not max them out. It is a very good program for quickly getting a graphical view of CPU and RAM usage.
 
Code:
ceph -w

On a CEPH node this shows the write and read speed (and a lot of other things).

:) I have used that command numerous times but completely forgot that it also shows read/write speed. Thanks! Could you tell me what the XXXop/s figure is? The higher the activity, the higher the op/s?
 
Update
======
It has now been 22 days according to the CEPH cluster uptime. Even through all the Proxmox 3.1 upgrades on the nodes, with mass VM migrations back and forth between nodes for a successful Proxmox upgrade, CEPH keeps working without issue. A few days ago the CEPH cluster nodes also went through a major release upgrade while the cluster was online; the CEPH nodes did not need restarts, so no Proxmox nodes had to be taken offline.

I have mostly stopped using CEPH FS and am sticking purely with RBD, since CEPH FS performance is just not up to par yet. So I still keep a small FreeNAS NFS storage for all ISOs and weekly backups. There is nothing to complain about in the speed of Proxmox VM backups.

Overall I am very happy with the Proxmox+CEPH combination. They play together very nicely.
 
