Proxmox VE Ceph Server released (beta)

I must be missing something basic. I created 3 Proxmox virtual machines under VMware (for testing). The cluster looks good, and the Ceph status tab on any given host says HEALTH_OK. Each host has an OSD showing, for a total of 3x the space of each disk. It all claims to be okay (and I have created an rbd pool too, with no errors). Yet there is no mount point showing (apart from the /var/lib/ceph/osd/ceph-X xfs mount points). When I create the RBD storage in the datacenter, it gives no option to set any mount point (so I assumed it is implicit?).

Anyway, when it's all said and done, if I try to create a VM, when I get to the storage tab, the box is greyed out because Proxmox thinks there is nowhere to put the data. The Ceph howto on the wiki was pretty scanty, but I'm pretty sure I followed all the instructions. What am I missing? Thanks...

I noticed also that if I go to the Status tab under Ceph for different nodes, it initially shows 'Quorum: no' and eventually changes to okay status. Is this just an artifact of the UI? Or an indication something is wrong?
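For anyone else debugging this: the quorum and health state the GUI shows can also be checked directly from the CLI on any node. A quick sketch using standard Ceph commands:

Code:
ceph -s                # overall health, monitor and OSD summary
ceph quorum_status     # which monitors are currently in quorum
ceph osd tree          # confirm all OSDs are up and in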
 
I think I might have seen a hint in a different thread. Since I am testing these virtualized, there is no KVM support (I can set the VMware servers to support a nested hypervisor but hadn't done so...). That thread implied that KVM uses the rbd storage directly, so if there is no KVM, would that explain the lack of storage? Another data point: a different thread had suggested looking at the output of 'pvesm status' to diagnose missing storage. When I try that, it hangs...
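One thing worth checking when 'pvesm status' hangs on an RBD storage is the keyring: the Proxmox RBD storage plugin expects a copy of the Ceph admin keyring under /etc/pve/priv/ceph/, named after the storage ID. A sketch, assuming a hypothetical storage ID of 'my-rbd':

Code:
# the RBD plugin looks for /etc/pve/priv/ceph/<storage-id>.keyring
mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-rbd.keyring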
 
Deafening silence. Off to look for other solutions, I guess...
Hi,
with Ceph storage there isn't a mount point (the RBD is used by KVM directly). CephFS isn't stable yet (and AFAIK not 100% POSIX compatible).
It's also not clear to me what you want to test with a virtualized hypervisor and virtualized storage for virtualization. If that doesn't work smoothly, what findings will you get?

For a test it makes much more sense to take three desktop boxes with some SATA HDDs.

Udo
 
I don't have 3 spare boxes. I'm not interested in performance, just learning how this works, and virtualization should work for that. Regardless, I understand now there is no mount point, but this doesn't explain why, when I try creating a VM, no storage is presented to use.
 
I don't have 3 spare boxes. I'm not interested in performance, just learning how this works, and virtualization should work for that. Regardless, I understand now there is no mount point, but this doesn't explain why, when I try creating a VM, no storage is presented to use.

You need to add the storage in Datacenter -> Storage -> Add -> RBD. Set the pool field to the name you specified for the Ceph pool.
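For reference, that GUI step ends up as an entry in /etc/pve/storage.cfg that looks roughly like this (storage ID, pool name, and monitor addresses below are placeholders):

Code:
rbd: my-rbd
        monhost 192.168.0.1;192.168.0.2;192.168.0.3
        pool rbd
        content images
        username admin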
 
Maybe I was not clear. I did all of that. Per the HOWTO. Yet when I create a VM, and am moving through the wizard, when I get to the storage part, there is nothing to choose from.
 
One possible reason for that would be that the Ceph cluster does not have quorum. You mentioned it reporting HEALTH_OK earlier, but maybe that changed?
 
It seems that doing a full clone is quite intense and you lose sparseness on images. Any reason for not using rbd flatten instead of qemu-img convert? I.e., first clone the image and then flatten it.

Cheers,
Josef
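For context, the clone-then-flatten sequence suggested above would look roughly like this with the standard rbd CLI (pool and image names are made up):

Code:
# snapshot the source image and protect the snapshot (required before cloning)
rbd snap create rbd/base-disk@clonesnap
rbd snap protect rbd/base-disk@clonesnap
# create a copy-on-write clone, then flatten it into a standalone image
rbd clone rbd/base-disk@clonesnap rbd/full-copy
rbd flatten rbd/full-copy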
 
One possible reason for that would be that the Ceph cluster does not have quorum. You mentioned it reporting HEALTH_OK earlier, but maybe that changed?

Weird, I could have sworn I replied to this yesterday :( Anyway, I tried several times, including bare seconds after seeing ceph reporting "HEALTH_OK". I pretty much gave up at that point...
 
Absolutely great things to read about Proxmox - my congrats to the Proxmox devs and contributors.

For my understanding (and that of others who are not as familiar with Ceph as you guys):
I read that Ceph needs at least 2 copies for data safety, but more than 2 copies for HA (see: http://ceph.com/docs/master/architecture/);
however, the Proxmox wiki suggests 3 nodes as the minimum for Ceph.

Now I understand that Ceph, for production use, wants more than 2 copies; that's fair enough, I do see the point.
However: Can it be tested and configured with only 2 nodes?

I'd have 2 servers available for some testing, but not 3 - for production that would be possible though.

Do not confuse *copies* and *nodes*. At least 2 copies are recommended because, in case of a node failure, you still have a copy of your files somewhere in the cluster.

The number of nodes is a different matter: to get a *real* cluster environment, you always need 3 nodes minimum. This is because of how *quorum* works. If a node fails, the two remaining nodes can still work and make decisions together. If you only have 2 nodes, you get a split-brain situation where neither node knows what to do (each one is alone). Search for "quorum" to get more information ;)
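That said, for a pure 2-node test lab (never production), Ceph can be told to keep only 2 copies by lowering the pool defaults in ceph.conf; the monitor quorum issue described above remains, so you would run a single monitor or put a third, tiny monitor-only machine somewhere. A sketch of the relevant settings:

Code:
[global]
        # keep 2 copies instead of the usual 3 (test setups only)
        osd pool default size = 2
        # allow I/O to continue with a single surviving copy
        osd pool default min size = 1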
 
I would really love to have this, but doesn't it depend on a stable cephfs?

CephFS might finally be declared stable in Q3 this year:

Code:
<scuttlemonkey> I think the rough estimate that was given (napkin sketch) was sometime in Q3
<scuttlemonkey> but that's predicated on a lot of other things happening
<scuttlemonkey> from a stability standpoint we're actually looking pretty good (at last observation)

Even though CephFS is not called stable yet, it can already be considered as such (for tests), since all that's really lacking at the moment is some fsck tool... is what I'm given to understand.
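For anyone who wants to experiment with it anyway, mounting CephFS with the kernel client looks roughly like this (the monitor address and secret file are placeholders):

Code:
# mount CephFS via the kernel client; the secret comes from the client keyring
mkdir -p /mnt/cephfs
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret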
 
Today I installed a new node in my cluster, and "pveceph install" installed the "dumpling" version of Ceph (0.67) instead of the "emperor" (0.72) it installed before. Is that normal?

Code:
cat /etc/apt/sources.list.d/ceph.list

deb http://ceph.com/debian-dumpling wheezy main
 
I think this was changed, most likely because dumpling is the latest fully supported release (until firefly is 1 month past release). Emperor is only a community release.
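If you want emperor back on that node anyway, the repository can be switched by hand with standard apt steps (at your own risk; a sketch):

Code:
# point the Ceph repo at emperor instead of dumpling, then upgrade
echo "deb http://ceph.com/debian-emperor wheezy main" > /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get dist-upgrade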
 
Thank you Proxmox Team!

The Ceph integration is great. Works perfectly out of the box! :-)
With the next update including the firewall, Proxmox will handle everything needed in the infrastructure :-)
I have installed a test Proxmox Ceph cluster on our existing Proxmox cluster: first 3 and then 4 nodes with 2 OSDs each. Performance is OK for just being a virtualized install on an existing Proxmox cluster with only a 1 Gbit network; I can still run a Win2008 server for testing.
IOPS are like 3 standard SATA disks, and CrystalDiskMark says 40 MB/s for reading (which seems to be the limit of a 1 Gbit network if OSDs and mons are on the same 1G network..?).
I used it to test cases like removing and adding OSDs, turning off 1 node, or even shutting down the whole cluster (stopping the VMs) to check what happens to the Ceph cluster. But I could not kill it so far. At the moment it looks really stable. :-)

Soon we will purchase our new hardware, so meanwhile I will go on with testing.

I have some questions about live performance and configuration:
1) How many standard SATA disks do I need to get at least 300 MB/s read/write for a single VM (the network will be 10G) with a replication factor of 2? (We will start with 3 nodes.)
2) How can I manage different pools in the Proxmox GUI (is it possible)? One for SSDs and one for SATA? How do I tell a newly created OSD which pool to join?
3) Normally, if I set replica = 2, Ceph tries to store the data on 2 different OSDs on different hosts?
4) We will start with 10 bigger VMs (Win2008R2 TS) on the Ceph cluster. Normal VMs will have peaks (read/write) of about 50 MB/s and one will have peaks of about 300 MB/s+ (fileserver). We don't need too many IOPS (5-10 MB/s at 4K is OK) for the moment.
5) How do I configure the network to have one network for the (existing) Proxmox cluster, one network for monitors, and one for OSDs? In the howto on the Proxmox wiki it is just 1 network for OSDs and monitors... (see the ceph.conf sketch after this list)
6) What ratio are you using for OSDs and SSD journals? Ceph talks about around 1:5.
7) What kind of SSDs perform well? (I still have no experience with SSDs on our servers...)
8) Switch config: using 2 10G switches with 2 network cards per node in round-robin bonding; 1 network for OSDs, one for monitors, and a third network on a standard switch for the VMs... Has anyone done this already? Or similar?

best regards
philipp
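Regarding question 5: Ceph itself only distinguishes two networks, a public network (client and monitor traffic) and a cluster network (OSD replication and recovery traffic); there is no separate monitor network. A ceph.conf sketch with made-up subnets:

Code:
[global]
        # clients and monitors talk on the public network
        public network = 10.10.10.0/24
        # OSD replication and recovery traffic uses the cluster network
        cluster network = 10.10.20.0/24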
 
