qemu with RBD support

hverbeek

Member
Feb 14, 2011
40
1
8
I'm trying to find out which version of qemu is behind 'pve-qemu-kvm: 1.0-8', and more importantly, whether it has RBD support. I'd really like to test out qemu with a Ceph backend. Any pointers? Thanks!
 
I have upgraded our 4-node proxmox cluster to PVE 2 and am running a ceph backend.

Unfortunately, it looks like the version of qemu shipped with PVE 2 does not have rbd support:

Code:
# rados mkpool vmimages
# rados lspools
data
metadata
rbd
vmimages

# qemu-img create -f rbd rbd:vmimages/disk1 10G
qemu-img: Unknown file format 'rbd'

Would you be able to enable rbd support in qemu and make it available for testing? Thanks!
http://ceph.newdream.net/wiki/QEMU-RBD#Building
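For anyone who wants to try this before it lands in PVE, a rough sketch of the build steps from that wiki page (paths, package names and the verification step are my assumptions, not an official recipe):

```shell
# Sketch: building a qemu with RBD enabled. Assumes a Debian-ish host
# and that the Ceph client development libraries are available, e.g.:
#   apt-get install librados-dev librbd-dev
git clone git://git.qemu.org/qemu.git
cd qemu
./configure --enable-rbd --target-list=x86_64-softmmu
make -j4

# The resulting qemu-img should now list 'rbd' among its supported
# formats instead of failing with "Unknown file format 'rbd'":
./qemu-img --help | grep rbd
```

If `configure` complains that rbd support is missing, it is almost always the development headers for librados/librbd that are not installed.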
 
I'm hoping to set up and run Proxmox 2.0 and using Ceph in my lab for testing.

hverbeek: Can you describe the setup of your environment in greater detail?
 
Looks like RBD was only included in the Linux kernel in 2.6.37 or later. My Proxmox host shows kernel 2.6.32-11-pve. Having RBD for testing going forward would be great.

Alternatively, having access to Sheepdog would be great too. What's the status of Sheepdog in Proxmox? If I'm not mistaken, it was originally slated for "tech preview" in 2.0. Is it on hold or being re-evaluated in favor of Ceph and/or GlusterFS?
 
Are you considering Ceph at all instead? It seems Ceph is fairly stable and also part of qemu. Or is there some show stopper when it comes to Ceph as an alternative?
 
Are you considering Ceph at all instead? It seems Ceph is fairly stable and also part of qemu. Or is there some show stopper when it comes to Ceph as an alternative?

AFAIK there is also no stable ceph release (we use kernel 2.6.32).
 
AFAIK there is also no stable ceph release (we use kernel 2.6.32).

Hi dietmar, about Ceph: for the RADOS block device part, we don't need a recent kernel; we only need to compile qemu-kvm against the rados library.
RADOS is stable. I'm in contact with Inktank, the new company created by the Ceph creators. They provide support for RADOS clusters, and it's stable.
I'm going to build a production RBD cluster with their help, so I think I'll add RBD support to Proxmox 2.0. (waiting for storage modules ;)
 
Hi dietmar, about Ceph: for the RADOS block device part, we don't need a recent kernel; we only need to compile qemu-kvm against the rados library.

But rados is 'only' the client side?

RADOS is stable. I'm in contact with Inktank, the new company created by the Ceph creators. They provide support for RADOS clusters, and it's stable.
I'm going to build a production RBD cluster with their help, so I think I'll add RBD support to Proxmox 2.0. (waiting for storage modules ;)

Interesting. Will try to commit the plugin code asap ;-)
 
hverbeek: Can you describe the setup of your environment in greater detail?
Sorry for the late response, I was offline for a few days. As others have pointed out in their responses here, I'm only interested in using Ceph as a storage backend for qemu. I am not interested in using Ceph as a general-purpose filesystem (where clients would mount it with a 2.6.37+ kernel client or a FUSE client).

My setup is very simple and cheap: I have a PVE 2.1 cluster consisting of 4 hosts (native Debian Squeeze installation, then PVE 2.1 on top). The hosts are interconnected via 1GigE and each host has a bunch of disks behind a HW RAID controller. At the German hosting provider Hetzner, you can get such a setup for a few hundred Euro per month. I have installed the Ceph core (OSD, MON and MDS) on all hosts (4 OSDs, 3 MONs, 1 MDS; later I'll add more standby MDSs).

The beauty of this setup is the low cost combined with powerful availability, scalability and automatic re-organisation. Obviously I can't say anything about performance yet. Beats DRBD+NFS in terms of simplicity and cost any day. Beats GlusterFS in cost and re-organisation. Hopefully beats sheepdog in performance. I just need qemu to be compiled with `--enable-rbd`..... :)

As someone has pointed out, Qemu's RBD support has been stable since qemu 0.13.1. Yes, the ceph backend is still evolving, but if PVE would "support" it (for advanced users, on the commandline only; "support" as in "enable it", not "provide commercial support") ceph could get a boost in usage, leading to better stability!
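To make concrete what an `--enable-rbd` build would buy us, this is roughly how a guest disk would be created and attached; the pool/image names follow my earlier example and the exact `-drive` options are an assumption on my part:

```shell
# Sketch: assumes a qemu/kvm binary built with --enable-rbd, a reachable
# Ceph cluster (with /etc/ceph/ceph.conf in place), and a 'vmimages' pool.

# Create a 10G image in the pool (this is what fails today with
# "Unknown file format 'rbd'"):
qemu-img create -f rbd rbd:vmimages/disk1 10G

# Boot a guest directly from the RBD image over the network:
kvm -m 1024 \
    -drive file=rbd:vmimages/disk1,if=virtio,cache=none
```

No kernel rbd module is involved at any point; the qemu process talks to the OSDs directly through librados/librbd, which is exactly why this works on the 2.6.32 PVE kernel.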
 
I'm going to build a production RBD cluster with their help, so I think I'll add RBD support to Proxmox 2.0. (waiting for storage modules ;)

I found it not very complicated to build the cluster. But what do you mean by "storage modules"?
 
Thanks hverbeek for the info. That's pretty similar to what I had in mind. Unfortunately I'm not in Germany. :S I'll have to test it on my test cluster. My current setup is built directly on Proxmox ISOs, though.