Proxmox VE Ceph Server released (beta)

Hello,

I would like to run OpenVZ containers on the Ceph nodes. Is this possible? I don't see a way to do it. When I tried to create a container, Proxmox only gave the option of local storage.

It would be great to be able to put the container on the Ceph storage.
 
Thank you, Udo.

Is there a way to see how much bandwidth the Ceph nodes are using? The only way I can think of right now is to go to the switch and use its web GUI bandwidth-monitoring tool.

It would be nice to be able to see a graphical performance bar for the Ceph nodes in the Proxmox GUI.
Hi,
I use ceph-dash as a GUI for the Ceph cluster.
But it shows statistics for the whole cluster, not for single nodes...


Udo
 
Hi,
I use ceph-dash as a GUI for the Ceph cluster.
But it shows statistics for the whole cluster, not for single nodes...


Udo

Thank you, Udo. The screenshot looks great, but I couldn't install it on the Proxmox (Debian) host. I tried to follow the documentation, but it didn't work. Any pointers on how to install this would be much appreciated.
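For reference, a rough install sketch on a Debian-based Proxmox node; this assumes the upstream ceph-dash GitHub repository and that the node already has a working /etc/ceph/ceph.conf plus an admin keyring (the package names may differ between releases):

Code:
# dependencies: Flask and the Python rados bindings (assumed Debian package names)
apt-get install git python-flask python-ceph
# fetch and start ceph-dash; the built-in Flask server listens on port 5000 by default
git clone https://github.com/Crapworks/ceph-dash.git
cd ceph-dash
./ceph-dash.py

Then point a browser at http://&lt;node-ip&gt;:5000.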
 
OpenVZ cannot be used on RBD. You have to set up CephFS for that.

Sent from my SGH-I747M using Tapatalk

Thank you, symmcom. I appreciate it. Since CephFS is still in beta and not production-ready, is there an alternative? Has anyone tried GlusterFS?
 
Is there a way to see how much bandwidth the Ceph nodes are using? The only way I can think of right now is to go to the switch and use its web GUI bandwidth-monitoring tool.

It would be nice to be able to see a graphical performance bar for the Ceph nodes in the Proxmox GUI.

In addition to what Udo shared about ceph-dash, you can also use the nload command-line tool to monitor a network interface. The display looks like the following.
nload.png

The command format is: # nload ethX

This shows the bandwidth usage of a particular network interface in real time. In the example above, it is showing bandwidth usage for eth2, which is part of the Ceph network.
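nload is available in the standard Debian repositories; a minimal example (eth2 is just the interface used above):

Code:
apt-get install nload
# live traffic for the Ceph-facing interface
nload eth2
# with no argument, nload lets you cycle through all interfaces with the arrow keys
nload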
 
Thank you, symmcom. I appreciate it. Since CephFS is still in beta and not production-ready, is there an alternative? Has anyone tried GlusterFS?

I think "beta" is a relative word here; it is more of a precautionary measure on the Ceph developers' side, and that's just my opinion. I have been using CephFS for a while now without data loss or inconsistency. I use it mostly to store ISOs, templates, and OpenVZ containers. I would not put KVMs on CephFS even if it were very stable and production-ready, simply because RBD has better performance than CephFS: CephFS is a file system, whereas RBD provides raw blocks without the additional file-system layer that CephFS adds.
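As a rough illustration of how CephFS can be attached to a Proxmox node for ISOs, templates, and containers (a sketch only; the monitor address, mount point, and storage name are made-up examples, and the secretfile must contain the client key):

Code:
# mount CephFS with the kernel client on each node
mkdir -p /mnt/cephfs
mount -t ceph 10.10.10.3:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# then define it as a plain directory storage in /etc/pve/storage.cfg
dir: cephfs-store
        path /mnt/cephfs
        content iso,vztmpl,rootdir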

Gluster is an excellent storage system, but it gets somewhat messy to maintain two separate storage systems. Gluster and Ceph are both very powerful storage technologies; using both would be like running Proxmox and VMware side by side. In reality this does not make much sense unless it is specifically required.
 
OpenVZ cannot be used on RBD. You have to set up CephFS for that.

Do quotas and the Proxmox UI (showing disk usage etc.) still work with OpenVZ on CephFS?
I once tried XFS and had all sorts of problems, even with patched PVE Perl modules (OK, that was around Proxmox 1.9 or so).
 
I was wondering, with ceph-server still being a tech preview, would it possibly be more stable to just install the normal Ceph packages and forgo the Proxmox GUI integration? Possibly when the RHEL7 3.10 kernel comes around...? I know there is a kernel deadlock issue when locally mapping an RBD on a host that is running a monitor, but AFAIK QEMU uses librbd (userspace, therefore "slow") to interface with the RBDs instead of mapping them in kernel space, so the vanilla Ceph packages should just work, I presume?

Also, while we're at it: does Proxmox actually use the 'ceph' CLI to interface with Ceph clusters you have added as storage (not ceph-server, but a remote cluster)? As in: does ceph-common on the Proxmox nodes need to be from the same branch as the Ceph cluster? I'm asking since the ceph CLI has changed significantly between Cuttlefish and Dumpling.
 
I was wondering, with ceph-server still being a tech preview, would it possibly be more stable to just install the normal Ceph packages

We use the 'normal' Ceph packages.

Also, while we're at it: does Proxmox actually use the 'ceph' CLI to interface with Ceph clusters you have added as storage (not ceph-server, but a remote cluster)? As in: does ceph-common on the Proxmox nodes need to be from the same branch as the Ceph cluster? I'm asking since the ceph CLI has changed significantly between Cuttlefish and Dumpling.

We use the standard Ceph packages and the standard Ceph API.
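If in doubt, it is easy to compare what the Proxmox node and the cluster are actually running; a quick sanity check, nothing more:

Code:
# on the Proxmox node
ceph --version
dpkg -l | grep -i ceph
# and the same on a cluster node, for comparison
ceph --version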
 
1- The drivers are built into the kernel.

2- We use InfiniBand for the cluster / Ceph network, and vmbr0 etc. for the VMs.

3- We have not gotten to speed-testing Ceph yet...

4- For the next message in the thread: we use these model cards, per the lspci CLI command: MT25208, MT25418 and MT25208.

Code:
02:00.0 InfiniBand: Mellanox Technologies MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE] (rev a0)

03:00.0 InfiniBand: Mellanox Technologies MT25208 [InfiniHost III Ex] (rev a0)

IB is very easy to set up. See the wiki and ask questions, but please use a different thread.
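For completeness, an IPoIB stanza in /etc/network/interfaces, of the kind the wiki describes, typically looks roughly like this (addresses and interface names are examples only):

Code:
auto ib0
iface ib0 inet static
        address 10.10.10.3
        netmask 255.255.255.0
        pre-up modprobe ib_ipoib
        pre-up echo connected > /sys/class/net/ib0/mode
        mtu 65520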

I couldn't find any Mellanox cards in quantity, so I opted for QLogic InfiniBand cards. What a nightmare. Proxmox won't recognize them, and QLogic says the driver is only for Red Hat Linux or SUSE. I am out of luck with 14 of these.
 
Hello,

I am not sure what I am doing wrong here, but the Ceph nodes keep giving the "can't connect to cluster" message.

Each of the 3 Proxmox hosts has 3 NICs.

vmbr0 = eth0 = Proxmox host (assigned IP 5.5.5.10/24, gateway 5.5.5.10)
vmbr1 = eth1 = VMs (no IP assigned)
vmbr2 = eth3 = Ceph nodes (assigned IP 10.10.10.3/24 -- .4 for the second Ceph node, .5 for the third).

I went through the installation procedure as suggested by the Proxmox wiki doc, then ran the command:
pveceph init --network 10.10.10.0/24

Everything seemed fine. I was able to create the monitors, pool, and storage, and copied the keyring.
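For reference, the rough command sequence from the wiki at that stage was something like the following (the device and pool names are examples only):

Code:
pveceph install                        # on every node
pveceph init --network 10.10.10.0/24   # once, on the first node
pveceph createmon                      # on each monitor node
pveceph createosd /dev/sdb             # for each OSD disk
pveceph createpool mypool              # then add the RBD storage via the GUI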

However, when I tried to create the VMs, it said it cannot connect to RBD.

Browsing to the storage name on each host gives "cannot communicate" or "cannot connect to the cluster".

Any help would be greatly appreciated. Thank you.
 
I know you already said you created the keyring, but did you make sure the name of the keyring matches the name of the storage you created? For example, if you named the Ceph storage "ceph-rbd" through the Proxmox GUI, then the Ceph keyring should be copied as ceph-rbd.keyring into /etc/pve/priv/ceph.
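In other words, something along these lines (the storage name ceph-rbd is just the example from above, and this assumes the admin keyring is in its default location):

Code:
mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-rbd.keyring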
 
I know you already said you created the keyring, but did you make sure the name of the keyring matches the name of the storage you created? For example, if you named the Ceph storage "ceph-rbd" through the Proxmox GUI, then the Ceph keyring should be copied as ceph-rbd.keyring into /etc/pve/priv/ceph.

Thank you. That was one of the steps I did very carefully. I noticed that even before creating the VMs, when I tried to browse the storage name, the error was already there.

I am reinstalling the servers for the 5th time. I will make sure I have the keyring copied correctly. The reason for the multiple installs is that every time this happens, the OSDs need to be wiped out from the hard drive controller. I've tried fdisk (option d, option w), but it did not clear out the partitions. The only solution was to reload.
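On the partition problem: fdisk alone is often not enough because the OSD disks carry GPT labels; something like the following usually clears a disk for reuse (destructive, and /dev/sdb is only an example):

Code:
# zap the old partition tables on the OSD disk
ceph-disk zap /dev/sdb
# or with sgdisk
sgdisk --zap-all /dev/sdb
# last resort: overwrite the start of the disk
dd if=/dev/zero of=/dev/sdb bs=1M count=200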

Does it matter if I give the storage the same name as the pool and the cluster?
 
It does not matter at all. Even with the same name they are still separate entities.

On one of the Proxmox nodes, if you run # ceph -s, what result do you get?


root@prox2:~# ceph -s
2014-06-25 09:40:24.597315 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0xc990d0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0xc99340).fault
2014-06-25 09:40:27.597304 7f44703fb700 0 -- :/1010268 >> 10.10.10.4:6789/0 pipe(0xc9c9e0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xc9cc50).fault
2014-06-25 09:40:30.597619 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0x7f446c001d90 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f446c002000).fault
2014-06-25 09:40:33.597912 7f44703fb700 0 -- :/1010268 >> 10.10.10.4:6789/0 pipe(0x7f446c004010 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f446c004280).fault
2014-06-25 09:40:36.598206 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0xc9ba30 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xc9bca0).fault
2014-06-25 09:40:39.598356 7f44703fb700 0 -- :/1010268 >> 10.10.10.4:6789/0 pipe(0xc9e720 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xc9e990).fault
2014-06-25 09:40:42.598711 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0x7f446c003ca0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f446c003f10).fault
^CError connecting to cluster: InterruptedOrTimeoutError

I am reinstalling the server. This time I will do what worked before and use the same subnet as the Proxmox host.
 
root@prox2:~# ceph -s
2014-06-25 09:40:24.597315 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0xc990d0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0xc99340).fault
2014-06-25 09:40:27.597304 7f44703fb700 0 -- :/1010268 >> 10.10.10.4:6789/0 pipe(0xc9c9e0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xc9cc50).fault
2014-06-25 09:40:30.597619 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0x7f446c001d90 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f446c002000).fault
2014-06-25 09:40:33.597912 7f44703fb700 0 -- :/1010268 >> 10.10.10.4:6789/0 pipe(0x7f446c004010 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f446c004280).fault
2014-06-25 09:40:36.598206 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0xc9ba30 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xc9bca0).fault
2014-06-25 09:40:39.598356 7f44703fb700 0 -- :/1010268 >> 10.10.10.4:6789/0 pipe(0xc9e720 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0xc9e990).fault
2014-06-25 09:40:42.598711 7f44704fc700 0 -- :/1010268 >> 10.10.10.3:6789/0 pipe(0x7f446c003ca0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f446c003f10).fault
^CError connecting to cluster: InterruptedOrTimeoutError

I am reinstalling the server. This time I will do what worked before and use the same subnet as the Proxmox host.

It does not seem like your Ceph nodes are talking to each other. Before you fully reinstall, I would suggest trying to make it work; there might be some underlying issue preventing this.

Can you ping the other Ceph nodes on the Ceph subnet?
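Besides a plain ping, it is worth checking that the monitor port is actually reachable over the Ceph subnet (addresses taken from your output above):

Code:
ping -c 3 10.10.10.4
# is the monitor port open from another node?
nc -zv 10.10.10.3 6789
# is the monitor listening locally?
netstat -tlnp | grep 6789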
 
It does not seem like your Ceph nodes are talking to each other. Before you fully reinstall, I would suggest trying to make it work; there might be some underlying issue preventing this.

Can you ping the other Ceph nodes on the Ceph subnet?

Thank you. I had already done the reinstall before seeing your message, but I can easily simulate that environment again.

The strange thing is that if I use the same subnet as the Proxmox hosts, it works like a charm.

Yes, I was able to ping all nodes on the Ceph subnet. For the life of me, I could not figure out what it is. If I swap that subnet over and use it for the VMs, it also works fine.

I even tried other subnets; same problem. The only subnet it likes is the Proxmox host subnet.

Thanks for your help.
 
How many MONs have you created?
Is the Ceph subnet on a different switch?
1 Gb LAN or 10 Gb?
What is the content of ceph.conf?

3 MONs created.
The Ceph subnet is on a different switch; I also tried on the same switch in a separate VLAN.
1 Gb LAN.
I have already reloaded, so the content looks different now. I will need to reload it again to simulate the error one more time.

It is working like a charm on the same subnet as the Proxmox host, but I need to find out why the separate subnet isn't working.
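When you rebuild it, the parts of /etc/pve/ceph.conf worth comparing are the network lines, the mon sections, and the keyring path; a working config usually looks roughly like this (the values are examples only, not your actual config):

Code:
[global]
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx
        cluster network = 10.10.10.0/24
        public network = 10.10.10.0/24
        keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.0]
        host = prox1
        mon addr = 10.10.10.3:6789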
 
