Proxmox VE 5.2 and ceph luminous

proxmox_su
New Member · Oct 30, 2018
Greetings. Hopefully someone can help me or point me in the right direction on this.

I followed Rico Baro's hour-long video on Proxmox VE 4.2 and ceph hammer (I used 4.4). I followed his video to a T, and when all was said and done it worked flawlessly: a 3-node cluster with a VM that I could live migrate, plus HA. I was a bit shocked to find out that HA does a restart of the VM when moving it from a failed node to another one, and I was curious whether a newer version behaves any differently.

So my current setup is 5.2 and ceph luminous, which I believe are the latest. However, my pool gives the error "rbd error: rbd list: (2) No such file or directory (500)". What does this mean? Each node in the cluster has an OSD and I copied the keyring, so I believe I've done everything correctly. Looking at my 4.4 cluster, all the files under /etc... are the same.

If anyone could point me in the right direction that would be great. Also, if there's any documentation comparing hammer vs jewel vs luminous, that would be great too. I only assumed luminous was the latest because I found quick documents on upgrading hammer to jewel and jewel to luminous. Thanks.
 
Not sure how pasting a picture in works here, so my apologies. The screenshot called ceph_4_4 shows that typing rados lspools returns rbd. The second screenshot, ceph_5_2, returns nothing, and that's the cluster I'm having the problems with. Other than that, the other two commands appear to return the same information. The third screenshot is of the GUI: when "Use Proxmox VE managed hyper-converged ceph pool" is checked, I cannot enter anything for the pool and the box stays red. Only when I uncheck that box can I enter the information manually.
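In text form, the comparison the first two screenshots show boils down to this (summarised from memory, not an exact paste):

Code:
# on the 4.4 / hammer cluster
rados lspools    # prints: rbd

# on the 5.2 / luminous cluster
rados lspools    # prints nothing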

ceph_4_4.PNG ceph_5_2.PNG
 

Attachments

  • add_rbd.PNG
Hi,
did you upgrade the PVE 5 host (was ceph working there before), or did you install ceph fresh?
You wrote that you copied the keyring - but which hosts are the mon hosts?

Please post the output of the following commands (copy the text via ssh rather than posting images):
Code:
cat /etc/ceph/ceph.conf
ip a s
ceph -s
ceph mon versions
Udo
 
I installed Proxmox VE 5.2 from the ISO and then installed ceph; the ceph setup itself went roughly as sketched below, and the output you asked for follows after that.
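(This is from memory, so the exact flags may be off; the disk path is just an example, and the network is the one you can see in ceph.conf.)

Code:
pveceph install --version luminous     # on every node
pveceph init --network 10.10.10.0/24   # once, on the first node
pveceph createmon                      # on each node that should run a monitor
pveceph createosd /dev/sdb             # one OSD per node; disk path is just an example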

root@machine1:~# cat /etc/ceph/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.10.0/24
fsid = 97c5dc40-1011-472a-b829-99660493b49c
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 10.10.10.0/24

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.machine3]
host = machine3
mon addr = 10.10.10.12:6789

[mon.machine1]
host = machine1
mon addr = 10.10.10.10:6789

[mon.machine2]
host = machine2
mon addr = 10.10.10.11:6789



root@machine1:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 00:0c:29:87:a8:36 brd ff:ff:ff:ff:ff:ff
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:87:a8:40 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.10/24 brd 10.10.10.255 scope global ens37
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe87:a840/64 scope link
valid_lft forever preferred_lft forever
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0c:29:87:a8:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.203.157/24 brd 192.168.203.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe87:a836/64 scope link
valid_lft forever preferred_lft forever



root@machine1:~# ceph -s
cluster:
id: 97c5dc40-1011-472a-b829-99660493b49c
health: HEALTH_OK

services:
mon: 3 daemons, quorum machine1,machine2,machine3
mgr: machine1(active), standbys: machine2, machine3
osd: 3 osds: 3 up, 3 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 3.02GiB used, 117GiB / 120GiB avail
pgs:


root@machine1:~# ceph mon versions
{
"ceph version 12.2.8 (6f01265ca03a6b9d7f3b7f759d8894bb9dbb6840) luminous (stable)": 3
}
 
I find it interesting that I just watched Rico Baro's newer YouTube video, in which he uses Proxmox VE 5.1 and Ceph luminous, and you are absolutely correct: in that video he sets up a pool on the first node, which I hadn't done. Between your reply and his video I set up a pool, and so far so good. I just finished creating a VM and I'm hoping to test live migration and HA later today.
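For anyone who finds this thread later, the CLI equivalent of what I did in the GUI should be roughly the following (the pool name, pg count and storage ID are just examples, not necessarily what I typed):

Code:
pveceph createpool vm-pool --pg_num 128                  # run on one of the ceph nodes
pvesm add rbd ceph-vm --pool vm-pool --content images    # add it as a Proxmox storage

As far as I can tell, with a Proxmox-managed pool the storage picks up the monitors and admin keyring from the local ceph setup, so there is no keyring to copy by hand.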

Question, though: why in 4.4 with hammer did I only have to add the RBD storage at the datacenter level, while with 5.2 and luminous I have to create a pool on the first node? Could you explain, or point me in the right direction for some documentation on this? I'm really interested to learn.

Thanks for all your help Udo!
 
