[SOLVED] problems adding rbd

kyriazis · Well-Known Member · Oct 28, 2019 · Austin, TX
Hello...

I'm running Proxmox 6.0-7, and I am trying to add a Proxmox-managed RBD location (Ceph-nautilus).

From what I understand, you should check "Use Proxmox VE managed hyper-converged Ceph pool" when adding the RBD storage, but that checkbox is greyed out for me! I do have an RBD pool that I just created, and I've also created a CephFS pool that I've successfully mounted.

My Proxmox cluster is currently composed of 7 machines, but I've only installed Ceph on 3 of them (for now). This is more of a proof-of-concept setup. After everything works, the plan is to expand.

Here is some relevant Ceph info:

Code:
root@vis-ivb-07:/var/lib# ceph -s
  cluster:
    id:     ec2c9542-dc1b-4af6-9f21-0adbcabb9452
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum vis-pve-01,vis-ivb-07,vis-ivb-10 (age 41h)
    mgr: vis-pve-01(active, since 4d)
    mds: cephfs:1 {0=vis-pve-01=up:active}
    osd: 3 osds: 3 up (since 4d), 3 in (since 5d)

  data:
    pools:   3 pools, 224 pgs
    objects: 1.77k objects, 6.8 GiB
    usage:   24 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     224 active+clean

root@vis-ivb-07:/var/lib# ceph osd lspools
1 cephfs_data
2 cephfs_metadata
3 rbd_vm
root@vis-ivb-07:/var/lib#

Thank you for any help!
 
My Proxmox cluster is currently composed of 7 machines, but I've only installed Ceph on 3 of them (for now). This is more of a proof-of-concept setup. After everything works, the plan is to expand.

It would still be good to install Ceph on the remaining ones, to be sure all nodes use the same (client) version. Installing the packages is enough; no additional setup or the like is required if you do not want to use a host as a Ceph "server" (i.e., host OSDs or other Ceph services like Monitors, MDS, ... on it).
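On those remaining nodes, something like the following should do (a sketch only; the exact pveceph options may vary slightly with your installed version, and Nautilus is assumed since that is what your cluster runs):
Code:
# install only the Ceph packages, do not create any monitors/OSDs/MDS here
pveceph install --version nautilus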

From what I understand, you should check "Use Proxmox VE managed hyper-converged Ceph pool" when adding the RBD storage, but that checkbox is greyed out for me! I do have an RBD pool that I just created, and I've also created a CephFS pool that I've successfully mounted.

Yes, that understanding is correct. If you set up Ceph with the PVE tooling (e.g., entirely over the web interface, or using the pveceph CLI tool), then that should be the easiest way.
Did you try to add the storage on a node which is an active part of the Ceph cluster? That should not be necessary, but there could be a bug in that regard (some distant memory tells me we fixed something related, but I'm not 100% sure at the moment).
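For comparison, a PVE-managed RBD storage entry in /etc/pve/storage.cfg would be expected to look roughly like this, with no monhost line since the monitors come from the local /etc/ceph/ceph.conf (storage ID "vm" and pool "rbd_vm" taken from your setup):
Code:
rbd: vm
        pool rbd_vm
        content images,rootdir
        krbd 0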

Can you please also post the output of
Code:
pvesm status
?
 

I'm not sure I understand what you mean by saying "add the storage on a node which is an active part of the Ceph cluster". I added it from Datacenter -> Add -> RBD, in which case there is no node context while I'm adding the RBD storage.

pvesm status before adding:

Code:
root@vis-ivb-07:/var/lib# pvesm status
Name             Type     Status           Total            Used       Available        %
NAS               nfs     active     26407955456         1898880     26406056576    0.01%
ceph           cephfs     active       676835328         7159808       669675520    1.06%
local             dir     active        98559220         2572392        90937280    2.61%
local-lvm     lvmthin     active       366276608               0       366276608    0.00%
local2            lvm   inactive               0               0               0    0.00%
root@vis-ivb-07:/var/lib#

pvesm status after adding:

Code:
root@vis-ivb-07:/var/lib# pvesm status
rados_connect failed - Operation not supported
rados_connect failed - Operation not supported
Name             Type     Status           Total            Used       Available        %
NAS               nfs     active     26407955456         1898880     26406056576    0.01%
ceph           cephfs     active       676835328         7159808       669675520    1.06%
local             dir     active        98559220         2572428        90937244    2.61%
local-lvm     lvmthin     active       366276608               0       366276608    0.00%
local2            lvm   inactive               0               0               0    0.00%
vm                rbd   inactive               0               0               0    0.00%
root@vis-ivb-07:/var/lib#

Don't like those "Operation not supported" messages, though.
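In case it helps narrow that down, a couple of quick checks from that node (just guesses at what rados_connect might be unhappy about: the local Ceph client version and the keyring PVE keeps per storage):
Code:
# Ceph client version on this node
ceph --version
# per-storage keyrings copied by PVE
ls -l /etc/pve/priv/ceph/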
 
Problem solved. As you said, I had to add the storage from a node that has Ceph installed. That does not mean picking a node in the tree on the left, since the storage is added at the Datacenter level, but rather using a URL that corresponds to a node that has Ceph installed.
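For anyone hitting the same thing: either point the browser at a node that has Ceph installed (e.g. https://vis-ivb-07:8006) before doing Datacenter -> Add -> RBD, or add the storage from that node's shell. A rough CLI equivalent (storage ID and pool name as above) would be something like:
Code:
# run on a node that has Ceph installed
pvesm add rbd vm --pool rbd_vm --content images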
 
Great you solved it.

OK, then there's still a small bug. As this is on the Datacenter level, the request will always go to the connected node, so selecting another one in the tree normally cannot help here. Either we see if we can proxy this to a "Ceph node" on addition (not too nice and not easy to do cleanly), or at least output a sane error message, so users at least know what the issue (and thus the workaround) is.
 
