new PVE 5.3 3-node cluster install with Ceph, minor issue

chalex

Hey all,

I did a completely fresh install of a test cluster on three machines. All the Ceph stuff seems to work just fine. I created the pool and added it to PVE with

pvesm add rbd firstpool

It shows up in the GUI, but if I look at the Contents tab, I get an error:
rbd error: rbd: list: (2) No such file or directory (500)

If I try to create a new VM with a disk in that pool, I get the error:
TASK ERROR: unable to create VM 100 - error with cfs lock 'storage-firstpool': rbd error: rbd: list: (2) No such file or directory

On the CLI, things look OK:

root@pve-c3:~# ceph -s
  cluster:
    id:     0715a097-ae2b-4000-80ba-46aa7462989a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum pve-c1,pve-c2,pve-c3
    mgr: pve-c1(active), standbys: pve-c2, pve-c3
    osd: 19 osds: 19 up, 19 in

  data:
    pools:   1 pools, 1024 pgs
    objects: 0 objects, 0B
    usage:   19.2GiB used, 34.5TiB / 34.6TiB avail
    pgs:     1024 active+clean


root@pve-c3:~# rados lspools
firstpool


What did I miss? Must be something simple.


root@pve-c3:~# pvesm status
Use of uninitialized value in string eq at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 519.
Use of uninitialized value $free in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 525.
Use of uninitialized value $used in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 525.
Use of uninitialized value $avail in int at /usr/share/perl5/PVE/Storage.pm line 1113.
Use of uninitialized value $used in int at /usr/share/perl5/PVE/Storage.pm line 1114.
Name            Type     Status           Total        Used    Available        %
firstpool        rbd     active               0           0            0    0.00%
local            dir     active        18963120     2040708     15936096   10.76%
local-lvm    lvmthin     active        38039552           0     38039552    0.00%
root@pve-c3:~#



root@pve-c3:~# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
firstpool   0B       0      0      0                  0       0        0      0 0B      0 0B

total_objects    0
total_used       19.2GiB
total_avail      34.5TiB
total_space      34.6TiB

Maybe the GUI doesn't use the correct pool name?


root@pve-c3:~# rbd ls
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.
rbd: list: (2) No such file or directory

root@pve-c3:~# rbd ls firstpool

root@pve-c3:~#

Hi Alwin,

Thanks for your response. I read that section, but it seems to me that I only need to do anything there if I have an external Ceph cluster, whereas I have the PVE hyper-converged Ceph.
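For what it's worth, my reading of that section is that an external cluster would need the monitors and an auth user spelled out in storage.cfg, roughly like the sketch below (the storage name, addresses, and user are placeholders, not from my setup), plus the keyring copied to /etc/pve/priv/ceph/external-example.keyring. With the hyper-converged setup, PVE should pick all of that up from the local ceph.conf.

rbd: external-example
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool firstpool
        content images
        username admin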

Here are the commands from my history:

56 pveceph createpool firstpool -pg_num 1024
61 pvesm status
62 pvesm help
63 pvesm list
64 pvesm help |less
65 pvesm help |less
66 pvesm add rbd firstpool
68 pvesm status
69 pveceph status

Here is what I have in the config file:



root@pve-c1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

rbd: firstpool

root@pve-c1:~#

Is something missing from that storage config?

OK, I figured it out: I needed to specify more info for pvesm:

root@pve-c1:~# pvesm add rbd hyperconverged --pool firstpool

And now my storage.cfg section looks like this:


rbd: hyperconverged
        pool firstpool
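For anyone following along, a quick sanity check would be something like this (commands only; I didn't save the output):

root@pve-c1:~# pvesm status        # 'hyperconverged' should now show real Total/Used numbers
root@pve-c1:~# rbd ls firstpool    # empty list, instead of the 'No such file or directory' error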
 
chalex said:
root@pve-c1:~# pvesm add rbd hyperconverged --pool firstpool

Glad it worked. When no pool is specified, it assumes the default pool name 'rbd'.
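In other words, an entry like

rbd: somestorage

behaves the same as

rbd: somestorage
        pool rbd

which is why the original 'pvesm add rbd firstpool' ended up pointing at a pool named 'rbd' that was never created.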
 
