rbd utility

nada

Hi, we have an external Ceph cluster and it is currently linked to our Proxmox cluster via RBD and CephFS.
Everything works well :-)
Code:
# pvesm status
Name                  Type     Status           Total            Used       Available        %
backup2                dir     active       503836032       189078528       314757504   37.53%
cephfs              cephfs     active    111043796992      1254940672    109788856320    1.13%
local                  dir     active       302694912         2585216       300109696    0.85%
rbd                    rbd     active    109830088547        41232227    109788856320    0.04%
rbd_ssd                rbd     active     19555278042        75576538     19479701504    0.39%
san2020janpool     lvmthin     active       283115520        68400709       214714810   24.16%
zfs                zfspool     active       202145792         8562260       193583532    4.24%
We were testing moving the rootfs disk of a CT and a QM to RBD, and that works fine.
But when I wanted to list RBD images or show image info with the rbd utility directly on a Proxmox node, it failed.
So I created symlinks and now it works, and I can use the rbd utility in scripts 8-) (see the sketch after the output below).
Please, is this correct? Or should I do it a different way?
Nada

Code:
root@mox5:~# ls -la /etc/ceph/
total 11
drwxr-xr-x  2 root root   5 Aug  4 13:02 .
drwxr-xr-x 97 root root 192 Jul 22 19:06 ..
lrwxrwxrwx  1 root root  27 Aug  4 13:02 rbd.conf -> /etc/pve/priv/ceph/rbd.conf
lrwxrwxrwx  1 root root  30 Aug  4 13:01 rbd.keyring -> /etc/pve/priv/ceph/rbd.keyring
-rw-r--r--  1 root root  92 Aug 28  2019 rbdmap

root@mox5:~# rbd -c /etc/pve/priv/ceph/rbd.conf ls
hdd_rbd_4tb
vm-111-disk-0
vm-204-disk-0

root@mox5:~# rbd -c /etc/pve/priv/ceph/rbd.conf info vm-204-disk-0
rbd image 'vm-204-disk-0':
    size 15 GiB in 3840 objects
    order 22 (4 MiB objects)
    snapshot_count: 1
    id: 3eceabaf68a3e
    block_name_prefix: rbd_data.3eceabaf68a3e
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features: 
    flags: 
    create_timestamp: Fri Jul 30 14:13:52 2021
    access_timestamp: Fri Jul 30 14:13:52 2021
    modify_timestamp: Fri Jul 30 14:13:52 2021
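
In case it helps others: the symlinks above were created roughly like this (just a sketch, assuming the storage is named "rbd" as in my setup; adjust the names to your storage):

Code:
# /etc/pve/priv/ceph/ is where Proxmox VE keeps the conf and keyring for the external cluster (present in my setup)
ln -s /etc/pve/priv/ceph/rbd.conf /etc/ceph/rbd.conf
ln -s /etc/pve/priv/ceph/rbd.keyring /etc/ceph/rbd.keyring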
 
You could also call it with all the parameters manually. That is how Proxmox VE calls it when an external Ceph cluster is used:

Code:
rbd -p <POOL> -m <MON1>,<MON2> -n client.admin --keyring /etc/pve/priv/ceph/<STORAGE>.keyring ls

A few warnings about missing config files are to be expected.
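
For example, with the "rbd" storage from above it could look something like this (the monitor addresses are placeholders, and the pool is assumed to have the same name as the storage):

Code:
# placeholder monitor IPs; pool name assumed to match the storage name
rbd -p rbd -m 192.168.1.11,192.168.1.12,192.168.1.13 -n client.admin --keyring /etc/pve/priv/ceph/rbd.keyring ls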
 
Thank you Aaron, so I can create the symlinks on all Proxmox nodes where the RBD storage is shared.
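
Something like this on each node should do it (a rough sketch, assuming the storages are named "rbd" and "rbd_ssd" as above; /etc/pve/priv/ceph/ lives on the shared pmxcfs, so the targets already exist on every node):

Code:
# sketch: run once on every Proxmox node that should use the rbd CLI
# assumes a <storage>.conf and <storage>.keyring exist under /etc/pve/priv/ceph/ for each storage
for storage in rbd rbd_ssd; do
    ln -s /etc/pve/priv/ceph/${storage}.conf    /etc/ceph/${storage}.conf
    ln -s /etc/pve/priv/ceph/${storage}.keyring /etc/ceph/${storage}.keyring
done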