Errors after adding External RBD

ermanishchawla

Well-Known Member
Mar 23, 2020
I have two Proxmox Ceph clusters. I have added the Ceph block storage of one cluster to the other cluster as follows:

Code:
rbd: BackupRBD
    content images
    krbd 1
    monhost 172.19.2.24 172.19.2.25 172.19.2.26 172.19.2.27 172.19.2.28 172.19.2.29 172.19.2.30 172.19.2.31 172.19.2.46 172.19.2.47 172.19.2.44 172.19.2.45
    pool vm
    username admin

and also added the client keyring from the external Ceph to the /etc/pve/priv/ceph folder of the cluster.
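For reference, PVE looks for that keyring under a file named after the storage ID, not after the Ceph cluster. A sketch of the expected layout, assuming the storage ID `BackupRBD` from the config above (the key value is a placeholder, not a real secret):

```
# /etc/pve/priv/ceph/BackupRBD.keyring
[client.admin]
	key = <key from the external cluster's client.admin keyring>
```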

After doing the changes I am getting the following errors:


Code:
Use of uninitialized value $free in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 557.
Use of uninitialized value $used in addition (+) at /usr/share/perl5/PVE/Storage/RBDPlugin.pm line 557.
Use of uninitialized value $avail in int at /usr/share/perl5/PVE/Storage.pm line 1147.
Use of uninitialized value $used in int at /usr/share/perl5/PVE/Storage.pm line 1148.

Name             Type     Status           Total            Used       Available        %
BackupNFS         pbs     active      4294967296       663573504      3631393792   15.45%
BackupRBD         rbd     active               0               0               0    0.00%
local             dir     active      4294967296       663573504      3631393792   15.45%
local-lvm     lvmthin     active      1714909184               0      1714909184    0.00%
vm                rbd     active     26362720257      8652394497     17710325760   32.82%


What could be the problem?
 
After doing the changes I am getting following error
And what were those changes? What command produces the "uninitialized value" messages?

EDIT: on what pveversion -v are you?
 
And what were those changes? What command produces the "uninitialized value" messages?

EDIT: on what pveversion -v are you?
Code:
root@inc1pve25:/etc/pve/priv/ceph# pveversion -v
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.10-pve1
ceph-fuse: 14.2.10-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-network-perl: 0.4-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 
I meant adding the external Ceph storage; this is the change.
Ok. Can all nodes in the PVE cluster reach the public_network of Ceph? And what version is the external Ceph cluster?
 
Ok. Can all nodes in the PVE cluster reach the public_network of Ceph? And what version is the external Ceph cluster?

Yes, all nodes are reachable, no issues. I am able to log in to all mon hosts of the external Ceph cluster from my cluster.
In the UI I get the following error when I click on Content of the external Ceph storage:

rbd error: rbd: listing images failed: (2) No such file or directory (500)
 
So a connect with ceph -m <mon-ip> --user <username> -s works from the PVE nodes?
 
No, only SSH works fine; the rest gives an error.

Code:
root@inc1pve25:/etc/pve/priv/ceph# ceph -m 172.19.2.24 --user admin -s
[errno 1] error connecting to the cluster
 
But on the auth side I am still getting the error, though the pvesm error is gone.


Code:
Name             Type     Status           Total            Used       Available        %
BackupNFS         pbs     active      4294967296       663574528      3631392768   15.45%
BackupRBD         rbd     active     28190790272      1990638208     26200152064    7.06%
local             dir     active      4294967296       663574528      3631392768   15.45%
local-lvm     lvmthin     active      1714909184               0      1714909184    0.00%
vm                rbd     active     26362653697      8652395521     17710258176   32.82%
 
My use case is working fine.

I have a VM on cluster1 and need to transfer it to cluster2, as Proxmox does not support migration from cluster1 to cluster2.

1) I added the Ceph storage of cluster2 as external RBD in cluster1
2) Created a clone of the running VM and chose the external RBD as the destination
3) Once the clone completed, copied /etc/pve/qemu-server/<vmid>.conf to cluster2 via scp (just a small config file)
4) Edited the config file on cluster2 and renamed the storage to reflect the pool name used in cluster2
5) Deleted the config file from cluster1
6) Started the VM on cluster2
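Step 4 above is just a string substitution in the copied config. A minimal sketch of that edit, assuming hypothetical storage IDs `BackupRBD` (cluster1's external RBD entry) and `cluster2pool` (cluster2's local storage), a single scsi0 disk, and VMID 100:

```shell
# Sample <vmid>.conf content; on a real node the copied file lives in
# /etc/pve/qemu-server/<vmid>.conf on the target cluster.
conf=$(mktemp)
cat > "$conf" <<'EOF'
scsi0: BackupRBD:vm-100-disk-0,size=32G
EOF

# Step 4: point the disk entry at cluster2's storage ID
# (both storage names here are hypothetical examples).
sed -i 's/^scsi0: BackupRBD:/scsi0: cluster2pool:/' "$conf"

cat "$conf"
# prints: scsi0: cluster2pool:vm-100-disk-0,size=32G
```

The disk image itself needs no change, since the clone in step 2 already wrote it into cluster2's pool.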

Yes, it is running and everything is normal.

ceph -m <monip of cluster2> -s is still giving an error, but that is not a concern: as I read from the documentation, my client's auth caps cover only the pool, not cluster status, so it is understandable.