You need to do the following:
ceph osd crush rm osd.<osd_num>
ceph auth del osd.<osd_num>
ceph osd rm <osd_num>
ceph osd crush rm <nodename>
All these steps have to be done after stopping the ceph-mon and ceph-mgr services on the nodes to be removed.
To avoid rebalancing you may set the noout and norebalance flags.
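A minimal sketch of setting and later clearing those flags (run on any node with a working admin keyring, and unset them once the removal is finished):
ceph osd set noout
ceph osd set norebalance
# ... remove the OSDs / node as above ...
ceph osd unset norebalance
ceph osd unset noout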
Something like this:
dir: local
    path /var/lib/vz
    content snippets,backup,rootdir,vztmpl,images,iso
    maxfiles 0
    shared 1
    is_mountpoint 1
    mkdir 0
and in /etc/fstab you have the following entry:
<ip of nfs>:<shared directory> /var/lib/vz nfs rw,hard 0 0
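A concrete version of that line, with a purely hypothetical server address and export path:
192.168.1.50:/export/pve /var/lib/vz nfs rw,hard 0 0
After editing /etc/fstab, mount -a (or a reboot) should bring the mount up, and the dir storage above then sits on top of it.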
My use case is working fine.
I have a VM on cluster1. I need to transfer it to cluster2, as Proxmox does not support migration from cluster1 to cluster2.
1) I have added the Ceph storage of cluster2 as an external RBD in cluster1 (see the sketch after this list)
2) Created a clone of the running VM and chose the destination as External...
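For reference, adding an external Ceph pool as RBD storage can be done from the CLI roughly like this (a sketch only; the storage ID, monitor address, pool name and keyring path are assumptions, not values from this thread):
pvesm add rbd cluster2-rbd --monhost "172.19.2.24" --pool rbd --username admin --content images
# copy the admin keyring of cluster2 to the location Proxmox expects on cluster1:
# /etc/pve/priv/ceph/cluster2-rbd.keyring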
But on the auth side I am still getting an error; the pvesm error is gone.
Name       Type  Status  Total       Used       Available   %
BackupNFS  pbs   active  4294967296  663574528  3631392768  15.45%
BackupRBD  rbd...
No, only SSH works fine; the rest gives errors.
root@inc1pve25:/etc/pve/priv/ceph# ceph -m 172.19.2.24 --user admin -s
[errno 1] error connecting to the cluster
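One way to narrow down the auth side is to point the ceph client explicitly at the keyring Proxmox uses for the external storage (a sketch; the path assumes the storage is called BackupRBD and its keyring was copied to /etc/pve/priv/ceph/):
ceph -m 172.19.2.24 --id admin --keyring /etc/pve/priv/ceph/BackupRBD.keyring -s
If this also fails with errno 1, the keyring or cap on the external cluster is the problem rather than the Proxmox storage definition.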
Yes, all nodes are reachable, no issues. I am able to log in to all mon hosts of the external Ceph cluster from my cluster.
In the UI I get the following error when I click on Content of the external Ceph storage:
rbd error: rbd: listing images failed: (2) No such file or directory (500)
I have two Proxmox Ceph clusters. I have added the Ceph block storage of one cluster to the other cluster as follows:
rbd: BackupRBD
    content images
    krbd 1
    monhost 172.19.2.24 172.19.2.25 172.19.2.26 172.19.2.27 172.19.2.28 172.19.2.29 172.19.2.30 172.19.2.31...
It is correct. The memory shown as available is actually space that is reclaimable by the kernel (page cache, slab, etc.).
Read the attached article for reference. It is true for Ubuntu/RHEL/CentOS.
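You can see this on any Linux host: the kernel reports both free memory and an availability estimate, and the gap between them is mostly reclaimable cache. For example (commands only, output differs per host):
free -h
grep -E 'MemFree|MemAvailable' /proc/meminfo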