I've got a small problem - I wanted to swap servers in my Proxmox 7.1 cluster. Removing node pve002 and adding the new pve005 went fine. Ceph was healthy.
But now, when I try to shut down pve004 and mark its last NVMe OSD as out, I get 19 PGs in an inactive state because the new osd.5 on pve005...
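For anyone hitting a similar situation: before shutting down a node or marking its last OSD out, Ceph (Luminous and later) can report whether the remaining OSDs would keep all PGs active. A minimal pre-check might look like this, a sketch assuming the osd.5 from the post above (adjust IDs for your own cluster):

```shell
# Ask the monitors whether stopping this OSD would leave any PG
# without enough active replicas (exit code 0 means safe).
ceph osd ok-to-stop osd.5

# Ask whether the OSD could be removed without risking data
# (all of its PGs are fully replicated elsewhere).
ceph osd safe-to-destroy osd.5

# List PGs that are currently stuck inactive.
ceph pg dump_stuck inactive

# Review pool size/min_size; with size=3 and too few hosts left,
# CRUSH may simply have nowhere to place the third replica.
ceph osd pool ls detail
```

If `ok-to-stop` refuses, that usually points at the same root cause as the inactive PGs: CRUSH cannot satisfy the pool's replication rule with the nodes that would remain.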
Hi,
I've got a problem with my Ceph cluster.
Cluster specification:
4x node
4x mon
4x mgr
37x osd
I started from Ceph Hammer, so I followed these tutorials:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel - without any problems
https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous - without any...
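When hopping a cluster through several major releases like this, it helps to confirm that every daemon actually restarted on the new release before starting the next hop. A short check, as a sketch for the Luminous step (earlier releases lack `ceph versions`):

```shell
# Show which release each daemon class (mon/mgr/osd) is running;
# available from Luminous onward.
ceph versions

# Once all OSDs run the new release, raise the minimum so that
# older OSDs can no longer rejoin the cluster.
ceph osd require-osd-release luminous
```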