I've got a small problem: I wanted to swap servers in my Proxmox 7.1 cluster. Removing node pve002 and adding the new pve005 went fine, and Ceph stayed healthy.
But now, when I try to shut down pve004 and set its last NVMe OSD to out, I get 19 PGs in an inactive state, because the new osd.5 in pve005...
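For reference, this is roughly how the stuck state can be inspected (a sketch using standard Ceph CLI commands, assuming osd.5 is the newly added OSD as above):

```shell
# Overall cluster health, including the reason PGs are inactive
ceph health detail

# List the PGs currently stuck in the inactive state
ceph pg dump_stuck inactive

# Check the CRUSH layout and whether PGs are actually mapped to osd.5
ceph osd tree
ceph pg ls-by-osd osd.5
```

Comparing the stuck PG list against the PGs mapped to osd.5 should show whether the new OSD is the one the cluster is waiting on.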
I've got a problem with my Ceph cluster.
I started from Ceph Hammer, so I followed these tutorials:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel - without any problems
https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous - without any...