Hi, I had a 3-node cluster with a working ceph installation and some VMs running.
Today, I had to add a 4th and a 5th node to the cluster.
After installing Proxmox on the 4th node and joining it to the cluster, I tried installing Ceph on that node. The installation failed (or so I assume, since out of nowhere I got a timeout (500) error), so I executed these commands on the new node:

systemctl stop ceph-*.target
apt purge ceph-mon ceph-osd ceph-mgr ceph-mds ceph-base ceph-mgr-modules-core
rm -rf /etc/ceph/*
rm -rf /var/lib/ceph/
rm -rf /etc/pve/ceph.conf
rm -rf /etc/pve/priv/ceph.*
rm -rf /etc/systemd/system/ceph*

I have now realized this erased the config on all 4 nodes. In the GUI, Ceph no longer responds ("rados_connect failed - No such file or directory (500)").

However, the VMs still appear to work perfectly.

Is there anything I can do to recover Ceph so that the OSDs and the VM disks do not disappear?

Thanks for reading!