How many OSDs per node?
From your screenshot I saw 5 were out and only 7 still in? If the total is 12, that should be 4 OSDs per node, so why the uneven split?
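If it helps, you can see exactly how the OSDs are spread across the hosts straight from any cluster node; a quick sketch of what I'd run (output layout depends on your CRUSH map):

```
# List every OSD grouped by host, with its up/down and in/out state
ceph osd tree

# Cluster health summary; recovery/rebalance progress shows up here
ceph -s
```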
What is the error when you are trying to access your VM?
Connection timed out, but I see Ceph is rebuilding the datastore.
Why did you remove OSDs from a working node with no backup? ... I am sorry, you might experience data loss ...
I removed an entire node with 4 OSDs, and then I removed just one more from another node. I thought that with 5 OSDs out everything would still work.
Contact support, or hire someone who knows what he is doing.
I tried to contact a company in Italy, but they told me they can't give me assistance unless I buy their products. If there's someone who can do it in Italian, I'm open to it.
For the record, Ceph has finished rebuilding and I can now access the VMs. I'm backing up all the VMs so I can restore them on the new node with ZFS. Two VMs are still shown in a backup state even though no backup is actually running; how do I stop that and put them back in an active state?
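From what I've read, that leftover "backup" state is usually just a stale lock left in the VM config after an interrupted vzdump job, so something along these lines from the node shell should clear it (100 here is just a placeholder VMID):

```
# Make sure no backup job is actually still running
ps aux | grep vzdump

# Check whether the VM config still carries a stale "lock: backup" line
cat /etc/pve/qemu-server/100.conf

# Remove the stale lock, then the VM can be managed/started normally again
qm unlock 100
qm start 100
```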