Hi everyone,
We have a 3-node cluster, ZFS + Ceph, 17 OSDs, 2 pools.
We installed the cluster with Proxmox VE 5.3 and over the years we've upgraded all nodes to 6 and then to 7; we also upgraded Ceph from Luminous to Nautilus to Octopus.
During the migration from Nautilus to Octopus we had a problem with one OSD, so we had to destroy it and rebuild it. All the OSDs had originally been created with the ceph-disk command, but by the time we rebuilt the problematic OSD only ceph-volume was available, so we used that and the rebuilt OSD ended up on LVM.
Now we have 2 nodes with only ceph-disk-based OSDs and one node with 4 ceph-disk OSDs + 1 LVM (ceph-volume) OSD.
How can we solve this problem?
Reinstall all nodes?
Rebuild the OSDs one by one, node by node (roughly as sketched below)?
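For the second option, this is roughly the per-OSD procedure I would expect; it's only a sketch, osd.4 and /dev/sdX are placeholders for our real IDs/devices, and we would wait for the cluster to be back to HEALTH_OK and fully rebalanced before touching the next OSD:

    # check the OSD can be removed without losing redundancy
    ceph osd safe-to-destroy osd.4
    # take it out of the cluster and stop the daemon
    ceph osd out 4
    systemctl stop ceph-osd@4
    # destroy the OSD and wipe the disk (Proxmox wrapper)
    pveceph osd destroy 4 --cleanup
    # recreate it; on Octopus this goes through ceph-volume, so the new OSD is LVM-based
    pveceph osd create /dev/sdX
    # watch recovery/rebalance finish before repeating on the next OSD
    ceph -s

If that's correct, after going through all 17 OSDs every node would end up with LVM-based OSDs, so at least they would all be consistent again.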
Thanks for any reply.