Lots of questions now that I've got some decent hardware and am upgrading to 6.0. Per a discussion in another thread, I would like to move the OS of my Ceph nodes from the default LVM-based install on a large SSD (around 2 TB) ideally to a ZFS RAID 1 boot mirror on much smaller SSDs (256 GB). I'm fully expecting the system to be down while I reinstall the OS on the ZFS drives, but once it comes back up and I force it back into the cluster, how do I get the OSDs back online? All the LVM volumes are still there, and the OSDs still show in the CRUSH map, but because the OS has been reinstalled, the "new" node doesn't have the same OSD links and startup files as before.
I'm sure this is similar to a common problem when the OS drive of a Ceph node fails. How do I reinstall and regain use of the OSDs in the system?
In my case, I can install Proxmox on the ZFS RAID in another server, and even copy data from the original system drive to minimize downtime.
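For what it's worth, my rough plan for after the reinstall, once the node has rejoined the cluster and picked up /etc/pve/ceph.conf again, is something along these lines (I'm not sure this is the whole story, so corrections welcome):

    pveceph install                   # put the Ceph packages back on the fresh OS
    ceph-volume lvm list              # should find the existing OSDs from their LVM tags
    ceph-volume lvm activate --all    # recreate /var/lib/ceph/osd/* and enable the systemd units

Is that enough to bring the OSDs back, or are there keyrings or other state I would need to copy over from the old system drive first?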