Hi all,
I have a small cluster of 5 PVE nodes with 14 OSDs.
I added a drive to one of the PVE nodes (it has a non-HBA controller, so I had to set noout, shut the system down, add the drive to the irritating controller as a single-disk RAID0, then reboot). The OS itself sees the drive, and I can run pveceph createosd (which reports all success), but _nothing_ is created. No new OSD in /var/lib/ceph/osd/.
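For reference, the sequence I ran was roughly the following (/dev/sdX stands in for the new RAID0 virtual disk; I'm not posting the real device name):

```shell
# Stop Ceph from rebalancing while the node is offline
ceph osd set noout

# ... powered off, configured the new disk as a single-drive RAID0
# in the controller firmware, then rebooted ...

# Confirm the OS sees the new disk
lsblk

# Create the OSD on the new disk -- this reports success,
# but no OSD directory ever appears under /var/lib/ceph/osd/
pveceph createosd /dev/sdX

# Allow rebalancing again afterwards
ceph osd unset noout
```

These are live cluster commands, so treat the snippet as a description of what I did rather than something to copy-paste.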
I figured it was some oddity of the drive controller, even though three other drives are already on it. So I stood up another node (the fifth), added it to the cluster, and attempted to add an OSD there. It *said* it completed successfully, but again -- no OSD, and nothing in /var/lib/ceph/osd on the new host either.
I don't see any errors in /var/log/ceph/*.log. I'm at a loss.
What should I be looking for?
Thanks in advance.