I have created a new 3-node Ceph cluster. Each node has 4 HDDs for OSDs. On the first two nodes I was able to create the managers and monitors, and all of the disks were added as OSDs without issue. However, on the third node I get this error when I try to create an OSD on any of the disks:
create OSD on /dev/sdc (bluestore)
wiping block device /dev/sdc
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.71543 s, 122 MB/s
--> UnboundLocalError: cannot access local variable 'device_slaves' where it is not associated with a value
TASK ERROR: command 'ceph-volume lvm create --cluster-fsid 9d027247-0375-4492-bbe6-fc65fc7097fb --crush-device-class hdd --data /dev/sdc' failed: exit code 1
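From what I understand, an UnboundLocalError in Python means a variable was referenced on a code path where it was never assigned, usually because the assignment only happens inside a conditional branch that did not run on this particular node. A minimal sketch of that pattern (illustrative only, not the actual ceph-volume source):

#!/usr/bin/env python3
# Illustration of the error class only -- NOT the ceph-volume code itself.
def lookup_slaves(path_exists: bool):
    if path_exists:
        device_slaves = ["dm-0"]   # variable is only bound on this branch
    return device_slaves           # UnboundLocalError if the branch was skipped

lookup_slaves(False)
# UnboundLocalError: cannot access local variable 'device_slaves'
# where it is not associated with a value

That would also explain why the error is identical on every disk in the node: it looks like a code path issue rather than anything wrong with an individual drive.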
I was able to create the monitor and manager on this node, and both Ceph and the Proxmox cluster are showing OK. I have tried restarting this node and still cannot get it to create OSDs. I even did a full rebuild of all three nodes and hit the same problem when adding OSDs on this third node. It is also identical to the second node, with all hardware matching, so it should not be a hardware issue; on the previous setup I removed the OSDs from the second node, swapped the disks between the second and third nodes, and was able to add the disks that were then in the second node without an issue.
I should also note that on this third node I am able to wipe the disks and create a ZFS array on them, so as far as I can tell they are writable and working properly otherwise. I also tried running ceph-volume lvm zap /dev/sdc --destroy on each of the disks in the third node and got the same error: UnboundLocalError: cannot access local variable 'device_slaves' where it is not associated with a value
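Since the variable name device_slaves looks like it could refer to the kernel's /sys/class/block/<dev>/slaves entries, I put together a small script to compare what those directories (and the matching holders directories) contain on the second and third node, in case the device layout differs between them. This is only my guess at what ceph-volume might be reading; I have not confirmed it in its source:

#!/usr/bin/env python3
# Dump the slaves/holders sysfs links for every block device so the
# output can be diffed between node 2 and node 3.
# Assumption: ceph-volume's "device_slaves" relates to these sysfs
# entries -- unverified.
import os

SYS_BLOCK = "/sys/class/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    for rel in ("slaves", "holders"):
        path = os.path.join(SYS_BLOCK, dev, rel)
        entries = sorted(os.listdir(path)) if os.path.isdir(path) else []
        print(f"{dev:12s} {rel:8s} {entries}")

Running it on both nodes and diffing the output would at least show whether the third node's disks look different to the kernel, but so far I have not spotted anything obvious.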