Ceph OSD creation error

mbeard

New Member
Nov 21, 2023
Setting up Ceph on a three-node cluster; all three nodes are fresh hardware with fresh PVE installs. I'm getting an error on all three nodes when trying to create the OSDs, either via the GUI or the CLI.

create OSD on /dev/sdc (bluestore)
wiping block device /dev/sdc
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.52456 s, 400 MB/s
--> UnboundLocalError: cannot access local variable 'device_slaves' where it is not associated with a value
TASK ERROR: command 'ceph-volume lvm create --cluster-fsid 4691d91a-6fd9-42b1-bab9-5f9042b21925 --crush-device-class ssd --data /dev/sdc' failed: exit code 1



I can access all four drives on each node, but even after multiple reinstalls and fresh starts I'm still stuck trying to create the OSDs. I saw a previous post from 11/13/23 that was marked solved by playing musical drives, however that approach doesn't work on any of my nodes. Any help that can be provided would be appreciated. I could not find any other results for Proxmox or Ceph with this error.



//Ceph.conf


[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.10.1.10/24
fsid = 4691d91a-6fd9-42b1-bab9-5f9042b21925
mon_allow_pool_delete = true
mon_host = 10.10.1.10
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.10.1.10/24

[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

[mon.node1]
public_addr = 10.10.1.10
 
Can you post the output of:

* lsblk
* Did you wipe and initialize the disk with GPT before creating the OSD?
 
Hi!
I have a fresh Proxmox/Ceph installation and I get the same error on all nodes, for every OSD I try to initialize.

# ceph-volume lvm create --cluster-fsid c1d17835-20a8-4441-b221-096e9135fca4 --data /dev/sdb2
--> UnboundLocalError: cannot access local variable 'device_slaves' where it is not associated with a value

Looks like a variable problem in the Python script...
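
For context, that exception is what Python raises when a function reads a local name before any assignment to it has executed (the exact wording matches Python 3.11, which PVE 8 / Debian 12 ships; older versions said "referenced before assignment"). A minimal, unrelated example of the same failure mode:

# Toy example (not ceph-volume code): a local variable assigned only inside a
# branch raises UnboundLocalError when the branch is skipped and the variable
# is read anyway.
def pick(flag):
    if flag:
        value = "assigned"
    return value  # fails when flag is False

pick(True)   # fine
pick(False)  # UnboundLocalError: cannot access local variable 'value' ...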


ceph version 17.2.7 (e303afc2e967a4705b40a7e5f76067c10eea0484) quincy (stable)

ii ceph 17.2.7-pve1 amd64 distributed storage and file system
ii ceph-base 17.2.7-pve1 amd64 common ceph daemon libraries and management tools
ii ceph-common 17.2.7-pve1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-fuse 17.2.7-pve1 amd64 FUSE-based client for the Ceph distributed file system
ii ceph-mds 17.2.7-pve1 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 17.2.7-pve1 amd64 manager for the ceph distributed storage system
ii ceph-mgr-modules-core 17.2.7-pve1 all ceph manager modules which are always enabled
ii ceph-mon 17.2.7-pve1 amd64 monitor server for the ceph storage system
ii ceph-osd 17.2.7-pve1 amd64 OSD server for the ceph storage system
ii ceph-volume 17.2.7-pve1 all tool to facilitate OSD deployment
ii libcephfs2 17.2.7-pve1 amd64 Ceph distributed file system client library
ii libsqlite3-mod-ceph 17.2.7-pve1 amd64 SQLite3 VFS for Ceph
ii python3-ceph-argparse 17.2.7-pve1 all Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 17.2.7-pve1 all Python 3 utility libraries for Ceph
ii python3-cephfs 17.2.7-pve1 amd64 Python 3 libraries for the Ceph libcephfs library

ii pve-cluster 8.0.1 amd64 "pmxcfs" distributed cluster filesystem for Proxmox Virtual Environment.
ii pve-container 5.0.3 all Proxmox VE Container management tool
ii pve-docs 8.0.3 all Proxmox VE Documentation
ii pve-edk2-firmware 3.20230228-4 all edk2 based UEFI firmware modules for virtual machines
ii pve-firewall 5.0.2 amd64 Proxmox VE Firewall
ii pve-firmware 3.7-1 all Binary firmware code for the pve-kernel
ii pve-ha-manager 4.0.2 amd64 Proxmox VE HA Manager
ii pve-i18n 3.0.4 all Internationalization support for Proxmox VE
ii pve-kernel-6.2 8.0.2 all Latest Proxmox VE Kernel Image
ii pve-kernel-6.2.16-3-pve 6.2.16-3 amd64 Proxmox Kernel Image
ii pve-lxc-syscalld 1.3.0 amd64 PVE LXC syscall daemon
ii pve-manager 8.0.3 amd64 Proxmox Virtual Environment Management Tools
ii pve-qemu-kvm 8.0.2-3 amd64 Full virtualization on x86 hardware
ii pve-xtermjs 4.16.0-3 amd64 HTML/JS Shell client for Proxmox projects
 
I think the problem is here; maybe a Python version change altered the variable handling...

/usr/lib/python3/dist-packages/ceph_volume/util/disk.py

I added the line device_slaves=False, because line 889 compares a variable that is never initialized on this code path.
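
For reference, the shape of the problem and of the workaround looks roughly like this (a simplified sketch on my part, not the exact upstream source; everything except the device_slaves name is illustrative):

# Simplified sketch of the pattern in ceph_volume/util/disk.py (Quincy 17.2.7);
# function and variable names other than device_slaves are made up.
import os

def collect_metadata(sysdir, diskname, is_mapper_device):
    metadata = {}

    device_slaves = False  # workaround: initialize before the later comparison

    # In the affected code, device_slaves is only assigned inside a conditional
    # branch like this one...
    if is_mapper_device:
        slaves_dir = os.path.join(sysdir, 'slaves')
        if os.path.isdir(slaves_dir):
            device_slaves = os.listdir(slaves_dir)

    # ...but it is read unconditionally afterwards (around line 889). Without
    # the initialization above, a device that skips the branch triggers
    # "UnboundLocalError: cannot access local variable 'device_slaves'".
    if device_slaves:
        metadata['device_nodes'] = ','.join(device_slaves)
    else:
        metadata['device_nodes'] = diskname

    return metadata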

[screenshot: the edited disk.py around line 889]

And now the OSD initializes fine.
[screenshot: successful OSD creation]
 
I have tried both with a wipe and GPT init and without; no change either way.

Here is the lsblk output:

sda 8:0 1 223.6G 0 disk
├─sda1 8:1 1 1007K 0 part
├─sda2 8:2 1 1G 0 part
└─sda3 8:3 1 222.6G 0 part
sdb 8:16 1 223.6G 0 disk
├─sdb1 8:17 1 1007K 0 part
├─sdb2 8:18 1 1G 0 part
└─sdb3 8:19 1 222.6G 0 part
sdc 8:32 1 1.7T 0 disk
sdd 8:48 1 1.7T 0 disk
sde 8:64 1 1.7T 0 disk
sdf 8:80 1 1.7T 0 disk
root@node3:~#

sdc-sdf are my four OSD drives, connected directly to the motherboard's mini-SAS HD port on a ROMED6U-2L2T board, with no hardware RAID in front of them.


Interestingly enough, last night I threw a random NVMe drive onto the board, did nothing to it at all, and magically it let me create OSDs. I did a fresh install again to see if I could troubleshoot the problem further, and I agree with Deepdivenow that it's related to that section of code calling for device_slaves before it's defined.

Which raises an interesting question: why did adding an NVMe drive change anything?
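
Purely a guess on my part, but it would fit the symptoms: in Python, a name assigned in one iteration of a loop stays bound for the rest of that function call, so if some other block device happens to take the branch that assigns device_slaves before the plain disks are processed, the later unconditional read no longer blows up. A toy illustration (not ceph-volume code; device names and the takes_branch flag are made up, only the scoping behaviour is the point):

def scan(devices):
    seen = []
    for name, takes_branch in devices:
        if takes_branch:
            device_slaves = []                    # assigned only on this path
        seen.append((name, bool(device_slaves)))  # read unconditionally
    return seen

scan([("dm-0", True), ("sdc", False)])   # works: dm-0 leaves the name bound
scan([("sdc", False), ("dm-0", True)])   # UnboundLocalError on the first item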


Editing the .py file and adding device_slaves=False also worked for me.
 
I just had this issue for the first time as well (I've already installed dozens of hosts).

This patch did the trick for me as well!
 
