[SOLVED] Errors while creating osd, "RuntimeError: Unable to create a new OSD id"

semper22

New Member
Sep 19, 2019
I've been trying to install Proxmox VE 6, and while creating an OSD on the second node (the first one worked fine) I get the following error when running "ceph-volume lvm create --cluster-fsid afd4e983-34bc-43ff-a118-ba7e1e947303 --data /dev/sda" on the command line:
Code:
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f35d95e5-37e0-48ba-a02b-ad29e2b8191e
 stderr: [errno 1] error connecting to the cluster
-->  RuntimeError: Unable to create a new OSD id

The same command is run by the GUI; when creating an OSD there, the task fails with:
Code:
TASK FAILED
command 'ceph-volume lvm create --cluster-fsid afd4e983-34bc-43ff-a118-ba7e1e947303 --data /dev/sda' failed: exit code 1

Where is the problem? I have no OSDs other than the 6 on the first node. I had some others before, but I purged pveceph, so I suppose they should not interfere...
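For reference, the failing step is the "ceph ... osd new" call that ceph-volume makes with the bootstrap-osd keyring, so a few generic checks on the affected node would be (just a diagnostic sketch, not something I claim to have run):
Code:
# does this node reach the cluster at all (uses the admin keyring)?
ceph -s

# is the bootstrap-osd keyring present here, and does it match what the monitors have?
cat /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph auth get client.bootstrap-osd

# any OSDs Ceph still knows about, and any leftover LVM-based OSDs on this node
ceph osd tree
ceph-volume lvm list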

Thanks,
Semper
 
Yes, my cluster is already set up, and here is my /etc/pve/ceph.conf:
Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         cluster_network = 10.0.0.0/24
         fsid = 388086c6-a8ac-422b-8429-ec006e10873d
         mon_allow_pool_delete = true
         mon_host = 10.0.0.2 10.0.0.3 10.0.0.4
         osd_pool_default_min_size = 2
         osd_pool_default_size = 3
         public_network = 10.0.0.0/24

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring
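One assumption worth verifying from the config above is that this node can actually reach the monitors listed in mon_host; a minimal connectivity check (assuming the default monitor ports):
Code:
# basic TCP reachability to each monitor
# (3300 = msgr2, 6789 = legacy msgr1; both are Ceph defaults)
for mon in 10.0.0.2 10.0.0.3 10.0.0.4; do
    nc -zv "$mon" 3300
    nc -zv "$mon" 6789
done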
and my /etc/pve/corosync.conf:
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.10.60
  }
  node {
    name: pve02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.10.61
  }
  node {
    name: pve03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.10.62
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: clusterLumos
  config_version: 3
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}
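(For what it's worth, Proxmox cluster membership and quorum can be confirmed with the standard check below; the corosync.conf above only shows the intended three nodes.)
Code:
# show corosync membership and quorum state on any node
pvecm status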
 
It also seems that Proxmox is starting OSDs that don't exist; you can see it in this picture: OSD error loading.jpg
I don't see any OSDs in 'ceph osd tree', so...
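If those phantom entries are leftovers from the OSDs that were purged earlier (that is only my assumption), stale service units and data directories would show up like this:
Code:
# leftover OSD services from a previous Ceph setup
systemctl list-units 'ceph-osd@*'
systemctl list-units 'ceph-volume@*'

# leftover OSD data directories
ls -l /var/lib/ceph/osd/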
 
I managed to find an answer to my problem. After running "pveceph purge" I reinstalled Ceph on each node and then created the monitors.
After that I quick-formatted the disks I wanted to use as OSDs and created an OSD manually, following the "Long Form" section of this tutorial: https://docs.ceph.com/docs/master/install/manual-deployment/#long-form (a rough outline of the commands is below). After creating an OSD this way you may notice that its size is much smaller than the disk's real size; in that case you can "umount" the disk, take the OSD out, and delete its folder under "/var/lib/ceph/osd/ceph-$ID" (this guide walks through the removal: http://fibrevillage.com/storage/235-add-remove-an-osd-to-ceph-cluster-manual). After that you should be able to create the OSD via the GUI without any errors; if not, format the disk again and retry.

PS: If the "uuidgen" command does not exist, install the "uuid-runtime" package: apt-get install uuid-runtime
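And for taking that temporary OSD out again before recreating it via the GUI (roughly the steps from the removal guide linked above; $ID is the OSD number):
Code:
# remove the temporary long-form OSD again
ceph osd out osd.$ID
systemctl stop ceph-osd@$ID
ceph osd crush remove osd.$ID
ceph auth del osd.$ID
ceph osd rm osd.$ID
umount /var/lib/ceph/osd/ceph-$ID
rm -rf /var/lib/ceph/osd/ceph-$ID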

I hope this will help!
 
