I have just installed Proxmox 6.0 beta on a 3-node cluster.
I have set up the cluster, and also set up Ceph Managers/Monitors on each node.
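For reference, the monitors and managers were created with the stock pveceph tooling - roughly the following on each node (quoting from memory, and the subnet here is just an example value, not my actual network):

Code:
pveceph init --network 192.168.1.0/24   # first node only; subnet is an example value
pveceph createmon                        # run on each of the 3 nodes
pveceph createmgr                        # run on each of the 3 nodes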
I’m now at the stage of creating OSDs - I’m using Intel Optane drives, which benefit from multiple OSDs per drive. However, when I run the command to create them on the first node, I get an error:
Code:
root@vwnode1:~# ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1
Total OSDs: 4
  Type            Path                                      LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/nvme0n1                              111.50 GB       25%
----------------------------------------------------------------------------------------------------
  [data]          /dev/nvme0n1                              111.50 GB       25%
----------------------------------------------------------------------------------------------------
  [data]          /dev/nvme0n1                              111.50 GB       25%
----------------------------------------------------------------------------------------------------
  [data]          /dev/nvme0n1                              111.50 GB       25%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) yes
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-7d521072-a146-4dcd-ba30-86b44fc1b0a6 /dev/nvme0n1
stdout: Physical volume "/dev/nvme0n1" successfully created.
stdout: Volume group "ceph-7d521072-a146-4dcd-ba30-86b44fc1b0a6" successfully created
Running command: /usr/sbin/lvcreate --yes -l 111 -n osd-data-ac34507a-d697-4366-a4ab-4bb4dcf96a5e ceph-7d521072-a146-4dcd-ba30-86b44fc1b0a6
stdout: Logical volume "osd-data-ac34507a-d697-4366-a4ab-4bb4dcf96a5e" created.
Running command: /usr/sbin/lvcreate --yes -l 111 -n osd-data-c735f58d-ab6f-40dd-bd13-0edd5d423803 ceph-7d521072-a146-4dcd-ba30-86b44fc1b0a6
stdout: Logical volume "osd-data-c735f58d-ab6f-40dd-bd13-0edd5d423803" created.
Running command: /usr/sbin/lvcreate --yes -l 111 -n osd-data-40fb4974-d072-447d-b123-0ce2f4f69af6 ceph-7d521072-a146-4dcd-ba30-86b44fc1b0a6
stdout: Logical volume "osd-data-40fb4974-d072-447d-b123-0ce2f4f69af6" created.
Running command: /usr/sbin/lvcreate --yes -l 111 -n osd-data-a00a4dff-b131-4e45-abcb-68e54b98c196 ceph-7d521072-a146-4dcd-ba30-86b44fc1b0a6
stdout: Logical volume "osd-data-a00a4dff-b131-4e45-abcb-68e54b98c196" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ebe32748-be47-41d5-88fb-d500852c297d
stderr: [errno 2] error connecting to the cluster
--> RuntimeError: Unable to create a new OSD id
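If it helps narrow things down, the failure is at the `osd new` step, so I assume the same [errno 2] should be reproducible by talking to the monitors directly with the bootstrap-osd keyring (same name and keyring path as in the log above):

Code:
# Should print cluster status; [errno 2] here would point at a missing
# ceph.conf or keyring file rather than at ceph-volume itself.
ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -s
ls -l /etc/ceph/ceph.conf /var/lib/ceph/bootstrap-osd/ceph.keyring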
Any ideas what’s going on?

(I did try to enable the Ceph dashboard before, but then had to disable it, as it was looking for some routes package).
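In case the dashboard attempt is related, this is roughly how I enabled and then backed out of it (the module commands are the standard ones; the missing package it complained about was something like python-routes, quoting from memory):

Code:
ceph mgr module enable dashboard    # this is the step that complained about the routes package
ceph mgr module disable dashboard   # disabled it again so the mgr would run cleanly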