Problem adding a new OSD in Ceph

arleinalesso

New Member
Jan 26, 2023
Hi,

I'm having trouble adding a new OSD to an existing Ceph cluster.

-----------------------------------------------------------------------------------------------------------------

pveversion -v

proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
pve-manager: 6.4-15 (running version: 6.4-15/af7986e6)
pve-kernel-5.4: 6.4-20
pve-kernel-helper: 6.4-20
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.195-1-pve: 5.4.195-1
pve-kernel-5.4.189-2-pve: 5.4.189-2
pve-kernel-5.4.189-1-pve: 5.4.189-1
pve-kernel-5.4.178-1-pve: 5.4.178-1
pve-kernel-5.4.174-2-pve: 5.4.174-2
pve-kernel-5.4.166-1-pve: 5.4.166-1
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph: 14.2.22-pve1
ceph-fuse: 14.2.22-pve1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-5
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.14-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-2
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1

-----------------------------------------------------------------------------------------------------------------

### The problem is with disk /dev/sdf (osd.17)



root@pve04:~# ceph-volume lvm list


====== osd.15 ======

[block] /dev/ceph-58504d42-b7ef-4027-888a-78fc89c35c98/osd-block-ce7da8a3-7d42-43c3-9ce8-112e22533edb

block device /dev/ceph-58504d42-b7ef-4027-888a-78fc89c35c98/osd-block-ce7da8a3-7d42-43c3-9ce8-112e22533edb
block uuid QzT1AR-Z7M2-cBey-qMqO-vudd-1WN8-fRHp2D
cephx lockbox secret
cluster fsid b69edc1d-e8cc-4632-8569-6b24a5a7187f
cluster name ceph
crush device class None
encrypted 0
osd fsid ce7da8a3-7d42-43c3-9ce8-112e22533edb
osd id 15
osdspec affinity
type block
vdo 0
devices /dev/sdd

====== osd.16 ======

[block] /dev/ceph-fba87e86-0740-4054-9fb1-6ca4aa544907/osd-block-693bdba9-303e-4e0f-93cc-8fe1adf9a034

block device /dev/ceph-fba87e86-0740-4054-9fb1-6ca4aa544907/osd-block-693bdba9-303e-4e0f-93cc-8fe1adf9a034
block uuid flmXtO-qzUH-fpej-bO5h-QTRc-2vnk-pU8N73
cephx lockbox secret
cluster fsid b69edc1d-e8cc-4632-8569-6b24a5a7187f
cluster name ceph
crush device class None
encrypted 0
osd fsid 693bdba9-303e-4e0f-93cc-8fe1adf9a034
osd id 16
osdspec affinity
type block
vdo 0
devices /dev/sde

====== osd.17 ======

[block] /dev/ceph-99e24735-9a8c-44ef-bbc7-4da4830b54c9/osd-block-10b715cb-2232-4a09-a362-2a4a61e41894

block device /dev/ceph-99e24735-9a8c-44ef-bbc7-4da4830b54c9/osd-block-10b715cb-2232-4a09-a362-2a4a61e41894
block uuid HRoyNl-uM4T-376I-LezC-P70v-O2B5-O5de0Q
cephx lockbox secret
cluster fsid b69edc1d-e8cc-4632-8569-6b24a5a7187f
cluster name ceph
crush device class None
encrypted 0
osd fsid 10b715cb-2232-4a09-a362-2a4a61e41894
osd id 17
osdspec affinity
type block
vdo 0
devices /dev/sdf

====== osd.7 =======

[block] /dev/ceph-3114ce25-e86c-4d05-843f-be4515f73c91/osd-block-b3be2a23-3a6e-44b2-aaf6-827410177c3b

block device /dev/ceph-3114ce25-e86c-4d05-843f-be4515f73c91/osd-block-b3be2a23-3a6e-44b2-aaf6-827410177c3b
block uuid w6QvId-vySY-1NM0-tFyx-FhcQ-mGZV-2fFLiw
cephx lockbox secret
cluster fsid b69edc1d-e8cc-4632-8569-6b24a5a7187f
cluster name ceph
crush device class None
encrypted 0
osd fsid b3be2a23-3a6e-44b2-aaf6-827410177c3b
osd id 7
osdspec affinity
type block
vdo 0
devices /dev/sdb

====== osd.8 =======

[block] /dev/ceph-01fc6ccb-6f63-434c-86ba-dd426a76fd7a/osd-block-9121ab3d-465e-4c10-aa42-db3cf24659f3

block device /dev/ceph-01fc6ccb-6f63-434c-86ba-dd426a76fd7a/osd-block-9121ab3d-465e-4c10-aa42-db3cf24659f3
block uuid QqJG73-JqLd-u2D2-Amwv-pSmq-sLhM-2JSOeH
cephx lockbox secret
cluster fsid b69edc1d-e8cc-4632-8569-6b24a5a7187f
cluster name ceph
crush device class None
encrypted 0
osd fsid 9121ab3d-465e-4c10-aa42-db3cf24659f3
osd id 8
osdspec affinity
type block
vdo 0
devices /dev/sdc


-----------------------------------------------------------------------------------------------------------------

root@pve04:~# pveceph osd create /dev/sdf
device '/dev/sdf' is already in use
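
If it helps, the disk can also be checked with plain lvm2 tools; the VG name below is simply copied from the ceph-volume listing above (I can paste the actual output too if needed):

lsblk /dev/sdf                                   # should show the leftover ceph LV sitting on top of sdf
vgs ceph-99e24735-9a8c-44ef-bbc7-4da4830b54c9    # the VG that ceph-volume reports for osd.17
lvs -o lv_name,vg_name,devices | grep sdf        # the old osd-block LV still mapped to /dev/sdf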

-----------------------------------------------------------------------------------------------------------------

root@pve04:~# ceph osd rm 17
osd.17 does not exist.
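
So ceph-volume still thinks there is an osd.17 on /dev/sdf, but the cluster itself no longer knows osd.17, which is probably why pveceph refuses the disk. Would something along these lines be the right cleanup? Untested on my side, and it assumes /dev/sdf holds nothing but the leftover osd.17 data, since zap wipes the whole device:

ceph-volume lvm zap /dev/sdf --destroy    # remove the stale LV/VG and wipe the device
ceph osd crush remove osd.17              # in case a crush entry is still left over
ceph auth del osd.17                      # drop a possible leftover auth key for osd.17
pveceph osd create /dev/sdf               # then retry creating the OSD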

-----------------------------------------------------------------------------------------------------------------

Thanks, guys, for your help.