Add new OSD to existing Ceph pool

np-prxmx

Member since May 11, 2020
Hi all,
I have a 4-node Proxmox cluster with Ceph; only three of the nodes are monitors. Each node has 3 SSDs and 2 (spinning) HDDs, and there are two pools: one for the SSDs and one for the HDDs. I am now adding one OSD per node to the existing HDD pool, but it is taking longer than I expected. This is the log, and I can't tell from it whether the OSD is already ready or still being set up:

create OSD on /dev/sdg (bluestore)
wiping block device /dev/sdg
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.901488 s, 233 MB/s
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 16fbdc99-cf4b-4c80-b330-1117b0177ade
Running command: /sbin/vgcreate --force --yes ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2 /dev/sdg
stdout: Physical volume "/dev/sdg" successfully created.
stdout: Volume group "ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2" successfully created
Running command: /sbin/lvcreate --yes -l 572317 -n osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2
stdout: Logical volume "osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-20
--> Executable selinuxenabled not in PATH: /sbin:/bin:/usr/sbin:/usr/bin
Running command: /bin/chown -h ceph:ceph /dev/ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2/osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade
Running command: /bin/chown -R ceph:ceph /dev/dm-10
Running command: /bin/ln -s /dev/ceph-06e4c9c4-2b03-4dfc-844f-1a272c8dcab2/osd-block-16fbdc99-cf4b-4c80-b330-1117b0177ade /var/lib/ceph/osd/ceph-20/block
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-20/activate.monmap
stderr: 2022-01-24T16:52:23.535+0100 7f511a933700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.bootstrap-osd.keyring: (2) No such file or directory
2022-01-24T16:52:23.535+0100 7f511a933700 -1 AuthRegistry(0x7f511405b2e8) no keyring found at /etc/pve/priv/ceph.client.bootstrap-osd.keyring, disabling cephx
stderr: got monmap epoch 5
Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-20/keyring --create-keyring --name osd.20 --add-key AQA1y+5hSFrGOBAAb83LeZcEjVUwy3+4XOeCxw==
stdout: creating /var/lib/ceph/osd/ceph-20/keyring
added entity osd.20 auth(key=AQA1y+5hSFrGOBAAb83LeZcEjVUwy3+4XOeCxw==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-20/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-20/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 20 --monmap /var/lib/ceph/osd/ceph-20/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-20/ --osd-uuid 16fbdc99-cf4b-4c80-b330-1117b0177ade --setuser ceph --setgroup ceph
stderr: 2022-01-24T16:52:23.887+0100 7f50ff769f00 -1 bluestore(/var/lib/ceph/osd/ceph-20/) _read_fsid unparsable uuid

--> ceph-volume lvm prepare successful for: /dev/sdg



And if I check "ceph osd tree", I see these OSDs marked "out".
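For reference, when ceph-volume lvm prepare finishes but the daemon never starts, activation can usually be completed in one step. This is only a sketch, using OSD id 20 and the fsid from the log above (the guard at the top makes the snippet a no-op on machines without Ceph installed):

```shell
# Sketch only: activate an OSD that 'ceph-volume lvm prepare' set up
# but that never started. Run this on the node that owns the disk.
command -v ceph-volume >/dev/null 2>&1 || { echo "run this on a Ceph node"; exit 0; }

# OSD id and fsid are taken from the prepare log above; adjust as needed.
ceph-volume lvm activate 20 16fbdc99-cf4b-4c80-b330-1117b0177ade

# Confirm the daemon came up and the OSD shows 'up' in the tree:
systemctl status ceph-osd@20.service
ceph osd tree
```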

Thanks
 

mira

Proxmox Staff Member since Aug 1, 2018
Please provide the output of the commands pveversion -v, ceph osd tree and ceph -s.
 

np-prxmx

ok.
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
ceph: 16.2.7
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
#################################


ceph -s
cluster:
id: c3ac1aa6-58e8-44f4-8c18-248bb49d065f
health: HEALTH_WARN
3 nearfull osd(s)
Low space hindering backfill (add storage if this doesn't resolve itself): 2 pgs backfill_toofull
2 pool(s) nearfull

services:
mon: 3 daemons, quorum PX01,PX02,PX03 (age 6d)
mgr: PX04(active, since 6d), standbys: PX02, PX03, PX01
osd: 22 osds: 20 up (since 4h), 20 in (since 4h); 10 remapped pgs

data:
pools: 3 pools, 768 pgs
objects: 1.78M objects, 6.8 TiB
usage: 27 TiB used, 12 TiB / 38 TiB avail
pgs: 54190/7128792 objects misplaced (0.760%)
758 active+clean
7 active+remapped+backfill_wait
2 active+remapped+backfill_wait+backfill_toofull
1 active+remapped+backfilling

io:
client: 6.5 MiB/s rd, 7.3 MiB/s wr, 314 op/s rd, 267 op/s wr
recovery: 19 MiB/s, 4 objects/s

#######################################################

ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 40.60794 root default
-3 9.60619 host PX01
3 hdd 2.18320 osd.3 up 1.00000 1.00000
4 hdd 2.18320 osd.4 up 1.00000 1.00000
0 ssd 1.74660 osd.0 up 1.00000 1.00000
1 ssd 1.74660 osd.1 up 1.00000 1.00000
2 ssd 1.74660 osd.2 up 1.00000 1.00000
-7 9.60619 host PX02
8 hdd 2.18320 osd.8 up 1.00000 1.00000
9 hdd 2.18320 osd.9 up 1.00000 1.00000
5 ssd 1.74660 osd.5 up 1.00000 1.00000
6 ssd 1.74660 osd.6 up 1.00000 1.00000
7 ssd 1.74660 osd.7 up 1.00000 1.00000
-10 11.78938 host PX03
13 hdd 2.18320 osd.13 up 1.00000 1.00000
14 hdd 2.18320 osd.14 up 1.00000 1.00000
21 hdd 2.18320 osd.21 down 0 1.00000
10 ssd 1.74660 osd.10 up 1.00000 1.00000
11 ssd 1.74660 osd.11 up 1.00000 1.00000
12 ssd 1.74660 osd.12 up 1.00000 1.00000
-13 9.60619 host PX04
18 hdd 2.18320 osd.18 up 1.00000 1.00000
19 hdd 2.18320 osd.19 up 1.00000 1.00000
15 ssd 1.74660 osd.15 up 1.00000 1.00000
16 ssd 1.74660 osd.16 up 1.00000 1.00000
17 ssd 1.74660 osd.17 up 1.00000 1.00000
20 hdd 0 osd.20 down 0 1.00000
 

mira

Is the OSD service running? systemctl status ceph-osd@20.service
Please provide the logs (/var/log/ceph/ceph.log, /var/log/ceph/ceph-osd.20.log) in addition to the output of the command above.
 

np-prxmx

No, the service isn't running, because the procedure got stuck during LVM preparation.

ceph.log is empty
 

Attachments

  • ceph-osd.20.log (581 KB)
  • ceph.log.1.gz (570.1 KB)

mira

That log is from December 17th, so no info regarding OSD 20.
Please provide the current one from that node.

And please provide the output of systemctl status ceph-osd@20.service

Please also provide the journal from that time (journalctl --since "2022-01-<DD> 00:00:00") if you have persistent journal enabled.
Otherwise please provide the syslog from that day (/var/log/syslog[.*]).
 

np-prxmx

Hello, the process was stuck this morning too, so I executed these commands:
/usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-21
/usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-edd10f0c-a2fd-40a6-bf2a-5163938cbc40/osd-block-5c13285a-e126-4de9-9e93-4587b1288f6f --path /var/lib/ceph/osd/ceph-21 --no-mon-config
/usr/bin/ln -snf /dev/ceph-edd10f0c-a2fd-40a6-bf2a-5163938cbc40/osd-block-5c13285a-e126-4de9-9e93-4587b1288f6f /var/lib/ceph/osd/ceph-21/block
/usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-21/block
/usr/bin/chown -R ceph:ceph /dev/dm-10
/usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-21
/usr/bin/systemctl enable ceph-volume@lvm-21-5c13285a-e126-4de9-9e93-4587b1288f6f
/usr/bin/systemctl enable --runtime ceph-osd@21
/usr/bin/systemctl start ceph-osd@21


These commands were for another disk (osd.21) in the same cluster.
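For comparison, the manual steps above (prime-osd-dir, chown, block symlink, enabling and starting the units) are essentially what a single ceph-volume call performs. A sketch, assuming OSD id 21 and the fsid from the commands above, guarded so it is a no-op on machines without Ceph:

```shell
# Sketch: 'ceph-volume lvm activate' primes the OSD dir, fixes ownership,
# creates the block symlink, and enables/starts the systemd units --
# the same steps that were run by hand above.
command -v ceph-volume >/dev/null 2>&1 || { echo "run this on a Ceph node"; exit 0; }

ceph-volume lvm activate 21 5c13285a-e126-4de9-9e93-4587b1288f6f

# Or, to activate every prepared-but-inactive OSD on the node:
ceph-volume lvm activate --all
```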
 
