Ceph storage: adding a new host with a new OSD to the pool fails

felixheilig

Member
Jul 6, 2021
Hello everyone,
I'm running into the following scenario:
Ceph with 3 hosts, all with the same configuration, each providing a 2 TB SSD as an OSD, plus a 1 TB Ceph pool on top.

>> It works so far.

Now I wanted to add a fourth host with an OSD to the Ceph pool, with the same hardware as the other three hosts.
Not to expand the storage, just for more redundancy.
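As far as I can tell this is a standard replicated pool; to double-check the replica count I would look at the pool settings (pool name Cehp-Pool taken from the ceph df output further down):

Code:
ceph osd pool get Cehp-Pool size       # number of replicas the pool keeps
ceph osd pool get Cehp-Pool min_size   # minimum replicas required for I/O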

Proxmox installed, host added to the Proxmox cluster.
Installed Ceph, added the new host as a monitor and standby manager, and created/added the OSD disk to the Ceph cluster.
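For reference, the equivalent CLI steps on the new node would roughly be the following (the device name is just a placeholder):

Code:
pveceph install              # install the Ceph packages on the new node
pveceph mon create           # add the node as a monitor
pveceph mgr create           # add the node as a (standby) manager
pveceph osd create /dev/sdX  # create the OSD on the 2 TB SSD (placeholder device name)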

All good so far.

But: after about 5 minutes, the status of the new OSD changes to down/in.

What am I doing wrong here?
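In case it helps, this is what I can check on the new node (I'm assuming the new OSD got the ID 3 here):

Code:
ceph osd tree                    # where the new OSD sits in the CRUSH map and its up/down state
systemctl status ceph-osd@3      # service state of the OSD daemon (assuming OSD ID 3)
journalctl -u ceph-osd@3 -n 100  # last log lines of that OSD daemon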

Output of pveversion -v:
Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph: 17.2.6-pve1
ceph-fuse: 17.2.6-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Output of ceph -s:
Code:
  cluster:
    id:     bcff2142-9dbd-4ed3-9ea9-89b0b4b97ba0
    health: HEALTH_WARN
            Degraded data redundancy: 31744/240711 objects degraded (13.188%), 59 pgs degraded
 
  services:
    mon: 4 daemons, quorum Kamino04,Kamino05,Kamino06,Kamino07 (age 29m)
    mgr: Kamino04(active, since 2w), standbys: Kamino05, Kamino06, Kamino07
    osd: 4 osds: 3 up (since 10m), 3 in (since 30s); 59 remapped pgs
 
  data:
    pools:   4 pools, 193 pgs
    objects: 80.24k objects, 311 GiB
    usage:   808 GiB used, 4.8 TiB / 5.6 TiB avail
    pgs:     31744/240711 objects degraded (13.188%)
             134 active+clean
             31  active+undersized+degraded+remapped+backfill_wait
             28  active+undersized+degraded+remapped+backfilling
 
  io:
    client:   341 B/s rd, 36 KiB/s wr, 0 op/s rd, 4 op/s wr
    recovery: 778 MiB/s, 197 objects/s
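To see exactly which OSD is down and which PGs are affected, I would also look at:

Code:
ceph health detail   # lists the degraded PGs and the reason for the warning
ceph osd tree down   # shows only the OSDs that are currently down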

Output of ceph df:
Code:
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
ssd    5.6 TiB  4.8 TiB  843 GiB   843 GiB      14.73
TOTAL  5.6 TiB  4.8 TiB  843 GiB   843 GiB      14.73
 
--- POOLS ---
POOL             ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr              1    1  1.3 MiB        2  3.8 MiB      0    2.0 TiB
cephfs_data       2   32      0 B        0      0 B      0    2.0 TiB
cephfs_metadata   3   32   33 KiB       22  216 KiB      0    2.0 TiB
Cehp-Pool         4  128  382 GiB   80.21k  1.0 TiB  14.65    2.2 TiB
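And per-OSD usage/weights, to check whether the new OSD was added with a sensible CRUSH weight:

Code:
ceph osd df tree   # utilization, weight and up/down status per OSD, grouped by host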
 

Attachments

  • OSD_Ceph_warn.png (36.4 KB)
  • ceph.log (75.4 KB)
