[SOLVED] Proxmox Ceph Stuck at HEALTH_WARN

ermanishchawla

Well-Known Member
Mar 23, 2020
I am getting the following warning on a newly created Ceph cluster; it has been 8 hours since I created the new pool.

Code:
cluster:
    id:     1411ea40-86a9-4ff5-aabf-74213d6e0785
    health: HEALTH_WARN
            Degraded data redundancy: 1030 pgs undersized
 
  services:
    mon: 8 daemons, quorum inc1pve17,inc1pve18,inc1pve19,inc1pve20,inc1pve21,inc1pve22,inc1pve23,inc1pve24 (age 5h)
    mgr: inc1pve23(active, since 5h), standbys: inc1pve20, inc1pve22, inc1pve21, inc1pve24, inc1pve18, inc1pve17, inc1pve19
    osd: 32 osds: 32 up (since 4h), 32 in (since 4h); 1018 remapped pgs
 
  data:
    pools:   1 pools, 2048 pgs
    objects: 0 objects, 0 B
    usage:   33 GiB used, 56 TiB / 56 TiB avail
    pgs:     1030 active+undersized
             1018 active+clean+remapped

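For anyone hitting the same state: `active+undersized` PGs generally mean CRUSH cannot place enough replicas to satisfy the pool's size. A few generic diagnostic commands (not from the original post; `<pool>` is a placeholder for your pool name):

```shell
# Inspect why PGs stay undersized (run on any monitor node).
ceph health detail               # which PGs are affected and why
ceph osd pool get <pool> size    # replica count the pool expects
ceph osd tree                    # host/OSD layout as CRUSH sees it
ceph pg dump_stuck undersized    # list PGs stuck below full size
```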
Output of pveversion -v:
Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
 
The problem was solved by reapplying the CRUSH map with the following command:

Code:
ceph osd setcrushmap -i crush2.bin

where crush2.bin is the CRUSH map I had downloaded earlier.
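For reference, a sketch of the full CRUSH map round-trip (these are standard Ceph/crushtool commands; the filenames just match the `crush2.bin` used above):

```shell
# Export, inspect, and re-apply the CRUSH map.
ceph osd getcrushmap -o crush2.bin      # export the compiled map
crushtool -d crush2.bin -o crush2.txt   # decompile to plain text for review
crushtool -c crush2.txt -o crush2.bin   # recompile after any edits
ceph osd setcrushmap -i crush2.bin      # re-apply (the step that fixed this thread)
```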
 
Seems solved. If so, please mark the thread as solved. Thanks.