Ceph Storage status is showing unknown

areddy

New Member
Apr 3, 2021
We have an 11-node cluster, and the Ceph storage status has changed to "unknown". We need help bringing the Ceph storage back online.

aaron

Proxmox Staff Member
Staff member
Jun 3, 2019
Well, if you want people to help, you should provide some information :)

For example, what are the outputs of the following commands?
Code:
ceph -s
pvesm status

Dave Wood

Active Member
Jan 9, 2017
Hi,

I have a similar issue. I'm running Proxmox 6.4-1. Ceph and NFS storage are working but show an "unknown" status. Any help would be appreciated.
[Attached screenshot: Screenshot from 2021-05-31 12-32-14.png]

Code:
# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.114-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph: 15.2.11-pve1
ceph-fuse: 15.2.11-pve1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-2
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.6-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-5
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-3
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

Code:
# pvesm status
Name                   Type     Status           Total            Used       Available        %
ceph-hdd                rbd     active      2768744230        23429670      2745314560    0.85%
dell-nfs                nfs   inactive               0               0               0    0.00%
fusion-io-pve01         lvm     active      1117917184               0      1117917184    0.00%
fusion-io-pve02         lvm   disabled               0               0               0      N/A
fusion-io-pve03         lvm   disabled               0               0               0      N/A
local                   dir     active        34829920         4000892        29030068   11.49%
local-lvm           lvmthin     active        79896576               0        79896576    0.00%

# ceph -s
  cluster:
    id:     e2595d99-90bb-40da-9ff0-c4f1c2b429a4
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum pve03,pve01,pve02 (age 11h)
    mgr: pve03(active, since 11h)
    osd: 6 osds: 6 up (since 9h), 6 in (since 9h)
 
  data:
    pools:   1 pools, 32 pgs
    objects: 6.39k objects, 25 GiB
    usage:   51 GiB used, 5.4 TiB / 5.5 TiB avail
    pgs:     32 active+clean
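Since `ceph -s` reports HEALTH_OK while the GUI shows "unknown", and `pvesm status` lists dell-nfs as inactive, one possibility worth checking is that the pvestatd status daemon is blocked (for example, hanging on an unreachable NFS export), since it is what feeds the GUI status. A rough sketch of checks, using standard systemd/PVE commands; the NFS server address is a placeholder, not something from this thread:

```shell
# Check whether the status daemon is running and responsive on each node.
systemctl status pvestatd

# If it is hung, restarting it often brings the storage status back.
systemctl restart pvestatd

# Verify the NFS export is actually reachable from the node
# (replace <nfs-server> with your NFS server's hostname or IP).
showmount -e <nfs-server>
```

If pvestatd keeps stalling, the journal (`journalctl -u pvestatd`) should show which storage it is waiting on.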
 
