[SOLVED] ceph pools status unknown

Oct 23, 2020
Hi everyone! I have 4 nodes in the cluster and I've successfully created 2 Ceph storage pools, one backed by SSDs and one by HDDs. After mapping the PGs I can see the volume of the storages, but their status is unknown. I used this tutorial.
At the moment I can create VMs on these pools, and the Ceph status is green without alerts.

My CRUSH map rules:
Code:
# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_hdd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_ssd {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map
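For reference, device-class rules like replicated_hdd and replicated_ssd above don't have to be written by hand-editing the CRUSH map; Ceph can generate them with create-replicated. A sketch of the equivalent commands (the pool name vm_ssd is just an example, not from the thread):
Code:
# Syntax: ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Assign a rule to an existing pool (example pool name):
ceph osd pool set vm_ssd crush_rule replicated_ssd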

System
Kernel Version: Linux 5.4.128-1-pve #1 SMP PVE 5.4.128-1 (Wed, 21 Jul 2021 18:32:02 +0200)
PVE Manager Version: pve-manager/6.4-13/9f411e79
 
I faced the same issue a few days ago with a reinstalled node.
After executing @lDemoNl's solution
Code:
systemctl restart pvestatd.service
the status went from unknown to available :)
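If the status stays unknown after the restart, a few standard commands (run on any cluster node) can help tell whether Ceph itself or only the Proxmox status daemon is unhappy:
Code:
# Overall Ceph cluster health and per-pool details
ceph -s
ceph osd pool ls detail

# Proxmox's view of the configured storages
pvesm status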
 
