Here is the current tree from the crushmap:
# ceph osd crush tree --show-shadow
 ID CLASS WEIGHT   TYPE NAME
-12 ssd   10.47583 root default~ssd
 -9 ssd    2.61896     host cloud1~ssd
  3 ssd    0.87299         osd.3
  4 ssd    0.87299         osd.4
  5 ssd    0.87299...
If I understood correctly, the next steps should be taken:
1. ceph osd getcrushmap -o /tmp/mycrushmap
2. crushtool -d /tmp/mycrushmap > /tmp/mycrushmap.txt
3. Change ssd-pool-rule and hdd-pool-rule to use (step chooseleaf firstn 0 type host):
rule ssd-pool-rule {
id 1
type replicated...
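Once the rules are edited, the usual follow-up is to recompile the text map, sanity-check the placement it produces, and only then inject it into the cluster. A sketch (paths match the ones used above; the rule id 1 and `--num-rep 3` are assumptions to adapt to your pool's rule and size):

```shell
# Recompile the edited text map back into binary form
crushtool -c /tmp/mycrushmap.txt -o /tmp/mycrushmap.new

# Dry-run the edited rule: with "chooseleaf ... type host" no two
# replicas of a PG should map to OSDs on the same host
crushtool -i /tmp/mycrushmap.new --test --rule 1 --num-rep 3 --show-mappings

# If the mappings look correct, inject the new map into the cluster.
# Expect data movement: PGs will rebalance to satisfy the new rule.
ceph osd setcrushmap -i /tmp/mycrushmap.new
```

Doing the `--test` step before `setcrushmap` is the safe part: it lets you verify host-level separation offline, without triggering any rebalancing.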
Hello,
Could you please advise on how to safely change the replicas to be placed on different hosts instead of OSDs, for the following crush map (PVE 6.2):
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1...
Hello,
On some servers in the cloud I see this error while trying to check cephfs content:
mount error: exit code 16 (500)
I have the following package versions:
Hello,
I'd like to know if anyone here is running production-grade Kubernetes on Proxmox (either in KVM or in LXC). I found many articles on the web, but most of them are about small or test k8s clusters.
As far as I understood (please correct me if I'm wrong), Kubernetes needs some...
Hello,
While editing server notes on a clean install and trying to save the note, I always get this error:
unable to open file '/etc/pve/nodes/cloud1/config.tmp.1652' - Permission denied (500)
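A permission-denied error on a write under /etc/pve usually points at pmxcfs rather than ordinary file permissions: that path is a FUSE mount backed by the pve-cluster service, and it goes read-only when the node has no cluster quorum. A few hedged checks worth running (standard PVE commands; whether quorum is really the cause here is an assumption):

```shell
# /etc/pve is served by pmxcfs; confirm the service is healthy
systemctl status pve-cluster

# On a clustered node, check quorum -- without it /etc/pve is read-only
pvecm status

# Verify /etc/pve is actually mounted as the pmxcfs FUSE filesystem
mount | grep /etc/pve
```

On a standalone (non-clustered) node, a restart of pve-cluster after checking its journal (`journalctl -u pve-cluster`) is the usual next step.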