ceph crush

  1. Ceph Stuck with active+clean+remapped pgs

    I'm just starting out with Ceph and don't know how to fix this. The PGs keep showing active+clean+remapped and it doesn't resolve over time. I only use the default replication rule for my pools. How do I fix this?
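
    A rough first diagnostic pass for this kind of state (not a guaranteed fix, and the PG id below is only a placeholder) is to check which PGs are remapped and whether the OSDs are unevenly weighted:

      # overall health and the affected placement groups
      ceph health detail
      ceph pg ls remapped
      # per-OSD utilisation and CRUSH weights
      ceph osd df tree
      # detailed state of a single PG (replace 2.1a with a real PG id)
      ceph pg 2.1a query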
  2. Ceph Health Warning

    I am running a cluster with 6 nodes on Proxmox VE 5.4-3. Suddenly Ceph threw a health warning and now the whole cluster is unusable. I am not quite sure why and haven't found similar info on the site. Can anyone help, please?
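
    As a starting point (not a fix in itself), it usually helps to capture the exact warning text first; on a Proxmox VE node the following read-only commands show it without changing anything:

      # cluster-wide status and the full warning text
      ceph -s
      ceph health detail
      # which OSDs or monitors are down, if any
      ceph osd tree
      # the Proxmox view of the same cluster
      pveceph status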
  3. Ceph OSD keeps failing!

    Hi, recent updates have made Ceph start to act very weird: we keep losing one OSD, with the following from syslog: 2020-10-17 04:28:21.922478 mon.n02-sxb-pve01 (mon.0) 912 : cluster [INF] osd.6 [v2:172.17.1.2:6814/308596,v1:172.17.1.2:6817/308596] boot 2020-10-17 04:28:23.919914...
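
    A minimal sketch of the usual checks for a flapping OSD, using osd.6 from the log above and assuming the standard systemd unit layout:

      # is the OSD currently up or down in the cluster map?
      ceph osd tree
      # service state and recent log of that OSD daemon
      systemctl status ceph-osd@6
      journalctl -u ceph-osd@6 -e
      # crash reports collected by the cluster (Nautilus and later)
      ceph crash ls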
  4. Ceph crush retention after reboot

    Hi all, I have a problem with my Ceph installation where my CRUSH map changes after a reboot. I have configured my CRUSH map like this (I am giving only the relevant part of the config): host px1 { item osd.1 weight 1.637 item osd.2 weight 1.637 item osd.3 weight 1.637 item osd.4 weight...
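
    One common cause (only an assumption about this particular setup) is that OSDs re-register their CRUSH location every time they start, overwriting hand-edited entries; that behaviour can be disabled in ceph.conf:

      # /etc/pve/ceph.conf (or /etc/ceph/ceph.conf)
      [osd]
      # stop OSDs from rewriting their CRUSH position on startup
      osd crush update on start = false

    After that, the edited CRUSH map has to be re-injected once, e.g. with ceph osd setcrushmap -i <compiled-map>.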
