Search results

  1.

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    Big Thank You to @RokaKen! This issue is solved now: swapping in new SSDs made the ssd-pool responsive again, and correcting the CRUSH rules and assigning them to the respective pools enabled the full potential of the NVMe
  2.

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    { "mon": { "ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)": 3 }, "mgr": { "ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)": 3 }, "osd": { "ceph version 16.2.6...
  3.

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    Hi @aaron, ceph osd pool autoscale-status runs without any output.
  4.

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    As per $HISTFILE, the manual CRUSH rule for the SSD class was presumably built with ceph osd crush rule create-replicated repl-ssd default host ssd. So this basically means I create another CRUSH rule like so: ceph osd crush rule create-replicated repl-nvme default host nvme? And then edit the... (a hedged command sketch follows after these search results)
  5.

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    So thanks a ton, @RokaKen - this helps me understand the issue and link the clearly badly performing SSD to the problem with the NVMe pool (which apparently is not NVMe-only)! Will start by changing the suspected bad/faulty SSD. Yes, the plan was to have strictly separated pools for the...
  6.

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    Hi @RokaKen, here it is:
    root@pve-node-01:~# ceph osd pool ls detail
    pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode on last_change 3612 lfor 0/0/73 flags hashpspool stripe_width 0 pg_num_min 1 application...
    (see the inspection sketch after these search results)
  7.

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    Hi, I am Hans. I have been using Proxmox for quite some time now and have often found valuable help reading this community. Thanks a lot for so much valuable information! Today I have some questions I could not solve by myself, so I am posting my first post :-) I recently inherited a 3 node...
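
Regarding result 4: a minimal command sketch, assuming the device classes ssd and nvme already exist and that the NVMe-backed pool is called vm-nvme (the pool name is a placeholder, not taken from the thread). A class-bound rule is created and then assigned to the pool; Ceph rebalances the pool onto the matching OSDs afterwards.

    # Create a replicated rule that only selects OSDs of device class "nvme",
    # using "default" as the CRUSH root and "host" as the failure domain.
    ceph osd crush rule create-replicated repl-nvme default host nvme

    # Point the pool at the new rule ("vm-nvme" is a hypothetical pool name).
    ceph osd pool set vm-nvme crush_rule repl-nvme

    # Confirm that both class-bound rules now exist.
    ceph osd crush rule ls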
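
Regarding results 5 and 6: a short sketch of read-only checks that show which rule each pool actually uses and whether a slow SSD sits behind the supposedly NVMe-only pool. The rule name repl-ssd comes from the thread; the commands are standard Ceph CLI.

    # Which crush_rule id each pool references.
    ceph osd pool ls detail

    # Dump a rule to see its device-class binding in the "take" step.
    ceph osd crush rule dump repl-ssd

    # Device class and utilisation per OSD, grouped by host.
    ceph osd df tree

    # Commit/apply latency per OSD; a failing SSD usually stands out here.
    ceph osd perf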
