Search results

  1. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    I have to say that I also found this documentation, and I set bluestore_slow_ops_warn_threshold per problematic OSD and the warning is gone! So it really does seem to be a feature ...
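    A minimal sketch of that per-OSD override, assuming the affected OSDs are osd.9 and osd.15 (the IDs quoted later in the thread) and an example threshold of 10, which is an assumption rather than a value taken from the post:

      # raise the slow-op warning threshold only on the noisy OSDs (threshold value is an example)
      ceph config set osd.9 bluestore_slow_ops_warn_threshold 10
      ceph config set osd.15 bluestore_slow_ops_warn_threshold 10
      # verify the per-OSD override
      ceph config get osd.9 bluestore_slow_ops_warn_threshold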
  2. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Has anyone tried this? https://github.com/rook/rook/discussions/15403 Especially these two: ceph config set global bdev_async_discard_threads 1 and ceph config set global bdev_enable_discard true
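    The two settings quoted above as a sketch with a follow-up check (bdev_enable_discard and bdev_async_discard_threads are the option names from the linked discussion; whether they help depends on the drives):

      # enable BlueStore discard/TRIM handling with one async discard thread
      ceph config set global bdev_enable_discard true
      ceph config set global bdev_async_discard_threads 1
      # check that the options are recorded in the cluster config
      ceph config dump | grep discard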
  3. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    I am not able to take snapshots on RBD storage, another bug: https://tracker.ceph.com/issues/61582?next_issue_id=61581 I have been using Ceph for a long time, but this keeps getting worse and worse ...
  4. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    I also upgraded to 19.2.2 before the weekend, but no luck:
    HEALTH_WARN: 2 OSD(s) experiencing slow operations in BlueStore
    osd.9 observed slow operation indications in BlueStore
    osd.15 observed slow operation indications in BlueStore
  5. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    wuwzy: You were right, from time to time an OSD is slow ... it clears itself ... comes back again ... etc ...
  6. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hi all, I am just observing and waiting for a real patch; I did not "repair" Ceph health in any way, but ... All warnings disappeared 3-4 days ago and Ceph health is green ... So I am confused ... :-)
  7. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    root@pve1:~# pveceph pool get cephfs_data
    ┌────────────┬─────────────────┐
    │ key        │ value           │
    ╞════════════╪═════════════════╡
    │ crush_rule │ replicated_rule │
    ├────────────┼─────────────────┤
    │ fast_read  │ 0...
  8. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    A Ceph bug? https://github.com/rook/rook/discussions/15403
  9. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hi guys, same issue here after the upgrade to Ceph 19.2.1 ... Environment, 3 nodes: proxmox-ve: 8.4.0 (running kernel: 6.8.12-9-pve), pve-manager: 8.4.1, ceph: 19.2.1-pve3
    HEALTH_WARN: 2 OSD(s) experiencing slow operations in BlueStore
    osd.9 observed slow operation indications in BlueStore
    osd.15...
  10. [SOLVED] Backup (vzsnap) fails after Update to ceph 17.2.6

    This post should NOT be marked as solved ... in my opinion ...
  11. [SOLVED] Backup (vzsnap) fails after Update to ceph 17.2.6

    Uff, so it was a good idea NOT to recreate the storage ... I am watching this: https://tracker.ceph.com/issues/61582?next_issue_id=61581 and it seems that nobody cares ... I suppose the combination LXC + Ceph does not have many "users" ... I also had big troubles with LXC and an NFS mount under...
  12. [SOLVED] Backup (vzsnap) fails after Update to ceph 17.2.6

    Hi, I have exactly the same problem. After 3 months I do not see any progress here: https://tracker.ceph.com/issues/61582?next_issue_id=61581 So I suppose this issue is not solved, am I right? Thanks for the reply. :) PS: I have an empty CephFS on my pool, but I am scared to delete it.
  13. VM shutdown, KVM: entry failed, hardware error 0x80000021

    OK. To be more specific, we use for example the 4210, which is the same "Cascade Lake" as your 4215 ... OK, thanks for the info ...
  14. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Hi all, we have many PVE servers. Recently I upgraded all of them to the latest PVE 7.2.4 / 7.2.5 along with the latest 5.15 kernel. All the servers have Xeon(R) Silver 41XX processors, and we have NO issue with the VM destroy mentioned here. The servers are mostly Supermicro or HP. But we also have...
  15. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Pinning the kernel with proxmox-boot-tool kernel pin 5.13.19-6-pve did not force the kernel 5.13.19-6-pve to be booted first ... So check if you REALLY run this kernel after reboot ...
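    A quick sketch of that check (standard PVE tooling; 5.13.19-6-pve is simply the version string quoted in the post):

      # pin the kernel and see which entry proxmox-boot-tool will boot
      proxmox-boot-tool kernel pin 5.13.19-6-pve
      proxmox-boot-tool kernel list
      # after rebooting, confirm the running kernel really is the pinned one
      uname -r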
  16. PVE provisioned disk space in GUI is missing ....

    Yes: "is to keep a close eye on them" is exactly what I do NOT want to do :). Because if more admins take care of the pool, they always have to keep in mind: I must count what is fully provisioned when I want to create a new VM or expand the disk of some VM. And when after login...
  17. PVE provisioned disk space in GUI is missing ....

    Hello, can anybody from Proxmox answer? Thanks a lot ...