Recent content by Petr Svacina

  1. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    root@pve1:~# pveceph pool get cephfs_data
    ┌────────────────────────┬─────────────────┐
    │ key                    │ value           │
    ╞════════════════════════╪═════════════════╡
    │ crush_rule             │ replicated_rule │
    ├────────────────────────┼─────────────────┤
    │ fast_read              │ 0...
  2. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Ceph bug? https://github.com/rook/rook/discussions/15403
  3. Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hi guys, same issue here after upgrading to Ceph 19.2.1...
    Environment, 3 nodes:
    proxmox-ve: 8.4.0 (running kernel: 6.8.12-9-pve)
    pve-manager: 8.4.1
    ceph: 19.2.1-pve3
    HEALTH_WARN: 2 OSD(s) experiencing slow operations in BlueStore
    osd.9 observed slow operation indications in BlueStore
    osd.15...
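    For anyone hitting the same thing, this is roughly how I inspect it; a minimal sketch, assuming the osd.9/osd.15 names from above, and assuming the bluestore_slow_ops_warn_* options exist in your 19.2.x build (verify with ceph config help before setting anything):

      # show which OSDs are flagged and the exact warning text
      ceph health detail
      # dump BlueStore perf counters on the node that hosts osd.9
      ceph daemon osd.9 perf dump | grep -i slow
      # assumption: the Squid-era thresholds are available; verify first
      ceph config set osd bluestore_slow_ops_warn_threshold 10
      ceph config set osd bluestore_slow_ops_warn_lifetime 300

    Raising the thresholds only hides the symptom, of course.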
  4. [SOLVED] Backup (vzsnap) fails after Update to ceph 17.2.6

    This post should NOT be marked as solved... in my opinion...
  5. [SOLVED] Backup (vzsnap) fails after Update to ceph 17.2.6

    Phew, so it was a good idea NOT to recreate the storage... I am watching this: https://tracker.ceph.com/issues/61582?next_issue_id=61581 and it seems that nobody cares... I suppose the LXC + Ceph combination does not have many "users"... I also had big trouble with LXC and NFS mounts under...
  6. [SOLVED] Backup (vzsnap) fails after Update to ceph 17.2.6

    Hi, I have exactly the same problem. After 3 months I do not see any progress here: https://tracker.ceph.com/issues/61582?next_issue_id=61581 So I suppose this issue is not solved; am I right? Thanks for the reply. :) PS: I have an empty CephFS on my pool, but I am scared to delete it.
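    Before deleting anything, a minimal sketch of how I would confirm the CephFS is really empty (assuming the default filesystem name cephfs; adjust to yours):

      ceph fs ls               # list filesystems and their data/metadata pools
      ceph df                  # per-pool usage; the cephfs data pool should show ~0 stored
      ceph fs status cephfs    # assumption: the filesystem is named "cephfs"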
  7. VM shutdown, KVM: entry failed, hardware error 0x80000021

    OK. To be more specific, we use, for example, the Xeon Silver 4210, which is the same "Cascade Lake" generation as your 4215... OK, thanks for the info...
  8. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Hi all, we have many PVE servers. Recently I upgraded all of them to the latest PVE 7.2.4/7.2.5, along with the latest 5.15 kernel. All the servers have Xeon(R) Silver 41XX processors, and we have NO issue with the VM crashes mentioned here. The servers are mostly Supermicro or HP. But we also have...
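    For comparison, the version info above comes from the standard tooling, so others can post the same:

      pveversion -v | head -n 5   # proxmox-ve, pve-manager and the running kernel
      uname -r                    # the kernel that is actually booted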
  9. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Pinning the kernel with
    proxmox-boot-tool kernel pin 5.13.19-6-pve
    did NOT force the kernel 5.13.19-6-pve to be booted first... So check whether you REALLY run this kernel after the reboot...
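    To be clear, after pinning I verify it like this (kernel list and refresh are standard proxmox-boot-tool subcommands; whether list shows the pin may depend on the version):

      proxmox-boot-tool kernel list   # installed kernels and, if supported, the current pin
      proxmox-boot-tool refresh       # re-write the bootloader entries, just in case
      reboot
      uname -r                        # must print 5.13.19-6-pve; otherwise the pin did not take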
  10. PVE provisioned disk space in GUI is missing ....

    Yes: " is to keep a close eye on them" this is exactly what i do NOT want to do :). Because if more admins take care about the pool, they always must have on their minds: I must count what is full provisioned, when I want to make a new VM, or expand the disk for a some VM. And when after login...
  11. PVE provisioned disk space in GUI is missing ....

    Hello, can anybody from Proxmox answer? Thanks a lot...
  12. Struggling to Migrate Windows guests from XEN to Proxmox

    To BeDazzler: Sorry man, I used Citrix XenServer for years, and this is definitely a Citrix error. Their VM drivers are a pain in the ass and cause this issue across the hypervisor world. The KVM drivers (virtio) are clean and simple, and at least they work :-) So it is hard to solve the problem here...
  13. Struggling to Migrate Windows guests from XEN to Proxmox

    I suppose XENFLT is a typo :-) Sorry for the confusion. But you did the trick :-)