Recent content by sippe

  1. ceph 16.2.9 - OSD endless restart loop - multiple pgs stuck

    Solved. The cause was a corrupted Linux base system that had been updated too many times. I reinstalled the whole Proxmox cluster and now everything is fine.
  2. ceph 16.2.9 - OSD endless restart loop - multiple pgs stuck

    A couple of days ago, 2 of my 3 Ceph nodes stopped working for no obvious reason. I have tried to start the OSDs multiple times; each time the OSD service starts fine, but after a minute the service gets a kill signal and restarts. At the moment I don't understand what could be wrong. Nothing changed and...
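
    (Not from the original post: a minimal sketch of how one might check why the OSD service keeps getting killed, assuming OSD id 0 on a systemd-managed Proxmox node.)

      journalctl -u ceph-osd@0.service --since "1 hour ago"   # service log around the kill signal
      dmesg -T | grep -iE 'oom|killed process'                # was it the kernel OOM killer?
      ceph -s                                                 # overall cluster and PG state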
  3. [SOLVED] CEPH MON fail after upgrade

    Thank you. Works like a charm! -Mikael
  4. [SOLVED] CEPH MON fail after upgrade

    Hello, same problem here. After the upgrade, Ceph jammed: monitors and managers won't start anymore. Proxmox 7.0 to 7.1, Ceph Pacific 16.2.5 to 16.2.7. root@pve1:/etc/pve# systemctl status ceph\*.service ceph\*.target ● ceph-mon@pve1.service - Ceph cluster monitor daemon Loaded: loaded...
  5. ceph 16.2 pacific cluster crash

    I have dedicated Docker and Kubernetes VMs running under Proxmox. Hopefully those get integrated directly into Proxmox some day. I must say that I like Docker containers: no need to buy a tanker when a tiny boat is enough. :)
  6. ceph 16.2 pacific cluster crash

    My final conclusion... This was a good disaster for me. I learned a lot about Ceph and realized that automatic snapshots/backups are a very good thing. There is no safe update; something can always make things pretty messy. I have made snapshots when I am going to update some VMs, but now I...
  7. ceph 16.2 pacific cluster crash

    Glad to hear your system is back on the rails. I suppose that after the update to Ceph 16.2.5 you don't need the bluestore_allocator = bitmap value in the configuration file anymore. It's easy to check which version you are using by looking at the Ceph section in Proxmox or by running the command ceph -v. It should tell you...
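
    (A minimal sketch of the version check mentioned above; ceph versions is an extra suggestion, not from the original post.)

      ceph -v          # version of the locally installed Ceph binaries
      ceph versions    # versions reported by the running mon/mgr/osd daemons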
  8. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Yes. This is only a hobby/demo/prototype system, and if a node fails, HA will move the VMs to another node. Actually it works pretty well. :rolleyes: :D
  9. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    A couple of minutes ago I updated Ceph 16.2.4 to 16.2.5 and removed the bluestore_allocator = bitmap value. Everything is fine at the moment and the system runs smoothly. After the command ceph crash archive-all, Ceph looks nice and clean again. If the situation is the same for the next couple of days or a week, perhaps...
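
    (A minimal sketch of the crash cleanup mentioned above; listing the reports first is an extra step, not from the original post.)

      ceph crash ls            # list the recorded crash reports
      ceph crash archive-all   # archive them so the crash health warning clears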
  10. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    It seems that set_numa_affinity unable to identify public interface 'some_inerface' is also a bug, but a harmless one, so it can be ignored...
  11. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    I can confirm that this workaround for Ceph 16.2.4 solved my problem and my system is up and running again. There is a file called /etc/ceph/ceph.conf to which I added the following two lines: [osd] bluestore_allocator = bitmap. After the Ceph 16.2.5 update, the system should work without...
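
    (A minimal sketch of the workaround described above, assuming the standard /etc/ceph/ceph.conf path named in the post.)

      # added to /etc/ceph/ceph.conf
      [osd]
      bluestore_allocator = bitmap

    The OSDs read this option at startup, so they likely need a restart (e.g. systemctl restart ceph-osd.target on each node) before it takes effect; that restart step is my assumption, not something stated in the post.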
  12. ceph 16.2 pacific cluster crash

    Thank you, spirit! The workaround fixed the problem and the system is up and running. So the conclusion was right! Super. :)
  13. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Here is more information about Ceph 16.2.5 :)
  14. ceph 16.2 pacific cluster crash

    Pardon me... Where do I need to put this "bluestore_allocator = bitmap" so that it works like it should? :oops: :)
  15. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Sure, and thank you very much for your time. [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network = 192.168.199.0/24 fsid = e5afa215-2a06-49c4-9b68-f0d708f68ffa mon_allow_pool_delete = true mon_host...