Search results

  1. ceph 16.2.9 - OSD endless restart loop - multiple pgs stuck

    Solved. The cause was a corrupted Linux base system that had been updated too many times. I reinstalled the whole Proxmox cluster and now everything is fine.
  2. ceph 16.2.9 - OSD endless restart loop - multiple pgs stuck

    A couple of days ago, 2 of my 3 Ceph nodes stopped working for no apparent reason. I have tried to start the OSDs multiple times, but every time the OSD service starts fine and then, after a minute, gets a kill signal and restarts. At the moment I don't understand what could be wrong. Nothing changed and...
  3. [SOLVED] CEPH MON fail after upgrade

    Thank you. Works like a charm! -Mikael
  4. [SOLVED] CEPH MON fail after upgrade

    Hello, same problem here. After the upgrade, Ceph jammed: monitors and managers won't start anymore. Proxmox 7.0 to 7.1, Ceph Pacific 16.2.5 to 16.2.7. root@pve1:/etc/pve# systemctl status ceph\*.service ceph\*.target ● ceph-mon@pve1.service - Ceph cluster monitor daemon Loaded: loaded...
  5. ceph 16.2 pacific cluster crash

    I have dedicated Docker and Kubernetes VMs running under Proxmox. Hopefully those are integrated directly into Proxmox some day. I must say that I like Docker containers. No need to buy a tanker when a tiny boat is enough. :)
  6. ceph 16.2 pacific cluster crash

    My final conclusion... This was a good disaster for me. I learned a lot about Ceph and realized that automatic snapshots/backups are a very good thing. There is no safe update; something can always make things pretty messy. I have made snapshots when I am going to update some VMs, but now I...
  7. ceph 16.2 pacific cluster crash

    Glad to hear your system is back on the rails. I suppose that after the update to Ceph 16.2.5 you don't need the bluestore_allocator = bitmap value in the configuration file anymore. It's easy to check which version you are using by looking at the Ceph section in Proxmox or by running the command ceph -v (sketched after these results). It should tell you...
  8. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Yes. This is only a hobby/demo/prototype system, and if some node fails, HA will move the VMs to another node. Actually it works pretty well. :rolleyes: :D
  9. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    A couple of minutes ago I updated Ceph 16.2.4 to 16.2.5 and removed the bluestore_allocator = bitmap value. Everything is fine at the moment and the system runs smoothly. After the command ceph crash archive-all (see the cleanup sketch after these results), Ceph looks nice and clean again. If the situation is the same for the next couple of days or a week, perhaps...
  10. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    It seems that set_numa_affinity unable to identify public interface 'some_interface' is a bug as well, but a harmless one, so it can be ignored...
  11. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    I can confirm that this workaround for Ceph 16.2.4 solved my problem and my system is up and running again. There is a file called /etc/ceph/ceph.conf to which I added the following two lines: [osd] bluestore_allocator = bitmap (see the workaround sketch after these results). After the Ceph 16.2.5 update, the system should work without...
  12. ceph 16.2 pacific cluster crash

    Thank you, spirit! The workaround fixed the problem and the system is up and running. So the conclusion was right! Super. :)
  13. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Here is more information about Ceph 16.2.5 :)
  14. ceph 16.2 pacific cluster crash

    Pardon me... Where do I need to put this "bluestore_allocator = bitmap" so that it works like it should? :oops: :)
  15. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Sure, and thank you very much for your time (config re-wrapped for readability after these results). [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network = 192.168.199.0/24 fsid = e5afa215-2a06-49c4-9b68-f0d708f68ffa mon_allow_pool_delete = true mon_host...
  16. ceph 16.2 pacific cluster crash

    Looks like I have to wait for 16.2.5, because I don't have a great idea of how to step back to Octopus. I made the stupid mistake of setting Ceph to allow only Pacific, and I think it's not possible to downgrade.... :rolleyes: This is my situation: root@pve2:/dev# ceph crash info...
  17. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Mkay... I found something interesting on the first node (status commands sketched after these results)... root@pve1:~# systemctl status ceph-osd@0 ● ceph-osd@0.service - Ceph object storage daemon osd.0 Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: enabled) Drop-In...
  18. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Perhaps you are right. I'm just wondering, since Ceph worked like a charm until the Pacific update, when this meltdown happened... There must be something fundamentally different from before, but I just don't understand what it could be... I can't find the reason in the network configuration either...
  19. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Well, the backups are a couple of weeks old, so it would be nice to push this work forward at least far enough to save the current state of the virtual machines. Before the update, the pools/OSDs worked without issues or alerts about a nearfull state, and there was no note of any kind that the pool usage might have an effect.
  20. [SOLVED] Ceph Pacific Cluster Crash Shortly After Upgrade

    Just wondering, is there any workaround to save the files and start over, if fixing this is impossible... At this moment I have read a lot of Ceph and Proxmox forums but haven't found any clever tips on how to fix this situation. I would be very pleased if I could even save the data somehow. I don't have any...
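
Sketches for the commands and settings quoted above. First, the 16.2.4 allocator workaround from results 11, 12, and 14: the two lines go into /etc/ceph/ceph.conf on each node, under an [osd] section. The config lines are quoted from the thread; the restart step is an assumption (any equivalent restart of the OSD services should do).

    # /etc/ceph/ceph.conf -- workaround for the Ceph 16.2.4 crashes
    # (per the thread, it can be removed again after the 16.2.5 update)
    [osd]
    bluestore_allocator = bitmap

    # Assumed follow-up: restart the OSD daemons so the setting takes effect
    systemctl restart ceph-osd.target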
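Next, the version check and post-upgrade cleanup from results 7 and 9. ceph -v and ceph crash archive-all are quoted from the thread; ceph crash ls is an assumed intermediate step to review the crash reports before archiving them.

    # Check which Ceph version is running (also shown in the Ceph section
    # of the Proxmox GUI)
    ceph -v

    # List the recorded crash reports, then archive them all so the
    # cluster health clears
    ceph crash ls
    ceph crash archive-all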
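Then the daemon inspection from results 4 and 17, plus an assumed journalctl step for reading the kill/restart messages described in result 2.

    # Status of a single OSD daemon (quoted from result 17)
    systemctl status ceph-osd@0

    # Status of all Ceph units at once (quoted from result 4)
    systemctl status ceph\*.service ceph\*.target

    # Assumed extra step: read the recent log of a restarting daemon to
    # see why it received the kill signal
    journalctl -u ceph-osd@0 --since "1 hour ago"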
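Finally, the flattened [global] section from result 15, re-wrapped one directive per line; the mon_host value is truncated in the snippet and left as-is.

    [global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 192.168.199.0/24
    fsid = e5afa215-2a06-49c4-9b68-f0d708f68ffa
    mon_allow_pool_delete = true
    mon_host...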
