A couple of days ago, two of my three Ceph nodes stopped working for no apparent reason. I have tried to start the OSDs multiple times; every time the OSD service starts fine, but after about a minute it gets a kill signal and restarts. At the moment I don't understand what could be wrong. Nothing changed and...
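For anyone hitting the same thing, this is roughly how I watch the service die (OSD ID 0 is only an example; substitute your own IDs):

# Unit state; it starts fine and is killed after roughly a minute
systemctl status ceph-osd@0.service

# Log entries around the restart
journalctl -u ceph-osd@0.service --since "-10 minutes"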
I have dedicated Docker and Kubernetes VMs running under Proxmox. Hopefully those will be integrated directly into Proxmox some day. I must say that I like Docker containers: no need to buy a tanker when a tiny boat is enough. :)
My final conclusion... This was a good disaster for me. I learned a lot about Ceph and realized that automatic snapshots/backups are a very good thing. There is no such thing as a safe update; something can always make things pretty messy. I used to make snapshots only when I was about to update some VMs, but now I...
Glad to hear your system is back on the rails.
I suppose that after updating to Ceph 16.2.5 you don't need the bluestore_allocator = bitmap value in the configuration file anymore. It's easy to check which version you are using by looking at the Ceph section in the Proxmox GUI or by running the command ceph -v. It should tell you...
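For example, a quick way to verify what is actually running (just a sketch; the output will of course differ per cluster):

# Version of the locally installed Ceph binaries
ceph -v

# Versions actually running across the mon/mgr/osd daemons in the cluster
ceph versions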
A couple of minutes ago I updated Ceph from 16.2.4 to 16.2.5 and removed the bluestore_allocator = bitmap value. Everything is fine at the moment and the system runs smoothly.
After the command ceph crash archive-all, Ceph looks nice and clean again. If the situation stays the same for the next couple of days, or a week perhaps...
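For reference, the crash module commands I mean look roughly like this (the crash ID is just a placeholder):

# List recent daemon crashes known to the cluster
ceph crash ls

# Show details of a single crash
ceph crash info <crash-id>

# Mark all crashes as seen so the health warning clears
ceph crash archive-all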
I can confirm that this workaround for Ceph 16.2.4 solved my problem and my system is up and running again.
There is a file called /etc/ceph/ceph.conf to which I added the following two lines:
[osd]
bluestore_allocator = bitmap
After the Ceph 16.2.5 update, the system should work without...
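In case it helps someone else, applying the workaround was roughly this (OSD ID 0 is only an example; restart every OSD on the affected node):

# After adding the lines under [osd] in /etc/ceph/ceph.conf, restart the OSDs
systemctl restart ceph-osd@0.service

# or restart all OSDs on this node at once
systemctl restart ceph-osd.target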
Sure, and thank you very much for your time.
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 192.168.199.0/24
fsid = e5afa215-2a06-49c4-9b68-f0d708f68ffa
mon_allow_pool_delete = true
mon_host...
Looks like I have to wait for 16.2.5, because I don't have a great idea how to step back to Octopus. I made the stupid mistake of telling Ceph to allow only Pacific, and I think it's not possible to downgrade... :rolleyes:
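To be clear, the step I mean is presumably the one at the end of the Pacific upgrade instructions that raises the minimum OSD release (writing it from memory, so take it as a sketch):

# Disallows pre-Pacific OSDs; after this there is no going back to Octopus
ceph osd require-osd-release pacific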
This is my situation:
root@pve2:/dev# ceph crash info...
Perhaps you are right. I'm just wondering why Ceph worked like a charm until the Pacific update, when this meltdown happened... There must be something fundamentally different from before, but I just don't understand what it could be... I can't find the reason in the network configuration either...
Well, the backups are a couple of weeks old, so it would be nice to push this along at least far enough to save the current state of the virtual machines. Before the update, the pools/OSDs worked without issues or nearfull alerts, and there was no kind of note that pool usage might somehow be a factor.
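For what it's worth, these are the commands I use to check whether usage or a nearfull condition is actually a factor here (nothing cluster-specific assumed):

# Overall cluster and per-pool usage
ceph df

# Per-OSD utilisation, to spot any OSD close to the nearfull/full ratios
ceph osd df tree

# Any nearfull/full warnings show up here
ceph health detail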
Just wondering if there is any workaround to save the files and start over if fixing this is impossible... I have read a lot of Ceph and Proxmox forums by now but haven't found any useful tips on how to fix this situation. I would be very pleased if I could even just save the data somehow. I don't have any...