My cluster has now been up and running WITH this option since August 24th, BUT I have a mixed-OSD environment (BlueStore and FileStore). Is it likely that with BlueStore-only OSDs this option is not required?
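(As a side note, a quick way to see which type each OSD runs is the osd metadata command; the grep below is just a convenience to filter the output:)

ceph osd metadata | grep '"osd_objectstore"'    # dumps metadata for all OSDs, keeps only the objectstore field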
Solved: I reinserted the osd option, ran the ceph-volume scan, fixed the JSON files, activated the volumes, and manually started the mon.
Now the OSDs and the mon are up and running, and I can continue the upgrade on the other nodes...
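For anyone landing here later, the sequence was roughly the following (a sketch from memory; the mon id c01 is just an example, and which JSON fields need fixing depends on what the scan picked up):

ceph-volume simple scan                  # writes one JSON file per FileStore OSD under /etc/ceph/osd/
# edit the /etc/ceph/osd/*.json files where the scanned paths/keyrings were wrong
ceph-volume simple activate --all        # creates the systemd units, mounts and starts the OSDs
systemctl start ceph-mon@c01             # start the monitor manually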
Thank you very much Stoiko.
Mario
1) option added again, rebooted, not solved, option removed again;
2) I've restarted the OSDs on only one node (of 6) without success; if I run "systemctl restart ceph-osd.target" to restart the OSDs on all nodes, all OSDs go down and the VMs become unresponsive. Do I need to shut down ALL the VMs...
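For context, the per-node sequence I'm attempting follows the guide's noout approach (roughly; shown here as a sketch):

ceph osd set noout                    # stop the cluster from rebalancing while OSDs bounce
systemctl restart ceph-osd.target     # run on ONE node, wait for its OSDs to come back up
# ...repeat node by node...
ceph osd unset noout                  # only once all nodes are done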
You are right, I missed it.
Now all the managers are running.
The problem persists for the OSDs (FileStore type): they are in the "down/in" state, and neither rebooting nor issuing "systemctl restart ceph-osd.target" solves it.
Note that in the old ceph.conf file there was the following option:
[osd]...
There are thousands of hints in the logs; I'm pasting the more interesting ones.
It seems to be a keyring/authentication problem after adapting ceph.conf as stated in the guide.
journalctl -r:
...
Aug 22 08:46:00 c01 systemd[1]: Starting Proxmox VE replication runner...
Aug 22 08:45:26 c01 systemd[1]: Failed to...
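One way to test the keyring suspicion is to compare what the cluster expects with what is on disk (osd.0 is just an example id, and the path is the default one):

ceph auth get osd.0                       # key and caps the monitors expect for this OSD
cat /var/lib/ceph/osd/ceph-0/keyring      # key the OSD actually presents; the two must match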
Hello,
I'm following this guide https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus to upgrade Ceph on my 6-node cluster.
Everything goes fine until "systemctl restart ceph-mgr.target"; at this point the 3 managers don't restart.
Issuing "ceph -s" shows:
services:
...
mgr: no daemons...
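To dig further, one can query a manager unit directly (c01 is just my first node's name; the unit naming is the standard ceph-mgr@<id>):

systemctl status ceph-mgr@c01      # shows whether the unit failed and why
journalctl -u ceph-mgr@c01 -e      # jump to the end of that unit's log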
For me a reboot does not solve the problem: the network is up, but /etc/pve is not available, and in the logs and from some commands I get:
ipcc_send_rec failed: Connection refused
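If I understand correctly, /etc/pve is provided by the pve-cluster service (pmxcfs), so that is probably the first thing to check; something like:

systemctl status pve-cluster       # pmxcfs must be running for /etc/pve to exist
journalctl -u pve-cluster -e       # look for the reason it refuses connections
systemctl restart pve-cluster      # try restarting it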