OK, I have restarted OSD 2 on the host. The service is active (running), but the OSD still shows as down. The journal message is the same as before. What do you mean by a log of it?
I thought so, just had to ask. Thanks for the help in advance.
Yes, the link is present, and if I change the file on, say, W1, the changes are pushed to the others.
Here are the results of one OSD per host:
root@W1:~# journalctl -u ceph-osd@0.service
-- Logs begin at Tue 2021-01-12 08:49:17 SAST, end at Tue 2021-01-12 12:07:35 SAST. --
Jan 12 08:49:22 W1...
I see I uploaded the old ceph.conf file; here is the current one.
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.10.0/24
fsid = 3d6cfbaa-c7ac-447a-843d-9795f9ab4276
mon...
Here are some errors that I found in the log files on the hosts.
On W2, W3 and W4 I get this error in ceph-volume.log:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 144, in main
conf.ceph = configuration.load(conf.path)
File...
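Since the traceback dies inside configuration.load, one thing worth checking is whether ceph.conf actually parses cleanly on the affected hosts. A minimal sketch, assuming only that ceph.conf is INI-style; Python's stdlib configparser won't catch every Ceph-specific quirk, but it flags gross syntax errors (the sample config below is a hypothetical, trimmed-down version for illustration):

```python
import configparser

# Hypothetical trimmed-down ceph.conf; in practice you would read
# /etc/ceph/ceph.conf (or /etc/pve/ceph.conf on Proxmox) instead.
sample = """\
[global]
fsid = 3d6cfbaa-c7ac-447a-843d-9795f9ab4276
cluster network = 10.10.10.0/24
"""

# strict=False tolerates duplicate sections/keys, which Ceph configs allow.
parser = configparser.ConfigParser(strict=False)
try:
    parser.read_string(sample)
    print("parsed OK, fsid =", parser["global"]["fsid"])
except configparser.Error as err:
    # A parse error here would point at the same kind of failure
    # that ceph-volume's configuration.load is hitting.
    print("syntax problem:", err)
```

If this fails on W2/W3/W4 but not on W1, a corrupted or half-synced ceph.conf on those hosts would explain why only some OSDs refuse to come up.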
Hi All
I have done the Ceph upgrade on 2 of our clusters according to the Proxmox Nautilus-to-Octopus upgrade procedure. The first cluster works perfectly without any problems, but the second cluster doesn't. The upgrade completed until the point where you restart the OSDs; it then stopped with...