Configuration: a 6-node cluster with Ceph on Proxmox 5.
I am currently upgrading Proxmox to version 6 (now running corosync version 3). I have NOT updated Ceph yet. After upgrading the first node I get the Ceph error below; everything is still up except these OSDs:
systemctl status ceph-disk@dev-sdb1.service
● ceph-disk@dev-sdb1.service - Ceph disk activation: /dev/sdb1
Loaded: loaded (/lib/systemd/system/ceph-disk@.service; static; vendor preset: enabled)
Drop-In: /lib/systemd/system/ceph-disk@.service.d
└─ceph-after-pve-cluster.conf
Active: inactive (dead)
=====
systemctl status ceph-disk@dev-sde1.service
● ceph-disk@dev-sde1.service - Ceph disk activation: /dev/sde1
Loaded: loaded (/lib/systemd/system/ceph-disk@.service; static; vendor preset: enabled)
Drop-In: /lib/systemd/system/ceph-disk@.service.d
└─ceph-after-pve-cluster.conf
Active: failed (Result: exit-code) since Thu 2019-10-24 19:32:04 CEST; 34min ago
Main PID: 3600 (code=exited, status=1/FAILURE)
Oct 24 19:32:04 node002 sh[3600]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5736, in run
Oct 24 19:32:04 node002 sh[3600]: main(sys.argv[1:])
Oct 24 19:32:04 node002 sh[3600]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5687, in main
Oct 24 19:32:04 node002 sh[3600]: args.func(args)
Oct 24 19:32:04 node002 sh[3600]: File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4890, in main_trigger
Oct 24 19:32:04 node002 sh[3600]: raise Error('return code ' + str(ret))
Oct 24 19:32:04 node002 sh[3600]: ceph_disk.main.Error: Error: return code 1
Oct 24 19:32:04 node002 systemd[1]: ceph-disk@dev-sde1.service: Main process exited, code=exited, status=1/FAILURE
Oct 24 19:32:04 node002 systemd[1]: ceph-disk@dev-sde1.service: Failed with result 'exit-code'.
Oct 24 19:32:04 node002 systemd[1]: Failed to start Ceph disk activation: /dev/sde1.
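My plan is to re-run the activation by hand to see the real error hiding behind "return code 1" (this is a sketch of what I intend to try, not output I already have; /dev/sde1 is just the device from the log above, and I'm assuming the unit still invokes ceph-disk the same way it did on Proxmox 5):

# Full journal for the failed unit:
journalctl -u ceph-disk@dev-sde1.service

# Re-run the activation manually, the same call the systemd unit makes,
# with verbose output to stdout:
/usr/sbin/ceph-disk --verbose --log-stdout trigger --sync /dev/sde1

# Confirm which OSDs the cluster actually sees as down:
ceph osd tree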
Thanks,
Tom