Hi!
My configuration (before the upgrade to Proxmox 5 on Debian Stretch):
- 3 Proxmox nodes running Debian Jessie
- Proxmox installed on top of Debian Jessie
- 2 hard drives per node as OSDs = 6 OSDs in total
Today we upgraded our "Proxmox 4 + Ceph Hammer" cluster to "Proxmox 5 + Ceph Luminous", following the guide "Upgrade from 4.x to 5.x" (in-place upgrade).
Everything went perfectly, but whenever a node is rebooted:
1/ the main Ceph service does not start:
# systemctl status ceph
● ceph.service - PVE activate Ceph OSD disks
Loaded: loaded (/etc/systemd/system/ceph.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2019-12-27 15:50:49 EAT; 1h 5min ago
Process: 9590 ExecStart=/usr/sbin/ceph-disk --log-stdout activate-all (code=exited, status=0/SUCCESS)
Main PID: 9590 (code=exited, status=0/SUCCESS)
CPU: 179ms
déc. 27 15:50:49 srv-virt-3 systemd[1]: Starting PVE activate Ceph OSD disks...
déc. 27 15:50:49 srv-virt-3 systemd[1]: Started PVE activate Ceph OSD disks.
2/ the Ceph OSDs are not mounted
3/ the Ceph cluster goes HEALTH_WARN (see the commands right after this list)
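In case it helps, this is roughly how the three symptoms show up after a reboot (the OSD id 0 below is only a placeholder):

# ceph -s
# ceph osd tree
-> the OSDs of the rebooted node are reported down and the cluster is HEALTH_WARN

# mount | grep /var/lib/ceph/osd
# systemctl status ceph-osd@0
-> no OSD data directory is mounted and no ceph-osd@ unit is running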
It looks like the issue comes from the main Ceph service (available as /etc/init.d/ceph or via systemctl), since it does all the mounting and starts all the Ceph sub-services. What went wrong?
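For reference, ceph.service seems to only wrap ceph-disk, so what it is supposed to do can be inspected and re-run by hand (this is the same ExecStart as in the status output above):

# systemctl cat ceph.service
# /usr/sbin/ceph-disk --log-stdout activate-all
# ceph-disk list
-> ceph-disk list shows which data partitions are "prepared" but not "active"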
We could easily fix it by doing the mounting and starting everything manually (see the sketch below), but is there a better way to solve this?
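For completeness, the manual fix is essentially the following, repeated for both OSDs of the rebooted node (the device name /dev/sdb1 and OSD id 0 are placeholders, assuming the standard ceph-disk layout where the data partition gets mounted under /var/lib/ceph/osd/ceph-<id>):

# ceph-disk activate /dev/sdb1
(or by hand: mount /dev/sdb1 /var/lib/ceph/osd/ceph-0, then systemctl start ceph-osd@0)
# ceph -s
-> the OSDs come back up and the cluster returns to HEALTH_OK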