After getting the node up and running again (https://forum.proxmox.com/threads/i...it-was-actually-noapic-that-was-needed.43720/) I now have a problem that has been reported a few times elsewhere, but not in the systemd-based version of Proxmox. After a reboot, none of the 9 OSDs on this server start.
If I mount the OSD manually like below, it starts:
# mount -o "rw,noatime,attr2,inode64,noquota" /dev/cciss/c0d7p1 /var/lib/ceph/osd/ceph-7
# ceph-disk trigger /dev/cciss/c0d7p2
However, I can't find out how to coax systemd into doing this at startup. Can someone help me please?
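My best guess so far is that a systemd mount unit plus a trigger service would reproduce the two manual commands at boot. This is an untested sketch, and the unit file names, escaping, and `ceph-disk` path are my assumptions:

```ini
# /etc/systemd/system/var-lib-ceph-osd-ceph\x2d7.mount
# (name must match the mount point as produced by `systemd-escape -p`; untested sketch)
[Unit]
Description=Mount OSD 7 data partition
Before=ceph-osd@7.service

[Mount]
What=/dev/cciss/c0d7p1
Where=/var/lib/ceph/osd/ceph-7
Type=xfs
Options=rw,noatime,attr2,inode64,noquota

[Install]
WantedBy=local-fs.target

# /etc/systemd/system/ceph-disk-trigger-osd7.service
# (hypothetical name; oneshot service to run the trigger after the mount)
[Unit]
Description=Trigger OSD 7 journal
Requires=var-lib-ceph-osd-ceph\x2d7.mount
After=var-lib-ceph-osd-ceph\x2d7.mount

[Service]
Type=oneshot
ExecStart=/usr/sbin/ceph-disk trigger /dev/cciss/c0d7p2

[Install]
WantedBy=multi-user.target
```

Would something like this (after `systemctl daemon-reload` and `systemctl enable` on both units) be the right approach, or is there a cleaner way via udev/ceph-disk activation?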
I'm focusing on only one OSD here and will apply the solution to all the others.
Code:
# pveversion -v
proxmox-ve: 4.4-111 (running kernel: 4.4.128-1-pve)
pve-manager: 4.4-24 (running version: 4.4-24/08ba4d2d)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.117-2-pve: 4.4.117-110
pve-kernel-4.4.128-1-pve: 4.4.128-111
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+2
libqb0: 1.0.1-1
pve-cluster: 4.0-55
qemu-server: 4.0-115
pve-firmware: 1.1-12
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.1-9~pve4
pve-container: 1.0-106
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.8-2~pve4
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
ceph: 10.2.10-1~bpo80+1
Code:
# ceph-disk list
...
/dev/cciss/c0d7 :
/dev/cciss/c0d7p2 ceph journal, for /dev/cciss/c0d7p1
/dev/cciss/c0d7p1 ceph data, active, unknown cluster a6092407-216f-41ff-bccb-9bed78587ac3, osd.7, journal /dev/cciss/c0d7p2
...