Hi,
After upgrading to 8, I stopped and restarted a cluster (3 nodes) that runs both Ceph storage (for VMs) and a CephFS (for backups).
It seems the CephFS fails to mount, with the following lines in the syslog:
Code:
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: mnt-pve-Cephfs_SAS900MB.mount: Failed with result 'exit-code'.
Jul 03 15:42:13 pvenode1 systemd[1]: Failed to mount mnt-pve-Cephfs_SAS900MB.mount - /mnt/pve/Cephfs_SAS900MB.
Jul 03 15:42:13 pvenode1 pvestatd[7496]: mount error: Job failed. See "journalctl -xe" for details.
And the following output from the CLI:
Code:
root@pvenode1:~# journalctl -xe
Jul 03 15:44:13 pvenode1 pvestatd[7496]: mount error: Job failed. See "journalctl -xe" for details.
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: mnt-pve-Cephfs_SAS900MB.mount: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit mnt-pve-Cephfs_SAS900MB.mount has entered the 'failed' state with result 'exit-code'.
Jul 03 15:44:23 pvenode1 systemd[1]: Failed to mount mnt-pve-Cephfs_SAS900MB.mount - /mnt/pve/Cephfs_SAS900MB.
░░ Subject: A start job for unit mnt-pve-Cephfs_SAS900MB.mount has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit mnt-pve-Cephfs_SAS900MB.mount has finished with a failure.
░░
░░ The job identifier is 115473 and the job result is failed.
Jul 03 15:44:23 pvenode1 pvestatd[7496]: mount error: Job failed. See "journalctl -xe" for details.
The auto-generated mount unit looks like this:
Code:
root@pvenode1:~# systemctl cat mnt-pve-Cephfs_SAS900MB.mount
# /run/systemd/system/mnt-pve-Cephfs_SAS900MB.mount
[Unit]
Description=/mnt/pve/Cephfs_SAS900MB
DefaultDependencies=no
Requires=system.slice
Wants=network-online.target
Before=umount.target remote-fs.target
After=systemd-journald.socket system.slice network.target -.mount remote-fs-pre.target network-online.target
Conflicts=umount.target
[Mount]
Where=/mnt/pve/Cephfs_SAS900MB
What=10.0.40.11,10.0.40.12,10.0.40.13:/
Type=ceph
Options=name=admin,secretfile=/etc/pve/priv/ceph/Cephfs_SAS900MB.secret,conf=/etc/pve/ceph.conf,fs=Cephfs_SAS900MB
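For reference, the manual equivalent of that unit, which I could run to get the raw mount.ceph error instead of going through pvestatd, would be roughly the following (just a sketch; /mnt/test is an arbitrary scratch mountpoint, the monitors and options are copied from the unit above):
Code:
# scratch mountpoint, not managed by Proxmox
mkdir -p /mnt/test
# same monitors and options as in the generated mount unit
mount -t ceph 10.0.40.11,10.0.40.12,10.0.40.13:/ /mnt/test \
    -o name=admin,secretfile=/etc/pve/priv/ceph/Cephfs_SAS900MB.secret,conf=/etc/pve/ceph.conf,fs=Cephfs_SAS900MB
# the kernel client usually logs the real reason here
dmesg | tail -n 20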
Ceph itself is up and all green: all monitors, managers, and metadata servers are up and running.
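For completeness, the standard status commands (filesystem name taken from the unit above) show that same healthy picture:
Code:
root@pvenode1:~# ceph -s
root@pvenode1:~# ceph fs status Cephfs_SAS900MB
root@pvenode1:~# ceph mds stat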
How can I debug/solve this?
Thank you.