[SOLVED] PVE 8 - CephFS not mounting

Hi,

After upgrading to 8, I stopped and restarted a cluster (3 nodes) that runs both Ceph (for VMs) and CephFS (for backups).
It seems the CephFS fails to mount, with the following lines in the syslog:

Code:
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update the service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
Jul 03 15:42:13 pvenode1 systemd[1]: mnt-pve-Cephfs_SAS900MB.mount: Failed with result 'exit-code'.
Jul 03 15:42:13 pvenode1 systemd[1]: Failed to mount mnt-pve-Cephfs_SAS900MB.mount - /mnt/pve/Cephfs_SAS900MB.
Jul 03 15:42:13 pvenode1 pvestatd[7496]: mount error: Job failed. See "journalctl -xe" for details.

And the following outputs from the cli:

Code:
root@pvenode1:~# journalctl -xe
Jul 03 15:44:13 pvenode1 pvestatd[7496]: mount error: Job failed. See "journalctl -xe" for details.
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: /lib/systemd/system/ceph-volume@.service:8: Unit uses KillMode=none. This is unsafe, as it disables systemd's >
Jul 03 15:44:23 pvenode1 systemd[1]: mnt-pve-Cephfs_SAS900MB.mount: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit mnt-pve-Cephfs_SAS900MB.mount has entered the 'failed' state with result 'exit-code'.
Jul 03 15:44:23 pvenode1 systemd[1]: Failed to mount mnt-pve-Cephfs_SAS900MB.mount - /mnt/pve/Cephfs_SAS900MB.
░░ Subject: A start job for unit mnt-pve-Cephfs_SAS900MB.mount has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit mnt-pve-Cephfs_SAS900MB.mount has finished with a failure.
░░
░░ The job identifier is 115473 and the job result is failed.
Jul 03 15:44:23 pvenode1 pvestatd[7496]: mount error: Job failed. See "journalctl -xe" for details.

Code:
root@pvenode1:~# systemctl cat mnt-pve-Cephfs_SAS900MB.mount
# /run/systemd/system/mnt-pve-Cephfs_SAS900MB.mount
[Unit]
Description=/mnt/pve/Cephfs_SAS900MB
DefaultDependencies=no
Requires=system.slice
Wants=network-online.target
Before=umount.target remote-fs.target
After=systemd-journald.socket system.slice network.target -.mount remote-fs-pre.target network-online.target
Conflicts=umount.target

[Mount]
Where=/mnt/pve/Cephfs_SAS900MB
What=10.0.40.11,10.0.40.12,10.0.40.13:/
Type=ceph
Options=name=admin,secretfile=/etc/pve/priv/ceph/Cephfs_SAS900MB.secret,conf=/etc/pve/ceph.conf,fs=Cephfs_SAS900MB

Ceph itself reports healthy and all green: all monitors, managers, and metadata servers are up and running.
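For completeness, overall health and daemon state can be verified from any node with, for example:

Code:
ceph -s
ceph mds stat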

How can I debug/solve this?
Thank you.
 
Hey,

Could you post the output of ceph fs status and try mounting it manually with:
Code:
mount -v -t ceph 10.0.40.11,10.0.40.12,10.0.40.13:/ /mnt/pve/Cephfs_SAS900MB -o name=admin,secretfile=/etc/pve/priv/ceph/Cephfs_SAS900MB.secret,conf=/etc/pve/ceph.conf,fs=Cephfs_SAS900MB
Mounting it manually might give a more detailed explanation of what exactly went wrong.
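If the manual mount also fails, the kernel CephFS client usually logs the underlying error in the kernel ring buffer, so it may be worth checking right after the attempt, e.g.:

Code:
dmesg | tail -n 20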
 
Hi,

Thank you very much for your feedback; simply restarting the MDS daemons fixed it.

My guess is that the MDS daemons started while the monitors were not yet up (I had to start the monitors manually), and therefore needed a manual restart before they could properly serve the filesystem and let the mount succeed.
At least, that is the chronology of what happened and what I did to resolve it.
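In case it helps someone else, restarting an MDS comes down to something like the following on each node that runs one; I'm assuming the default ceph-mds@<node name> unit naming that pveceph uses, so adjust the instance name if yours differs:

Code:
# restart the local MDS instance (instance name is usually the node name)
systemctl restart ceph-mds@pvenode1.service
# then check that the filesystem has an active MDS again
ceph fs status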

Thank you again. I can't give more feedback or insight, as I no longer have the problem at hand.
I'll see whether it reappears the next time I stop and restart the cluster.
 
I get the "Support for KillMode=none is deprecated and will eventually be removed" message when I reboot a node and, for some reason, it can't contact the other Ceph nodes.

In general, is there something we should be changing if KillMode=none is unsafe?
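As far as I understand, that warning just means the ceph-volume@.service unit shipped with the Ceph packages still sets KillMode=none; it is cosmetic and not the cause of the mount problem. If you really want to change it locally (at your own risk, since the packaged unit sets it deliberately), a systemd drop-in would look roughly like this:

Code:
# opens an editor and creates /etc/systemd/system/ceph-volume@.service.d/override.conf
systemctl edit ceph-volume@.service
# add these two lines in the drop-in, then save and exit:
#   [Service]
#   KillMode=mixed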
 
