Error adding (existing) CephFS

I got rid of the mount error by patching the Perl script on all nodes (but on my first try I had limited the storage to pve1 only).
The mount succeeded and I can see the content of the CephFS in /mnt/pve/cephfs, but the web interface and syslog errors are exactly the same as before.
So the 'mount error: exit code 16' is still there, but the 'Use of uninitialized value in sort' warning should be gone.

Are those Perl scripts cached or precompiled somehow (sorry, not really familiar with Perl)? Do I have to reboot the nodes when I change something in them?
pvestatd needs to be restarted, as the Perl files it uses are compiled at startup.
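For example, assuming the standard systemd units:

Code:
# restart the daemons so they load the patched Perl modules
systemctl restart pvestatd
systemctl restart pvedaemon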
 
pvestatd needs to be restarted, as the Perl files it uses are compiled at startup.
Yesterday I didn't see that the syslog entries were from pvedaemon, not pvestatd. So I restarted both after patching the Perl script, and now it works. CephFS is mounted and can be used via the web interface.

@Alwin, I assume that once the fix is released on your repos, my quick-and-dirty fix (I only added the ".") will just be overwritten when I update and shouldn't interfere with your update, right?
 
Yesterday I didn't see that the syslog entries were from pvedaemon, not pvestatd. So I restarted both after patching the Perl script, and now it works. CephFS is mounted and can be used via the web interface.
Glad that it works.

@Alwin, I assume that once the fix is released on your repos, my quick-and-dirty fix (I only added the ".") will just be overwritten when I update and shouldn't interfere with your update, right?
Yes.
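If you later want to verify that the update actually replaced the local patch, dpkg can compare the installed files against the shipped checksums. A minimal check, assuming the patched module belongs to libpve-storage-perl:

Code:
# list packaged files whose content differs from what the package shipped
dpkg -V libpve-storage-perl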
 
I still have the error on a server with the latest updates installed:

Code:
proxmox-ve: 5.3-1 (running kernel: 4.15.18-10-pve)
pve-manager: 5.3-8 (running version: 5.3-8/2929af8e)
pve-kernel-4.15: 5.3-1
pve-kernel-4.15.18-10-pve: 4.15.18-31
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: 2.0-19
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-36
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-1
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-22
pve-cluster: 5.0-33
pve-container: 2.0-33
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-17
pve-firmware: 2.0-6
pve-ha-manager: 2.0-6
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-44
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1

This box does not have CephTools.pm any more.

Code:
# dpkg -S CephTools.pm
dpkg-query: no path found matching pattern *CephTools.pm*
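To see which Ceph-related Perl modules are installed instead, a generic search of the PVE module tree works (just a diagnostic sketch; the path is the usual module location):

Code:
# list the Ceph-related PVE Perl modules present on this box
find /usr/share/perl5/PVE -name '*.pm' | grep -i ceph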

And syslog contains errors:

Code:
Jan 16 17:25:06 dehmelt pvestatd[2414]: A filesystem is already mounted on /mnt/pve/cephfs
Jan 16 17:25:06 dehmelt pvestatd[2414]: mount error: exit code 16
Jan 16 17:25:16 dehmelt pvestatd[2414]: A filesystem is already mounted on /mnt/pve/cephfs
Jan 16 17:25:16 dehmelt pvestatd[2414]: mount error: exit code 16
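A quick way to see what is already occupying the mount point before pvestatd tries to mount it again, for example:

Code:
# show the filesystem currently mounted on the storage path
findmnt /mnt/pve/cephfs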
 
It now works!

I had the following entry in /etc/pve/storage.cfg:
Code:
cephfs: cephfs
   path /mnt/pve/cephfs
   content iso,vztmpl,backup
   maxfiles 0
   monhost ceph01 ceph02 ceph03
   subdir /proxmox
   username admin

and the monhost line seems to be the culprit. After changing it to

Code:
cephfs: cephfs
   path /mnt/pve/cephfs
   content iso,vztmpl,backup
   maxfiles 0
   monhost 192.168.44.65:6789;192.168.44.67:6789;192.168.44.145:6789
   subdir /proxmox
   username admin

the CephFS gets mounted and displayed in the web GUI. The documentation at https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_cephfs seems to suggest the first format for the mons; I had just copied the line from the working RBD entry in storage.cfg.
 
If 'monhost' contains names, they need to be resolvable (via /etc/hosts or DNS).
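A quick resolvability check, using the monitor names from the config above:

Code:
# verify the monitor names resolve on every node (hosts file or DNS)
getent hosts ceph01 ceph02 ceph03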
 
I have the same situation - CephFS doesn't mount with a hostname or FQDN in storage.cfg:

Code:
pvedaemon[]: A filesystem is already mounted on /mnt/pve/cephfs
pvedaemon[]: mount error: exit code 16
pvestatd[]: A filesystem is already mounted on /mnt/pve/cephfs
pvestatd[]: mount error: exit code 16

If I change to the IP addresses, CephFS mounts. RBD works with the hostname.
 
Hello,

today I have the same issue:
I added the 4th node to our cluster and now the CephFS storage is unusable.
In the syslog I see the mount errors:

Code:
pvestatd[3334]: A filesystem is already mounted on /mnt/pve/cephfs
pvestatd[3334]: mount error: exit code 16

All nodes run a Ceph monitor and a CephFS MDS.

The cephfs entry in /etc/pve/storage.cfg shows NO monhost entries!

I've just removed this storage via the GUI, but I cannot add it again.
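Since 'A filesystem is already mounted' usually means a stale mount was left behind, one thing worth trying (a sketch; the monitor IPs are placeholders, replace them with your own) is to unmount the leftover and re-add the storage with explicit IPs:

Code:
# unmount the stale CephFS mount left behind by the removed storage
umount /mnt/pve/cephfs
# re-add the storage with explicit monitor addresses (placeholder IPs)
pvesm add cephfs cephfs --monhost "192.168.0.1;192.168.0.2;192.168.0.3" --content iso,vztmpl,backup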

Code:
# pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve)
pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 12.2.12-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-9
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-51
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-42
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-26
pve-cluster: 5.0-37
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-20
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-51
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
