mount error: exit code 16 (500) on cephfs mount

ronweister

mount error: exit code 16 (500)

I'm getting this error on an external CephFS mount with an EC data pool.

Can anyone help?
 
I'm facing the same issue with the latest 5.4.

syslog
Code:
May  7 14:44:27 pve-node4 pvestatd[2269]: A filesystem is already mounted on /mnt/pve/cephfs-backup
May  7 14:44:27 pve-node4 pvestatd[2269]: mount error: exit code 16
May  7 14:44:37 pve-node4 pvestatd[2269]: A filesystem is already mounted on /mnt/pve/cephfs-backup
May  7 14:44:37 pve-node4 pvestatd[2269]: mount error: exit code 16

Code:
root@pve-node4:~# df -h
Filesystem                                                                 Size  Used Avail Use% Mounted on
udev                                                                        63G     0   63G   0% /dev
tmpfs                                                                       13G  266M   13G   3% /run
/dev/mapper/pve-root                                                        15G  3.0G   11G  22% /
tmpfs                                                                       63G   63M   63G   1% /dev/shm
tmpfs                                                                      5.0M     0  5.0M   0% /run/lock
tmpfs                                                                       63G     0   63G   0% /sys/fs/cgroup
/dev/fuse                                                                   30M   44K   30M   1% /etc/pve
/dev/sde1                                                                   94M  5.5M   89M   6% /var/lib/ceph/osd/ceph-28
/dev/sdf1                                                                   94M  5.5M   89M   6% /var/lib/ceph/osd/ceph-29
/dev/sdd1                                                                   94M  5.5M   89M   6% /var/lib/ceph/osd/ceph-27
10.10.10.101:6789,10.10.10.102:6789,10.10.10.103:6789,10.10.10.104:6789:/  7.2T  4.2T  3.0T  59% /mnt/pve/cephfs-backup
/dev/sda1                                                                   94M  5.5M   89M   6% /var/lib/ceph/osd/ceph-30
/dev/sdb1                                                                   94M  5.5M   89M   6% /var/lib/ceph/osd/ceph-31
/dev/sdc1                                                                   94M  5.5M   89M   6% /var/lib/ceph/osd/ceph-32
tmpfs                                                                       13G     0   13G   0% /run/user/0

If I "cd" to "/mnt/pve/cephfs-backup" I can see all the data (so this resource is already mounted but not recognized by pvestat

Any ideas?
 
What does your storage.cfg look like? And did you try to unmount the storage to see whether it gets mounted again by itself?
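A simple way to test that, assuming pvestatd re-mounts active storages on its next status cycle (normally within a few seconds):

Code:
umount /mnt/pve/cephfs-backup
sleep 30
findmnt /mnt/pve/cephfs-backup    # should show the CephFS mount again if pvestatd re-mounted it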
 
Code:
root@pve-node2:~# cat /etc/pve/storage.cfg 
dir: local
        disable
        path /var/lib/vz
        content images
        maxfiles 0
        shared 0

zfspool: local-zfs
        disable
        pool rpool/data
        blocksize 8k
        content images
        nodes pve-node2,pve-node3,pve-node1
        sparse 0

zfspool: data-zfs
        disable
        pool tank/data
        blocksize 8k
        content images,rootdir
        nodes pve-node2,pve-node3,pve-node1
        sparse 0

rbd: rbd
        content images
        krbd 0
        pool rbd

cephfs: cephfs-backup
        path /mnt/pve/cephfs-backup
        content backup,vztmpl,iso
        maxfiles 3
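
For comparison, an external CephFS (as in the original post) would normally also carry the monitor addresses and the auth user in its entry; a rough sketch with placeholder values (the client secret would typically go to /etc/pve/priv/ceph/<storage-id>.secret):

Code:
cephfs: cephfs-external
        path /mnt/pve/cephfs-external
        content backup,vztmpl,iso
        monhost 10.10.10.101 10.10.10.102 10.10.10.103
        username admin
        maxfiles 3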
 
What does your storage.cfg look like? And did you try to unmount the storage to see whether it gets mounted again by itself?

Unmounting "/mnt/pve/cephfs-backup" works (temporarily) on all nodes but one.

Code:
root@pve-node3:~# umount /mnt/pve/cephfs-backup
umount: /mnt/pve/cephfs-backup: target is busy
        (In some cases useful info about processes that
         use the device is found by lsof(8) or fuser(1).)

However, after several hours the error comes back on all the other nodes.
 
On node3 something is still accessing the mount; could you find out what it is? And could you please post the output of 'pveversion -v'?
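To find out what is holding the mount point busy, standard tools should do (lsof, when given a mount point, lists everything open on that filesystem):

Code:
# processes with open files on the CephFS mount
fuser -vm /mnt/pve/cephfs-backup
lsof /mnt/pve/cephfs-backup

# and the requested version information
pveversion -v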
 
