PVE 6.2 - CephFS - Problems mounting /var/lib/vz via fstab

We really appreciate the flexibility Ceph provides and typically set up our clusters to use sparse RBD images, with templates residing in a Ceph file system that is concurrently mounted on all our nodes.

Since PVE 6.2 we are unable to mount CephFS via fstab; it reports that 'nonempty' is an unknown parameter. The parameter does, however, still work via the CLI:

Entry in fstab that generates the error:
Code:
id=admin,conf=/etc/ceph/ceph.conf /var/lib/vz fuse.ceph defaults,_netdev,noauto,nonempty,x-systemd.requires=ceph.target 0 0

Error:
Code:
May 17 15:32:57 kvm1b kernel: [327894.914577] fuse: Unknown parameter 'nonempty'


If we remove ',nonempty' from /etc/fstab we can then mount it via the CLI:
Code:
[admin@kvm1b ~]# mount -o nonempty /var/lib/vz
2020-05-17 15:32:32.094 7f477ccf3f40 -1 init, newargv = 0x559af240c4f0 newargc=9
ceph-fuse[2400850]: starting ceph client
ceph-fuse[2400850]: starting fuse
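
If the installed ceph-fuse is built against libfuse3, which dropped the 'nonempty' option and mounts over non-empty directories by default, and the kernel's newer mount-option parser simply rejects the unknown parameter, then removing the option from the fstab entry may already be enough. An untested sketch:
Code:
id=admin,conf=/etc/ceph/ceph.conf /var/lib/vz fuse.ceph defaults,_netdev,noauto,x-systemd.requires=ceph.target 0 0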



Herewith our reference notes to create a shared CephFS which is simultaneously mounted by all nodes. We primarily use this as a shared 'ISO' storage folder, which allows migration of virtual machines that have ISO images attached:
Code:
CephFS:
  We co-locate CephFS metadata servers on monitor nodes:
    pico /etc/ceph/ceph.conf
      [mds]
           mds data = /var/lib/ceph/mds/$cluster-$id
      [mds.kvm1a]
           host = kvm1a
      [mds.kvm1b]
           host = kvm1b
      [mds.kvm1c]
           host = kvm1c
  On each node (kvm1a, kvm1b & kvm1c):
    id='kvm1a';                        # set to the local node's hostname (kvm1b / kvm1c on the other nodes)
    apt-get -y install ceph-mds;
    mkdir -p /var/lib/ceph/mds/ceph-$id;
    ceph auth get-or-create mds.$id mds 'allow' osd 'allow *' mon 'allow rwx' > /var/lib/ceph/mds/ceph-$id/keyring;
    chown ceph.ceph /var/lib/ceph/mds -R;
    systemctl enable ceph-mds@$id;
    systemctl start ceph-mds@$id;
    systemctl status ceph-mds@$id;
  Create CephFS (on any of the metadata nodes):
    ceph osd pool create cephfs_data 16;
    ceph osd pool create cephfs_metadata 16;
    ceph fs new cephfs cephfs_metadata cephfs_data;
  On all PVE nodes:
    pico /etc/fstab;
      id=admin,conf=/etc/ceph/ceph.conf /var/lib/vz fuse.ceph defaults,_netdev,noauto,nonempty,x-systemd.requires=ceph.target 0 0
    ceph fs status
 
/var/lib/vz is the default storage directory and is used as such in many setups. It is not recommended to mount a network storage over it.
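
For reference, a stock installation ships with a directory storage pointing at that path, roughly this entry in /etc/pve/storage.cfg:
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup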

> Since PVE 6.2 we are unable to mount CephFS via fstab; it reports that 'nonempty' is an unknown parameter. The parameter does, however, still work via the CLI.
The directory is not empty, as the common sub-directories are created there by default. And with an fstab mount, the Proxmox VE stack is not aware of whether it is mounted.

Best use the default storage location /mnt/pve/<storage-id> for CephFS. The CephFS storage plugin will take care that it is mounted and knows which state it is in.
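
For a hyperconverged cluster such as the one described above, registering the existing CephFS as a PVE storage can be as simple as the following (the storage id 'cephfs-iso' is only an example name); PVE then mounts it on every node under /mnt/pve/cephfs-iso:
Code:
pvesm add cephfs cephfs-iso --content iso,vztmpl
pvesm status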

> Herewith our reference notes to create a shared CephFS which is simultaneously mounted by all nodes. We primarily use this as a shared 'ISO' storage folder, which allows migration of virtual machines that have ISO images attached.
The CephFS storage plugin provides the same feature.
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_cephfs
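
On a hyperconverged setup, the manual MDS and file-system creation steps from the reference notes above can largely be replaced by PVE's own tooling as well; a rough sketch:
Code:
pveceph mds create                  # run on each node that should host a metadata server
pveceph fs create --add-storage     # creates the pools and the CephFS, and registers it as a PVE storage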