Proxmox cluster with shared Ceph partition in VMs

scuppasteve

New Member
Mar 14, 2025
I have two CephFS pools in my datacenter, mounted on the nodes at the following paths:
  • ISOs = /mnt/pve/ISOs
  • cephfs = /mnt/pve/cephfs
I was finally able to get a CephFS share mounted in the VM via

Code:
mount -t ceph admin@.cephfs=/ /mnt/cephfs

When I run that command it mounts the ISOs directory, but I wanted to mount the cephfs pool. Any idea where to go next, or why that was the default mount?

Same situation when mounted via fstab.

Code:
192.168.250.111:6789,192.168.250.121:6789,192.168.250.131:6789:/    /mnt/cephfs    ceph    name=admin,secret=xxx,noatime,_netdev    0    2
Not sure what else to post that will help solve this.
 
What does your CephFS configuration look like? It looks like you either shared the wrong pool or gave that user access to the wrong pool. Or perhaps ISOs is just a subfolder on cephfs (which is the default pool name when you create the first one).

You probably shouldn't use the admin user for clients either. I would suggest creating a mount- or client-specific keyring.
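
Something along these lines would give you a keyring that only sees the cephfs filesystem (the client name "vmclient" here is just an example):

Code:
# create a key limited to the "cephfs" filesystem and hand the resulting keyring to the client
ceph fs authorize cephfs client.vmclient / rw > /etc/ceph/ceph.client.vmclient.keyring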
 
Are these two separate CephFS volumes?

What does "ceph fs status" show?

If they are, you need to specify the filesystem name with the fs= option to the mount command.
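
Roughly like this, with your monitor address and secret filled in; fs= picks the filesystem (older kernels call the same option mds_namespace):

Code:
# legacy mount syntax, explicitly selecting the "cephfs" filesystem
mount -t ceph 192.168.250.111:6789:/ /mnt/cephfs -o name=admin,secret=xxx,fs=cephfs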

Code:
root@optiswarm01:/# ceph fs status
ISOs - 5 clients
====
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS 
 0    active  optiswarm03  Reqs:    0 /s    24     21     16     19   
     POOL        TYPE     USED  AVAIL 
ISOs_metadata  metadata  1835k  4792G 
  ISOs_data      data    20.3G  4792G 
cephfs - 4 clients
======
RANK  STATE       MDS         ACTIVITY     DNS    INOS   DIRS   CAPS 
 0    active  optiswarm01  Reqs:    0 /s    17     20     18     22   
      POOL         TYPE     USED  AVAIL 
cephfs_metadata  metadata   181k  4792G 
  cephfs_data      data    12.0k  4792G 
STANDBY MDS 
optiswarm05 
MDS version: ceph version 18.2.4 (2064df84afc61c7e63928121bfdd74c59453c893) reef (stable)

According to Mounting CephFS, I have specified the filesystem name in the command. That is what the "cephfs" in "mount -t ceph admin@.cephfs=/ /mnt/cephfs" is calling out; it doesn't have the fsid because the mount helper (mount.ceph) takes care of that.
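
For reference, the fully spelled-out form of the new syntax would be something like this (fsid and monitor IPs are my cluster's), so the helper has nothing left to look up:

Code:
mount -t ceph admin@13aba2f1-7385-4c74-bae5-f9e00a523604.cephfs=/ /mnt/cephfs -o mon_addr=192.168.250.111/192.168.250.121/192.168.250.131,secret=xxx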
 

/etc/ceph/ceph.conf
Code:
[global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx
        cluster_network = 192.168.251.111/24
        fsid = 13aba2f1-7385-4c74-bae5-f9e00a523604
        mon_allow_pool_delete = true
        mon_host = 192.168.250.111 192.168.250.121 192.168.250.131
        ms_bind_ipv4 = true
        ms_bind_ipv6 = false
        osd_pool_default_min_size = 2
        osd_pool_default_size = 3
        public_network = 192.168.250.111/24

[client]
        keyring = /etc/pve/priv/$cluster.$name.keyring

[client.crash]
        keyring = /etc/pve/ceph/$cluster.$name.keyring

[mds]
        keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.optiswarm01]
        host = optiswarm01
        mds_standby_for_name = pve

[mds.optiswarm03]
        host = optiswarm03
        mds_standby_for_name = pve

[mds.optiswarm05]
        host = optiswarm05
        mds_standby_for_name = pve

[mon.optiswarm01]
        public_addr = 192.168.250.111

[mon.optiswarm02]
        public_addr = 192.168.250.121

[mon.optiswarm03]
        public_addr = 192.168.250.131

As you can see in my reply to the other poster, they are separate, independent pools. I understand your advice; this is my demo install, so I can record the entire workflow and save it for when I break things in the future and can't remember how to fix them. I'll use a normal user going forward.
 
If you use the mount command manually, what does "mount" show afterwards? Are you sure you're not looking at a previous mount? Do you use any helper, config or keyring; are there any files that define a different pool?

It's really hard to troubleshoot without knowing details and states. ceph.conf is largely irrelevant to the CephFS mount. CephFS can't randomly mount a different pool than the one you specified; that information is passed along to your mount command somewhere, the question is where. Your third-party client can be set up in a million ways; what instructions did you follow? Can you compare the state of the CephFS mounts on PVE with the CephFS mount on your third-party client (file listings etc.) and make sure that, post-mount, the state looks the same?
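
Something as simple as this on both sides would already tell you a lot:

Code:
# on the PVE host
mount | grep ceph
ls -la /mnt/pve/cephfs /mnt/pve/ISOs

# in the guest, right after mounting
mount | grep ceph
ls -la /mnt/cephfs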
 
These are the steps I followed for setting up the Client and the Nodes.

I have verified that the mount is active and that it is the ISOs pool, by creating files and checking them on the Proxmox node's console. My question is: what should the mount source for the pool be? In the command I used it was "=/", but what would you expect it to be? I have never passed a share out of Proxmox before.

Code:
mount -t ceph admin@.cephfs=/mnt/pve/cephfs/ /mnt/cephfs
mount -t ceph admin@.cephfs=/pve/cephfs/ /mnt/cephfs
mount -t ceph admin@.cephfs=/cephfs/ /mnt/cephfs
mount -t ceph admin@.cephfs=/ /mnt/cephfs
 
I would expect the root of the CephFS pool to be "/"; where it is mounted on the PVE host is irrelevant (it would not even have to be mounted there at all).

However, if you followed some of the instructions you linked to, then you generated a keyring for the client and copied it over; that keyring specifies which pool(s) your user has access to. Since you don't pass all the information yourself, a helper goes to find it, and the helper probably runs into the 'first' one and uses that information.

Does it list both pools, or are you using separate credentials for each pool? Is your guest's CephFS client also current? Multiple CephFS pools on the same cluster weren't a thing before reef(?), so potentially your CephFS client may not 'understand' the syntax of specifying a pool like that.
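
A quick check on one of the nodes would be something like this, to see which filesystems and pools exist and what your key is actually allowed to touch:

Code:
# list filesystems with their metadata and data pools
ceph fs ls

# show the caps attached to the credentials you mount with
ceph auth get client.admin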