Best way to access CephFS from within VM (high perf)

DynFi User (Active Member, joined Apr 18, 2016, dynfi.com)
We have a large 4-node cluster with about 419 TB split into two main pools: one on NVMe-based disks and another on SSDs.

We plan to use the NVMe RBD pool to store our VMs and the SSD pool to store shared data.
The shared data will be very large, with over 100 million files.

Besides the security issues already discussed in this thread, I would like your point of view on:

  • which method would you recommend for accessing these files from within the VMs?
    • bind mounts? (not so sure about performance…)
    • direct CephFS access from within the VMs (with access to the Ceph public network)
    • NFS via NFS-Ganesha, already discussed in a previous thread (which seems to lack…)
    • FUSE mounting
    • yet another solution…

Considering the volume of data and the number of files to be shared and accessed through this method, we need something:
  • robust enough not to fail under high load
  • shared among all nodes
  • able to meet the high-availability standards set by Ceph

Thanks for your support and help.
 

DynFi User
Does nobody have any advice here?
 

ph0x (Renowned Member, joined Jul 5, 2020, /dev/null)
I went with the second option: I created a bridge for the Ceph public network (it's actually a VLAN in vmbr0) and assigned a NIC on it to the VMs that need to access the CephFS.
Whether this meets all your robustness requirements in production, I can't say, but for me it has been working solidly.
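For anyone trying the same approach, a minimal sketch of what the in-VM kernel mount can look like (the monitor IPs, client name, and paths here are placeholders, not details from this thread):

Code:
# Inside the VM, with ceph-common installed and the cluster's ceph.conf
# plus a CephX keyring copied over from a Ceph node:
mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/cephfs \
    -o name=guestfs,secretfile=/etc/ceph/guestfs.secret

The secretfile contains only the base64 key of the (hypothetical) client.guestfs user, which needs read/write caps on the filesystem.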
 

DynFi User
ph0x said:
I did it with the second option. Created a bridge for Ceph public (it's actually a VLAN in vmbr0) and assigned a NIC to the VMs that need to access the CephFS.
If this meets all your robustness requirements in production, I don't know, but for me it had been working solidly.
Thanks for your feedback, it is much appreciated.

I'll dig further in this direction and run some tests.

I have almost finished the NFS-Ganesha setup, which is up and running.
The nice thing about it is that it has no access to the Ceph public network from outside the hypervisor (which is much better from a security standpoint).

I'll need to benchmark both to see which one is more efficient.
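For reference, NFS-Ganesha exports CephFS through its CEPH FSAL. A minimal sketch of a ganesha.conf export block (the export ID, pseudo path, and CephX user are examples, not the poster's actual configuration):

Code:
EXPORT {
    Export_ID = 100;
    Path = "/";              # path inside CephFS to export
    Pseudo = "/cephfs";      # path NFSv4 clients mount
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "ganesha"; # CephX client, without the "client." prefix
    }
}

Since Ganesha itself talks to the Ceph cluster, only the host it runs on needs access to the Ceph public network, which matches the security point above.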
 

ph0x
DynFi User said:
Nice thing about this is that It has no access to the Ceph Public Network from outside the Hypervisor (which is much better from security standpoint).
It surely depends on your threat model, but to me a VM running in PVE that serves CephFS shares is neither more nor less secure, in terms of Ceph security, than the host serving NFS-Ganesha.
I need the shares in a heterogeneous environment and therefore went with CIFS, whereas a Linux-only environment will probably be better off with NFS.

DynFi User said:
I'll need to benchmark both in order to see which one is more efficient.
It would be appreciated if you shared your insights afterwards!
 

DynFi User
@ph0x Thanks for the info.

I am struggling a bit with FUSE access to CephFS from within a VM.

The command referenced in the documentation didn't seem to work.

I am having a hard time finding Proxmox documentation on these FUSE mounts.
It feels like a taboo subject, or a feature that is only half supported.

If you have anything resembling a rough guide, or the CLI commands you run to mount the FS through FUSE, I'd be very interested.
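In case it helps, the FUSE client is a separate ceph-fuse binary rather than a mount type. A sketch of an invocation (the client name, keyring path, and monitor addresses are assumptions, not from this thread):

Code:
apt install ceph-fuse

ceph-fuse -n client.guestfs \
    -k /etc/ceph/ceph.client.guestfs.keyring \
    -m 10.0.0.1:6789,10.0.0.2:6789 \
    /mnt/cephfs

The kernel client (mount -t ceph) is generally faster than the FUSE client, so it is worth benchmarking both.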
 

dlasher (Active Member, joined Mar 23, 2011)
I've had good luck using bind mounts in LXC containers; performance is really solid, assuming you have high-performance disks.
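A sketch of how such a bind mount can be added to a container (the container ID, mount-point index, and paths are examples):

Code:
# Bind-mount the host's CephFS mount into container 101 as /shared
pct set 101 -mp0 /mnt/pve/cephfs/shared,mp=/shared

This is equivalent to an mp0: line in /etc/pve/lxc/101.conf; the container then sees the path directly, without needing any Ceph credentials of its own.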
 

dlasher
I am seeing an issue, however, with CephFS performance in VMs when one of the "mounted" monitor IPs is down. For example:

Code:
198.18.53.101,198.18.53.102,198.18.53.103,198.18.53.104,198.18.53.105:/  /mnt/pve/cephfs

When .103 was offline for a while today (it crashed), VMs using things mounted under that path, on the OTHER hosts, couldn't write. I'm guessing this is a setting I need to tweak in the pool?
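The kernel client is normally expected to fail over between the listed monitors on its own, so the mount options may be worth checking before the pool settings. A sketch, assuming a reasonably recent kernel (the recover_session option exists in mainline 5.4+):

Code:
# /etc/fstab
198.18.53.101,198.18.53.102,198.18.53.103:/  /mnt/pve/cephfs  ceph  name=admin,recover_session=clean,_netdev  0  0

recover_session=clean lets a client whose MDS session went stale (for example because it was blocklisted during the outage) drop and re-establish the session instead of blocking writes indefinitely.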
 
