Support for CephFS via NFS (NFS-Ganesha)

DynFi User

Active Member
Apr 18, 2016
We are setting up a large-scale Proxmox-VE Ceph cluster with 4 nodes.
Most of our setup is now complete and the cluster is up and running. This cluster will be using BGP-based L2VPN-EVPN + VXLAN and might be multi-site.

We need to provide file-level access to some VMs in the cluster, and our first thought was to use CephFS for that.
But this seems like a "not so good idea", since the VMs would need to access CephFS directly, and we would have to expose Ceph services which should remain hidden, for obvious security reasons.

We have many networks in our configuration, and exposing the Ceph public network to the VMs is not part of what we expect to do.

After some more searching, I bumped into the NFS-Ganesha project, which is well described here :

While this seems to be an interesting way to solve the access problem while still providing file-level shares to some VMs, I have found no documentation related to this setup in Proxmox-VE.

So my questions are the following:
  1. Can NFS-Ganesha be safely deployed inside a Proxmox-VE Ceph cluster?
  2. Is there any risk tied to deploying this technology inside the cluster?
  3. Do you plan to add support for this technology at some point in the future?
  4. What would be your advised way of providing "file-level access" to the Ceph cluster?
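For context, the kind of export we have in mind would look roughly like the sketch below (the CephX client name, export id, and paths are assumptions on my side, not a tested configuration):

```shell
# Hypothetical sketch: a minimal NFS-Ganesha export backed by CephFS.
# The "nfs.ganesha" user id and the paths are assumed placeholders.
cat > /etc/ganesha/ganesha.conf <<'EOF'
EXPORT {
    Export_Id = 100;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "nfs.ganesha";
    }
}
EOF
```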


Thanks for your help and support.
G.B.
 

DynFi User

Can anyone @proxmox let me know whether this is safe to install, and try to answer the questions above, before I dig further into this?

Thanks
 

aaron

Proxmox Staff Member
Jun 3, 2019
I haven't used the CephFS Ganesha implementation yet, so I have no experience with it, but I took a quick look at the docs. AFAICT one of the nodes will be the NFS server? Or you have some other host that is a client to the Ceph cluster and provides the NFS server (an NFS-Ganesha server host connected to the Ceph public network).

If you install it on PVE directly, would you install it on each node, which would leave you with 4 NFS servers? Unless you use some kind of floating IP?

Of course, I might be misunderstanding the situation here, but what about having one more VM that acts as a file server? You can live-migrate it between nodes, and should the node on which it is running fail, you can use the PVE HA stack to get it back up and running within a few minutes.
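Protecting such a file-server VM with the HA stack is essentially a one-liner (a sketch; VMID 100 is an assumed placeholder for your actual file-server VM):

```shell
# Hypothetical sketch: register an assumed file-server VM (VMID 100)
# as an HA resource, so PVE restarts it on another node if its node fails.
ha-manager add vm:100 --state started --max_restart 2
# Check the current HA state of all managed resources:
ha-manager status
```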
 

DynFi User

I haven't used the CephFS Ganesha implementation yet, so I have no experience with it, but I took a quick look at the docs. AFAICT one of the nodes will be the NFS server? Or you have some other host that is a client to the Ceph cluster and provides the NFS server (an NFS-Ganesha server host connected to the Ceph public network).
Yes, our first concern is that granting direct access to CephFS would have to go through the Ceph public network, which won't suit our requirements in terms of security.

So we thought that NFS-Ganesha might be the right path to take, since only the NFS nodes would connect to CephFS.

We do not plan to have "other hosts" providing NFS services.
As you mentioned, the options are the following:
  1. Use a local NFS-Ganesha on each cluster node
  2. Use a VM and host NFS-Ganesha on it

There seem to be examples of both.

If you install it on PVE directly, would you install it on each node, which would leave you with 4 NFS servers? Unless you use some kind of floating IP?
Yes, probably, or at minimum two or three nodes.
It seems that NFS-Ganesha has the ability to auto-recover from the loss of any member within 90 seconds, which is quite nice.

I don't plan to use a "floating IP", since it doesn't look like it is necessary.

Of course, I might be misunderstanding the situation here, but what about having one more VM that acts as a file server? You can live-migrate it between nodes, and should the node on which it is running fail, you can use the PVE HA stack to get it back up and running within a few minutes.
This is also an option.

But I am not sure how the NFS stack is going to react in the event of a node failure.
 

dcsapak

Proxmox Staff Member
Feb 1, 2016
just fyi, from the help page it looks like the ceph nfs integration needs a ceph orchestrator to be configured (rook/cephadm), which is not really supported on our side AFAIK
 

DynFi User

just fyi, from the help page it looks like the ceph nfs integration needs a ceph orchestrator to be configured (rook/cephadm), which is not really supported on our side AFAIK
So you think that using NFS-Ganesha in Proxmox might not be possible?
 

dcsapak

So you think that using NFS-Ganesha in Proxmox might not be possible?
no, i mean that the ceph nfs-ganesha integration probably won't work, but i did not try it (i have no idea how rook or cephadm would interact with a pve system)
 

DynFi User

Do you have any other solution that would allow VMs to use some CephFS storage, but not through a direct CephFS link, since that would go through the public Ceph network, which is not in line with our strong security policy?
 

DynFi User

just fyi, from the help page it looks like the ceph nfs integration needs a ceph orchestrator to be configured (rook/cephadm), which is not really supported on our side AFAIK
Yes, it looks like this is needed in order to balance the NFS cluster, either as "active/active" nodes or as "active/passive".
https://www.youtube.com/watch?v=jppL51swnRo
easy peasy. make your ceph public traffic interface be a vm bridge and you can attach your vms to it.
From a security standpoint it is not ideal.
This is the reason why I wanted to explore other solutions.
 

DynFi User

Is active/active required for any reason?
It can be active/passive, but it can't be active/nothing.

Remember that you can create other keys with limited access; you're not limited to the default ceph access keys.
Yes, but nonetheless, if I opt for CephFS mounted directly inside the VMs, they'll need access to the Ceph public network.
That is not ideal, since some data might be confidential or need to be contained in a private environment.
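For reference, a restricted CephX key (rather than the default admin key) can be created along these lines (a sketch; the filesystem name `cephfs`, the client name `client.vmshare`, and the subtree are assumptions):

```shell
# Hypothetical sketch: create a key that can only mount the /shared
# subtree of an assumed "cephfs" filesystem, read-write.
ceph fs authorize cephfs client.vmshare /shared rw
# Print the resulting keyring, to be distributed to the client:
ceph auth get client.vmshare
```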

But I probably need to do more tests.

Your advice would be to go for CephFS?
 

alexskysilk

Renowned Member
Oct 16, 2015
I have absolutely no visibility into your use case; I am in no position to give you advice ;) I was only answering the question "Do you have any other solution in order to allow VM to use some CephFS storage"
 

Syrrys

Member
Nov 8, 2020
I just came across nfs-ganesha during a recent round of updates, and I find the idea of providing cleaner access to CephFS for other clients through NFS, rather than installing the full Ceph stack or relying on the ceph-dokan implementation, very appealing.

Any further updates on getting Rook or cephadm playing nice with Proxmox?

Or plans to integrate nfs-ganesha directly into Proxmox? :)

Edit: I've just been following the manual and am getting
Bash:
$ sudo ceph mgr module enable cephadm
Error ENOENT: all mgr daemons do not support module 'cephadm', pass --force to force enablement
$ sudo ceph mgr module enable rook
Error ENOENT: all mgr daemons do not support module 'rook', pass --force to force enablement
I don't know the versions of all the subsystems, but I'm currently fully updated (I think) to Ceph 16.2.7, with PVE nodes on kernel
Linux 5.15.35-1-pve #1 SMP PVE 5.15.35-3 (Wed, 11 May 2022 07:57:51 +0200) and pve-manager/7.2-4/ca9d43cc
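For what it's worth, one way to see why the enable fails is to check what the running mgr daemons actually ship (a sketch; my assumption is that the PVE Ceph packages simply don't include these modules):

```shell
# List the mgr modules the running daemons provide; 'cephadm' and
# 'rook' will be absent if the installed packages don't ship them.
ceph mgr module ls
# Confirm every daemon is running the same Ceph release:
ceph versions
```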

Edit: It looks like there might be an issue with how those orchestrators use containerized services themselves? I might be totally misunderstanding orchestrators and nfs-ganesha...

It seems I might be just as well off building a minimal LXC template that can serve NFS from a bind mount of the host's CephFS, and hiding a few of them behind a load balancer if I need more HA. Otherwise, I foresee things getting a little matryoshka on me if I start jamming too many more layers of containerization into my storage design.
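That LXC idea could be sketched roughly like this (all of it assumptions on my side: CTID 101, the mount paths, the export network, and a container that is allowed to run the kernel NFS server):

```shell
# Hypothetical sketch: bind-mount the node's CephFS mount into an
# assumed LXC container (CTID 101) at /srv/cephfs.
pct set 101 -mp0 /mnt/pve/cephfs,mp=/srv/cephfs

# Then, inside the container (needs nfs-kernel-server installed and a
# privileged container or a suitable AppArmor profile):
echo '/srv/cephfs 10.0.0.0/24(rw,no_subtree_check)' >> /etc/exports
exportfs -ra
```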
 

fxandrei

Active Member
Jan 10, 2013
So I'm guessing there is no safe way to use NFS-Ganesha with Ceph and Proxmox, right?

Having the ability to expose NFS shares from a CephFS directly from the Proxmox dashboard would be heaven :)
 

Toranaga

Active Member
Jun 17, 2017
It's really not difficult. Either install it manually (see https://docs.ceph.com/en/latest/cephfs/nfs/) or in a VM connected to the Ceph public interface.
I tried this on the current Proxmox version with a direct installation.

Do you have examples for /etc/ganesha/ceph.conf and /etc/ganesha/ganesha.conf? VFS does not work; there is a dlopen library error, although the file exists.

And how is Ganesha accessed? Cluster IP, multipath mount?
 

alexskysilk

Well, if you have a VM connected to CephFS and then expose it via NFS (from the VM), that defeats the whole idea.
How so? That's actually the preferred method, as it allows you to isolate the file server from your hypervisor; if you are not using Proxmox for, well, Proxmox, then yes, by all means, use a different distro. Remember that in this case both the Proxmox node and the VM act as CephFS clients in the same manner.

Do you have examples for /etc/ganesha/ceph.conf and /etc/ganesha/ganesha.conf? VFS does not work; there is a dlopen library error, although the file exists.

And how is Ganesha accessed? Cluster IP, multipath mount?
I don't have the means to troubleshoot your installation. As for examples, you should use whatever examples are provided in the instruction set you were following, but remember that those will need to be tailored to your environment.
And how is Ganesha accessed? Cluster IP, multipath mount?
Do you know how hard it is NOT to just post an lmgtfy link? I would advise you to read and understand this: https://github.com/nfs-ganesha/nfs-ganesha/wiki
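That said, the usual starting point is a plain NFSv4 client mount against whatever address your Ganesha instance listens on (a sketch; the IP, pseudo path, and mount point are assumptions, and in an HA setup a floating/cluster IP would go there instead):

```shell
# Hypothetical sketch: mount the Ganesha export from a client VM.
# 192.0.2.10 and /cephfs are assumed placeholders for your setup.
mkdir -p /mnt/cephfs-nfs
mount -t nfs4 -o proto=tcp 192.0.2.10:/cephfs /mnt/cephfs-nfs
```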
 
