Ceph with NFS Export

carlosmp

Renowned Member
Jun 2, 2010
Since I posted the other day that we were going to attempt to set up the ceph/proxmox cluster and use NFS with ceph...has anyone tried this and succeeded/failed? I don't want to waste days/weeks trying to get this going if it's a fool's adventure and can avoid it.

TIA,

Carlos.
 
There should be no conceptual problem in doing such a thing. You have to be aware, however, that this NFS system (bare metal/VM) will be a single point of failure unless you plan on setting up a redundant NFS cluster. That would then introduce the problem that you need multiple RBDs for this cluster, each stored 2-3 times on the Ceph cluster (depending on your replication settings), so the net storage consumption on the Ceph side will ramp up very quickly.
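For reference, a minimal sketch of what such a single NFS gateway over RBD looks like. This is an assumption-heavy example, not a tested config: the image name, size, client subnet, and export path are all made up here.

```shell
# Hedged sketch: one NFS gateway backed by one RBD image.
# If this machine goes down, every NFS client loses the export
# (the single point of failure mentioned above).

# Create a 100 GiB image in the default pool and map it on the gateway
rbd create nfs-disk --size 102400
rbd map nfs-disk                  # shows up as e.g. /dev/rbd0

# Put a filesystem on it and mount it
mkfs.xfs /dev/rbd0
mkdir -p /export/nfs
mount /dev/rbd0 /export/nfs

# Export it over NFS (subnet is an assumption), then reload exports
echo '/export/nfs 192.168.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
```

Note that the RBD image itself is already replicated 2-3x by Ceph, which is where the storage-overhead concern comes from once you add more gateways/images.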
 
Hello,

I have a virtual machine with its disk on a Ceph pool, serving NFS. I tested multiple VMs for redundancy, but a single VM is stable enough for what I need NFS for. It works fairly well.
Are you running "NFS over Ceph/RBD" on a Proxmox 3.2 node?
 
I tried a KVM guest on Ceph RBD exporting NFS mounts (>50 TB) and did not like it. Disadvantages:
- no failover
- bad performance
- complicated capacity extension
- not scalable

Sure, you can work around each problem, but it takes some time and effort. Currently I'm using CephFS (a single MDS is stable) mounted on each Proxmox host and exposed via simfs to OpenVZ containers. Works like a charm over InfiniBand at reasonable performance. Not a single outage encountered. Looking forward to the Giant release, the 3.10 kernel and RDMA :)
Disadvantage: no local Linux page cache -> don't put hot data on CephFS
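A rough sketch of that setup, with every specific (monitor address, secret file path, directory names, container ID 101) being an assumption of mine rather than Patrick's actual config:

```shell
# Hedged sketch: kernel CephFS mount on the Proxmox host,
# then a bind mount into an OpenVZ container (seen inside as simfs).

# Mount CephFS on the host (single-MDS cluster)
mkdir -p /mnt/cephfs
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

# /etc/vz/conf/101.mount -- OpenVZ runs this when container 101 starts;
# it bind-mounts the CephFS directory into the container's root:
#   #!/bin/bash
#   . /etc/vz/vz.conf
#   . ${VE_CONFFILE}
#   mount --bind /mnt/cephfs/data ${VE_ROOT}/mnt/data
```

The bind-mount route is why there is no failover or page-cache benefit per container: the containers share the host's single CephFS mount.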


Regards, Patrick