CephFS share via NFS to VMware

Pravednik
Hello,

Sorry in advance for the dumb questions, but I can't find anything in the docs about how to connect VMware 5.5 hosts to CephFS via NFS; everything points to the official Ceph site. As it is a production cluster, I'm afraid of doing something very bad.

I have a 10-node PVE cluster and a Ceph cluster with 40 SSD OSDs and 10 SAS OSDs for Dev&Test.
I have 2 pools: SSD-Pool (custom SSD_replication_rule) and SAS-Pool (custom SAS_replication_rule).

I clicked Create CephFS, and PVE created the MDS server. Then I created 2 pools, Ceph_FS_Data and Ceph_FS_metadata, but both of them were created with the default replication rule. I can't understand which OSDs the data will be located on, since the OSD types in the cluster differ (SSD, HDD).
Also, I can't delete these pools: mon_command failed - pool 'Ceph-SAS-FS_data' is in use by CephFS. But it isn't connected to the PVE cluster.

However, I can't understand what to do next. According to the Ceph documentation I need to configure nfs-ganesha to share CephFS via NFS, but it's not part of PVE by default. Is it supported, and can I install it like any other package? If so, after installing and configuring it, can I connect the VMware 5.5 hosts to the NFS share via the MDS?

I need to create 2 separate CephFS pools on SSD and SAS and place some VMs from the VMware cluster on CephFS.

Sorry if my question is quite messy, this is my first time working with CephFS.

Thanks in advance for your answers.
 
Sorry in advance for the dumb questions, but I can't find anything in the docs about how to connect VMware 5.5 hosts to CephFS via NFS; everything points to the official Ceph site. As it is a production cluster, I'm afraid of doing something very bad.
This is a very specific setup. Most people use the iSCSI gateway directly through Ceph.

Then I created 2 pools, Ceph_FS_Data and Ceph_FS_metadata, but both of them were created with the default replication rule. I can't understand which OSDs the data will be located on.
The default rule includes all OSDs. You will need to set a rule manually for each pool.
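For example, device-class based rules can be created and inspected roughly like this (a sketch only; the rule names are made up, and it assumes your SAS OSDs report the hdd device class):

# create one replicated rule per device class (root "default", failure domain "host")
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_sas default host hdd

# verify which OSDs each class contains and how a rule looks
ceph osd crush tree --show-shadow
ceph osd crush rule dump replicated_ssd

A pool placed on such a rule will only use OSDs of that device class; see further below for applying a rule to the existing CephFS pools.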

However, I can't understand what to do next. According to the Ceph documentation I need to configure nfs-ganesha to share CephFS via NFS, but it's not part of PVE by default. Is it supported, and can I install it like any other package? If so, after installing and configuring it, can I connect the VMware 5.5 hosts to the NFS share via the MDS?
NFS-Ganesha is neither included in nor supported by Proxmox. NFS-Ganesha interfaces directly with Ceph and doesn't need any mounted filesystem for its exports. You will also want to make the NFS server redundant, in case the node running NFS dies. Anyhow, RBD would be the right storage for VM images. As you already run PVE, why not migrate the VMs and run them there directly?
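If you do go that route on your own, a minimal ganesha.conf export backed by FSAL_CEPH might look roughly like the following (an untested sketch; the export ID, pseudo path and cephx user are placeholders, and ESXi 5.5 only speaks NFSv3, hence Protocols = 3):

EXPORT {
    Export_ID = 100;
    Path = "/";                 # path inside CephFS
    Pseudo = "/cephfs";         # what the clients mount
    Access_Type = RW;
    Squash = No_Root_Squash;
    Protocols = 3;              # ESXi 5.5 supports NFSv3 only
    Transports = "TCP";
    FSAL {
        Name = CEPH;            # talk to CephFS directly via libcephfs
        User_Id = "admin";      # cephx user, without the "client." prefix
    }
}

The ESXi hosts would then mount the pseudo path from whichever node runs ganesha, which is exactly why you want that service highly available.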
 
Thanks a lot for your time.

Step by step.

1. iSCSI gateways? You mean this one: https://docs.ceph.com/docs/mimic/rbd/iscsi-targets/
Does Proxmox support it OK? I can only use software iSCSI; some nodes' cards can't handle HW iSCSI.

2. OK, but I can't delete the existing pool :( I'll try from the CLI.

3. We can't migrate all clients from VMware, as we need to back up the VMs every night and we can't use agents. PVE does not support incremental backups (differential, in your terminology). This is just one reason why we can't forget VMware :(
 
1. iSCSI gateways? You mean this one: https://docs.ceph.com/docs/mimic/rbd/iscsi-targets/
Does Proxmox support it OK? I can only use software iSCSI; some nodes' cards can't handle HW iSCSI.
No, this is out of scope. But we ship all the Ceph packages for your own endeavor.

2. OK, but I can't delete the existing pool :( I'll try from the CLI.
Rules can be set afterwards, no need to delete pools.
https://docs.ceph.com/docs/nautilus/rados/operations/pools/#set-pool-values
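In practice that boils down to something like this (a sketch; the pool, filesystem and directory names are assumptions based on your post):

# move an existing pool onto a device-class rule; Ceph rebalances the data itself
ceph osd pool set Ceph_FS_data crush_rule replicated_ssd
ceph osd pool set Ceph_FS_metadata crush_rule replicated_ssd

# for the SSD/SAS split: add a second data pool to the same CephFS
ceph fs add_data_pool cephfs Ceph-SAS-FS_data
# and pin a directory to it via a file layout (needs the FS mounted and the attr package installed)
setfattr -n ceph.dir.layout.pool -v Ceph-SAS-FS_data /mnt/cephfs/sas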

3. We can't migrate all clients from VMware, as we need to back up the VMs every night and we can't use agents. PVE does not support incremental backups (differential, in your terminology). This is just one reason why we can't forget VMware :(
Well, we know about these features and are evaluating options. In the meantime you may need to export the snapshots directly from Ceph.
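For RBD-backed VM disks that usually means something along these lines (a sketch; pool, image and snapshot names are made up):

# full backup once
rbd snap create vm-pool/vm-100-disk-0@base
rbd export vm-pool/vm-100-disk-0@base /backup/vm-100-disk-0.img

# nightly incremental: only the delta between two snapshots is written out
rbd snap create vm-pool/vm-100-disk-0@snap1
rbd export-diff --from-snap base vm-pool/vm-100-disk-0@snap1 /backup/vm-100-disk-0.diff

# restore side: rbd import the full image, then apply the deltas in order with rbd import-diff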
 
@Alwin Thanks a lot for your reply. We'll try to do it.

Thanks. Yes, I know about this product. But as you understand, the backup process is not only creating a VZDump and then backing up the delta; it also includes tape archiving, syncing with the disaster recovery site, and so on. It's not easy to implement the whole B&R process with scripts. We have already migrated all clients whose SLA is not so high to PVE, but a couple of clusters we have to back up with NetBackup.
 
It's not easy to implement the whole B&R process with scripts.
Nothing worth doing is easy. VMware doesn't do this by itself either; you're depending on third-party software to do so. The better question, in my view, is why you are trying to fix what isn't broken (except that vSphere 5.5 has been end of support for almost 2 years now and should be considered broken by itself).

If you're trying to migrate away from vSphere, then it's worth going through the trouble of deploying a new backup/DR pipeline. If you want to stay on VMware for PRODUCTION, I wouldn't recommend shoehorning Proxmox into being a scale-out SAN solution for VMware, as it is not designed for this purpose. Bear in mind that, at best, there would be no API communication between vSphere and Proxmox, which means all backup traffic would take place across your regular VMkernel switches, which means it will be slow.
 
