[SOLVED] CEPH cluster Proxmox

noname01 · New Member · Feb 6, 2024
Hello. To set the context: I have a Proxmox cluster with 3 nodes and a separate Ceph cluster with 3 nodes; everything works fine independently.

But before linking the Ceph cluster and the Proxmox cluster (via CephFS), I want a Linux/Windows client to use the Ceph cluster as if it were network storage.

I've tried this, but on Linux all I get is errors. On Windows I installed Dokan and the Ceph client software, but it doesn't work either; nothing but errors.

I haven't found a tutorial more recent than 2022, so my question is: is this still possible? If it is and you're willing to help me, tell me whether you need logs, error messages, or anything else.

Thanks !
 
It's possible to share data from your Ceph cluster with NFS: https://docs.ceph.com/en/latest/cephfs/nfs/
If you share via NFS, you can mount the NFS share inside the VM and interact with it like regular network storage instead of trying to expose CephFS directly.
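In rough strokes, and assuming a reasonably recent (Quincy or newer) cluster managed by cephadm, the export from that doc page looks like this; the cluster name "mynfs", the pseudo-path "/data", and the filesystem name "cephfs" are just example values:

Code:
# create an NFS-Ganesha service managed by the Ceph orchestrator
ceph nfs cluster create mynfs
# export the CephFS filesystem "cephfs" under the pseudo-path /data
ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /data --fsname cephfs
# then, on the Linux client, mount it like any other NFS share
mount -t nfs -o nfsvers=4.1 <ganesha-host>:/data /mnt/data

Note that ceph nfs cluster create relies on the Ceph orchestrator, so if your Ceph cluster isn't deployed with cephadm you may have to run NFS-Ganesha yourself.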
 
Any advantages to doing this vs. giving a VM a Ceph block device (RBD) and sharing it out through NFS? I currently have an OpenMediaVault server sharing NFS from block storage. I was looking to cut out OMV and share directly with CephFS. I was actually researching how to do this, but it seems I may introduce complications that aren't worth it.
 
The question here is what you are currently doing with the NFS share and what you expect from CephFS. Also relevant is what you might want to do with it in the future.

CephFS is implemented natively by Ceph, so no additional service or VM is required in between. However, it also has certain disadvantages; for example, it is not well suited to very large numbers of small files. There is a practical limit of around 100,000 files that you should not exceed.

It is ideal for sharing ISOs, VM images, etc. Even if you run a web server farm and keep the htdocs directory there, it can still work. But if you start building a file server out of it, or want to back up entire servers to it using rsync, it gradually stops being a good fit.

I was actually researching how to do this, but it seems I may introduce complications that aren't worth it.
Can you elaborate on this? CephFS can be up and running on Proxmox VE within two minutes. Your VM must have access to the Ceph network in order to use it, but it's nothing dramatically complex.
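For reference, on a node where Ceph is already set up, those two minutes look roughly like this ("cephfs" is just the default name):

Code:
# create a metadata server on this node, then the filesystem itself;
# --add-storage also registers the new CephFS as a Proxmox VE storage
pveceph mds create
pveceph fs create --name cephfs --add-storage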
 
I basically want a file server: hosting ISOs, VM images, plus other types of files (images, videos, etc.).

I had a pool running but wasn't sure how to export a mount, or how that mount would fail over if I lost a node. Basically: how should the client connect? I think it's the architecture I'm struggling with.

If a node fails, do I need multiple monitor addresses in the client connection? Is the monitor what actually serves the data?

New to Ceph.
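For what it's worth, the mount syntax I've seen so far lists all the monitors up front, something like this (made-up IPs, and assuming the admin keyring has been copied to the client):

Code:
# kernel CephFS mount; the client tries the listed monitors in turn,
# so losing a single node shouldn't break the mount
mount -t ceph 10.0.0.1,10.0.0.2,10.0.0.3:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

But I'm not sure if that's the right mental model.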
 

This is exactly where I'm at.

I'm a total newb when it comes to Proxmox and pretty green on virtualization in general; I've played with Hyper-V and making VHDXs, but that's about it. I've been ideating on a way to build a high-availability storage system and came across GlusterFS with TrueNAS, but now Gluster is being abandoned, and Ceph seems like the best-supported platform for data redundancy across physical nodes.

But I can't seem to find any good documentation on how to use a Ceph pool as a target for NAS SMB/CIFS storage.

So I just said eff it, and I tried to do it myself. Insert gif of dog floating in outer space.

I set up a test 3-node Proxmox VE cluster and configured Ceph with size 3 / min_size 2 replication. Each node has 2 drives set up as Ceph OSDs, and I've created 2 Ceph pools for different use cases.
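(I did most of this through the GUI, but from the CLI a pool like that would look roughly like this; "Ceph-Sharing" is one of my pool names:)

Code:
# replicated pool with 3 copies, still serving I/O with 2 left,
# registered as a Proxmox VE storage
pveceph pool create Ceph-Sharing --size 3 --min_size 2 --add_storages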

This won't be a long-term setup, but I spun up a new TrueNAS Scale VM on my first node and gave it a couple of CPUs and half the RAM, and once I got into the weeds configuring it, sure enough, the two pools appeared as storage I could create partitions on! After figuring out how to create shares in TrueNAS (also a newb there), I shared a folder out to my Windows machines and got a transfer going.

Now, just to set the stage here: this is not an ideal setup. Each of these nodes has an i5-3570 and 16 GB of DDR3, the VM host is on a SATA SSD, and they're connected with a single GbE NIC, so no dedicated backhaul for Ceph yet.

But, all that aside, it fucking works. File transfers go from my GbE workstation to the TrueNAS VM on Prox1, passing through to the Ceph cluster on Prox1-3.


Next up, I wanted to test whether this thing would actually survive a 'disaster.' I started a 5 GB file transfer and then sent a shutdown command to Prox3. Transfer speed fell but the transfer continued, and afterward the transferred file showed a matching SHA hash.


So I'm very excited about this. I'm ordering some dual SFP+ NICs for the nodes, and I plan on moving the TrueNAS VM to a PVE host that isn't part of the Ceph cluster, then figuring out failover in case the TrueNAS host fails.

 
@RyanMM - What is your plan for your long-term solution? I ask because I have a 3-node Ceph cluster and would also like to have NFS and CIFS shares backed by the Ceph storage pool.
 
I'm still kinda building out the idea; it's gonna depend on how things perform once I add more nodes and a faster, separate backhaul.

The nodes I have aren't very powerful, so I don't think hosting the TrueNAS VM on them makes sense. I have an old OCP server that's a beast with RAM and cores, though, which I currently use for Hyper-V. Making that another member of the Prox cluster, but using it only for VMs like TrueNAS, combined with some good backhauls for the Ceph cluster, should make a really good backup/storage target. The nodes at that point just need to be dumb interfaces to the drives for Ceph.

I also gotta figure out how to make the TrueNAS VM fail over if needed.
 
You can use the HA features of Proxmox to have the VM fail over to another node automatically. My backhauls are not very quick right now, only LACP-bonded 1GbE links, but it works pretty decently for my needs, and I'll be scaling up to better hardware once I have a full plan for the software architecture and know exactly where I need to spend hardware money.
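Assuming the VM's disks live on the shared Ceph pool and it has, say, VM ID 100, it's a one-liner:

Code:
# register the VM as an HA resource; Proxmox VE restarts it on
# another node if its current node goes down
ha-manager add vm:100 --state started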

I wonder if there is a way to mount CephFS directly in TrueNAS like you can in Debian... though I know TrueNAS is more of an appliance-type system. Or should I use something like Debian 12 with Cockpit or Webmin to serve NFS and CIFS?
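On Debian the CephFS mount can also just go in /etc/fstab, something like this from what I've seen (same made-up monitor IPs as the examples above):

Code:
# /etc/fstab entry; _netdev delays the mount until the network is up
10.0.0.1,10.0.0.2,10.0.0.3:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0 0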
 
I'm not using CephFS, as far as I know. The Ceph pool is what's being mounted.

 

You are creating two virtual disks, I assume: one for the TrueNAS OS and the other as your data disk.
 
Right now the TrueNAS VM is actually on the host OS SSD; that's scsi0. I'm guessing I'm gonna have to use replication or something to get it onto the other hosts' SSDs for failover.

scsi1 is the disk on the Ceph-Sharing pool.
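Though maybe instead of replication I can just move scsi0 onto the Ceph pool too, so any node can start the VM. If I'm reading the docs right, that would be something like this (assuming the VM has ID 100):

Code:
# move the OS disk from the local SSD onto the shared Ceph pool
# (older PVE versions call this "qm move_disk")
qm disk move 100 scsi0 Ceph-Sharing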
 
