Ceph Pool and CephFS setup question [NUC homelab]

chrisj2020

Member
Jul 8, 2020
Hi folks,

Can I ask for your help? I'm building my new homelab setup and I don't think I'm understanding the sizing aspects of the Ceph pool and CephFS. I have a 3-node NUC10 cluster; each NUC has three network interfaces (I'm using a StarTech dual-port adaptor). I have the Proxmox cluster up, along with Ceph installed on all nodes. Everything looks good at this point.

So I now want to create the Ceph pool and CephFS. Each NUC has 1 x 500GB NVMe drive (this hosts the Proxmox install), along with 1 x 1TB SATA SSD. On the network, I also have iSCSI and NFS presentation of an 8TB RAID5 volume. This holds most of my VM and container data mounts, so I expect the Ceph store to really only hold the container and VM OS volumes.

So given that information, the questions I have are below:

1. Do I need to create both a Pool and CephFS? Am I right in understanding that a Pool is there for ISO/Container images only, and CephFS is for the volumes in my Containers/VMs?
2. Can I use the iSCSI or NFS for ISO/Container images?
3. Given the limited storage space I have, how would you configure this?

Thanks in advance,
Chris
 
1. Do I need to create both a Pool and CephFS? Am I right in understanding that a Pool is there for ISO/Container images only, and CephFS is for the volumes in my Containers/VMs?
No, it's rather the other way around. A pool is always needed, and an RBD pool will hold the VM/CT disks; CephFS is what you would use for ISOs and CT templates (it needs its own pools and an MDS).
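
For example, a minimal CLI sketch (the pool name is just an example; double-check the option spellings against pveceph help on your PVE version):

Code:
    # a metadata server is required before CephFS can be created
    pveceph mds create
    # replicated pool, exposed as RBD storage for VM/CT disks
    pveceph pool create vm-pool --add_storages
    # CephFS, added as storage for ISOs, CT templates and backups
    pveceph fs create --name cephfs --add-storage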

2. Can I use the iSCSI or NFS for ISO/Container images?
Yes, up to you.
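
For instance, an NFS export can be added as ISO/template storage with something like this (storage ID, server and export path are placeholders):

Code:
    pvesm add nfs nas-iso --server 192.168.1.50 --export /export/iso --content iso,vztmpl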

3. Given the limited storage space I have, how would you configure this?
You didn't really share any limits, just the maximum capacity (a limit in its own right) on the disks.
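
Once the OSDs are in, you can check what you actually have to work with; with the default size=3 replication, 3 x 1TB SSDs give you roughly 1TB of usable space:

Code:
    # overall raw vs. usable capacity and per-pool usage
    ceph df
    # per-OSD breakdown (weight, utilisation, PGs)
    ceph osd df tree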

Best go through the docs to get some insight into how Ceph works. You will need it when disaster strikes. ;)
https://pve.proxmox.com/pve-docs/chapter-pveceph.html
https://docs.ceph.com/en/latest/architecture/
 
Thanks @Alwin,

I appreciate the help (and the links, which I've taken a look through). I've tested the configuration by cutting off a node from the 3-node cluster, and I'm really impressed with Ceph. It responded to the node failure and brought up the services on one of the remaining nodes. But at the same time, I can see why a 10GbE LAN is recommended. I have been telling myself I don't need a 10GbE setup for a homelab; the upgrade costs are not easy to swallow!

I'm currently running three 1GbE NICs on each node: NIC-1 is for VMs connecting to my home LAN, NIC-2 is for NFS mounts and Proxmox migration traffic, and NIC-3 is for Ceph only. NIC-2 and NIC-3 come from the dual-port StarTech Thunderbolt adaptor. If I were to replace this with a single 10GbE adaptor (leaving the existing NIC-1 in place), is there any guidance on how to transition the network configuration over for Ceph?

What I think I need to do is the following:

1. Pull the existing dual-port adaptor on one of the nodes (Ceph will error out).
2. Insert the new 10GbE adaptor, assign the IP etc.
3. Modify the ceph.conf configuration file so that "public_network" and "cluster_network" now share the same network CIDR (see the excerpt after this list).
4. Modify the "migration:" attribute in the datacenter.cfg file to reflect the 10GbE CIDR.
5. Reboot.
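
For reference, the relevant pieces would end up looking roughly like this (the 10GbE CIDR below is a placeholder for whatever subnet you use):

Code:
    # /etc/pve/ceph.conf, [global] section
    public_network = 10.10.10.0/24
    cluster_network = 10.10.10.0/24

    # /etc/pve/datacenter.cfg
    migration: type=secure,network=10.10.10.0/24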

Then re-do the above for the other nodes, one by one. Does that sound right?

Kind regards,
Chris
 
Is there any guidance on how to transition the network configuration over for Ceph?
There is nothing special to it; you just keep the same IPs and you are good to go.
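
In other words, each node's Ceph IP simply moves to the new interface in /etc/network/interfaces, for example (the interface name and addresses here are made up):

Code:
    auto enp2s0
    iface enp2s0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        # same Ceph public/cluster IP as before, now on the 10GbE NIC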