Help understand relationship between ceph pools

troycarpenter

I'm trying to understand how the Ceph pools all work together. This cluster was set up before Proxmox 5.3: three nodes with four 2 TB drives in each node (roughly 22 TB of total disk space after overhead). I had a single pool with 512 PGs in a 3/2 configuration that was used for an RBD disk image store. Unfortunately, I didn't pay close enough attention to the exact size of the storage that created, but it currently sits at 6.5 TB after I created a CephFS storage.

After the 5.3 upgrade, I added a CephFS storage using the default 128 PGs (also 3/2 by default). The corresponding storage size is 5.36 TB.

Are these pools completely separate, or are they actually sharing the total OSD space available? If they are separate, how do I determine the size, specifically for the CephFS pool?

I ask because I've historically had separate backup storage configurations in my cluster: one for daily backups that keeps 6 days of backups, and one for weekly backups that keeps 4 backups. I also have another backup storage for archives of VMs that are no longer in use but may need to be brought back in the future. To keep that backup scheme, I would need to create two more CephFS pools for the weekly and archival backups. I don't want to go creating multiple CephFS entries and pools if that's going to be detrimental.

I'm not sure, but what I am asking about may be listed as an experimental feature:

http://docs.ceph.com/docs/master/cephfs/experimental-features/
 
Are these pools completely separate, or are they actually sharing the total OSD space available? If they are separate, how do I determine the size, specifically for the CephFS pool?
Depending on your CRUSH map, they may or may not share the same OSDs. That said, on a default setup, they do.
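On a default setup you can see this with ceph df: the GLOBAL section shows the shared raw capacity, and MAX AVAIL in the POOLS section is calculated for each pool from that same raw space divided by its replication, so two 3/2 pools on the same OSDs report roughly the same maximum and both shrink as either one fills. A quick sketch (exact columns depend on your Ceph version):

Code:
# shared raw capacity (GLOBAL) vs. per-pool usage and MAX AVAIL (POOLS)
ceph df
# which CRUSH rule a pool uses (on a default setup, all pools use the same one)
ceph osd pool get <poolname> crush_rule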

I don't want to go creating multiple cephFS entries and pools if that's going to be detrimental.
You shouldn't, and you don't need to: the CephFS client allows you to mount subdirectories. You need to edit the storage.cfg file.
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_cephfs
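As a rough sketch of that idea, applied to the daily/weekly/archive scheme from the first post: create subdirectories on the one CephFS, then add one storage entry per subdirectory in /etc/pve/storage.cfg. The storage names and paths below are only examples, the subdir option is the one described in the linked chapter (check that your PVE version supports it), and maxfiles is optional and just mirrors the retention counts mentioned above.

Code:
cephfs: cephfs-daily
        subdir /backup/daily
        path /mnt/pve/cephfs-daily
        content backup
        maxfiles 6

cephfs: cephfs-weekly
        subdir /backup/weekly
        path /mnt/pve/cephfs-weekly
        content backup
        maxfiles 4

cephfs: cephfs-archive
        subdir /backup/archive
        path /mnt/pve/cephfs-archive
        content backup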
 
Hello,

Depending on your CRUSH map, they may or may not share the same OSDs. That said, on a default setup, they do.


You shouldn't, and you don't need to: the CephFS client allows you to mount subdirectories. You need to edit the storage.cfg file.
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#storage_cephfs

So I'd like to ask: how is it then possible to have two different CephFS locations, one on an SSD pool and one on an HDD pool?

Background:
We are currently running a hyperconverged production cluster on PVE 4.4 with Ceph Hammer (I know ...).
I'm going to set up one new and two "interim" servers with Proxmox 5.3/Ceph and then migrate/rsync the VMs and data.
Once that is stable, I'll reinstall the old nodes with Proxmox 5.3.

Since we have an ssdpool and a satapool on 4.4, our Samba VM has two RBD disks attached from the SSD pool
(live data; filestore with journal on NVMe) and one RBD disk from the SATA pool (archive, HDDs).

So I thought I could create several CephFS filesystems on those pools and serve them through Samba with vfs_ceph.
With only one CephFS, is there any way to do this (share live data from the SSD pool and archive data from the HDD pool)?

Sure, I could just stick with RBD disks attached to my Samba VM, but then I'd have no option to later move to Samba/CTDB ...

Any suggestions on this?

Thank you very much
Falko
 
So I'd like to ask: how is it then possible to have two different CephFS locations, one on an SSD pool and one on an HDD pool?
For pools in general, see https://pve.proxmox.com/pve-docs/chapter-pveceph.html#_ceph_crush_amp_device_classes
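In short, that chapter boils down to one replicated rule per device class and then assigning each pool to its rule; the pool names below are just the ones from your 4.4 setup, adjust as needed:

Code:
# one replicated rule per device class: <name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd crush rule create-replicated replicated_hdd default host hdd
# point each pool (existing or newly created) at the matching rule
ceph osd pool set ssdpool crush_rule replicated_ssd
ceph osd pool set satapool crush_rule replicated_hdd

Data then rebalances onto the matching device class, so a CephFS data pool can sit on one rule and an RBD pool on the other.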

So I thought I could create several CephFS filesystems on those pools and serve them through Samba with vfs_ceph.
This is still experimental and not recommended or supported.
http://docs.ceph.com/docs/luminous/...s/#multiple-filesystems-within-a-ceph-cluster
 
Thank you very much. So for now, I'll try CephFS on the SSD pool for live data, and RBD (or maybe NFS) on the HDD pool for the archive,
since redundancy for the latter is perhaps not as important.
 
Hi Alex,

this seems like pretty good news, thank you. Yes, I'm planning SSD and HDD pools using device classes,
so I have to find out how to do that and whether it works on Proxmox, too.

Regarding the experimental feature on snapshots mentioned above: it is not clear to me
whether that is relevant when using different pools, or whether it has more to do with the MDS servers.

I certainly don't want to put something into production use that is not really supported.

Best regards
 
If I read you correctly, you have two pools; there is nothing stopping you from setting up CephFS on each pool. The admonition against multiple CephFS file systems is per pool.
Can you please point me to the information for that? The documentation doesn't state a per-pool basis, and the commands I find in the link below set enable_multiple to true globally.
http://docs.ceph.com/docs/luminous/cephfs/administration/
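For reference, these are the commands I mean from that page; the enable_multiple flag is set cluster-wide, not per pool, which is what confuses me:

Code:
# cluster-wide flag, still marked experimental in luminous
ceph fs flag set enable_multiple true --yes-i-really-mean-it
# a second filesystem then needs its own metadata and data pools
ceph fs new <fs_name> <metadata_pool> <data_pool>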
 
