What is the difference between these Ceph notions?

fxandrei

Renowned Member
Jan 10, 2013
So I'm trying to configure a Proxmox cluster with Ceph.

So from what I can see, I can create a pool directly and use it (as in add it to the cluster storage as RBD).
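For example, a minimal sketch (the pool name vm_pool and the 128 placement groups are just assumptions, adjust for your cluster):
Code:
# create a pool and initialize it for RBD use (name and pg count are examples)
ceph osd pool create vm_pool 128
rbd pool init vm_pool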

In order to create a CephFS storage in Proxmox I need to create 2 separate Ceph pools and then create the CephFS, specifying the pool for data and for metadata, like this:
Code:
ceph osd pool create cephfs_data <pg_num>
ceph osd pool create cephfs_metadata <pg_num>
ceph fs new cephfs cephfs_metadata cephfs_data
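One thing to note: a CephFS also needs at least one MDS daemon before clients can use it. On Proxmox the pveceph helper can take care of that; a rough sketch, as I understand the pveceph options:
Code:
# create an MDS on this node (CephFS needs at least one)
pveceph mds create
# create the pools and the filesystem, and add it as Proxmox storage
pveceph fs create --name cephfs --add-storage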

Now I can add each of the 3 to the cluster storage: the first 2 pools as RBD storage types, and the CephFS as, well, CephFS.
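For reference, the resulting entries in /etc/pve/storage.cfg end up looking roughly like this (the storage IDs ceph-vm and cephfs, and the pool name, are just example names):
Code:
rbd: ceph-vm
        pool vm_pool
        content images,rootdir
        krbd 0

cephfs: cephfs
        path /mnt/pve/cephfs
        content backup,iso,vztmpl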

My questions are these:
1. Which pool should I use for VMs? The "data" pool, right? What do I do with the metadata pool? What happens to it?
2. How can I copy the files from the pool used for the VMs? Let's say I want to copy them to another pool. Could I mount them somehow?
3. How should I use the CephFS storage? From what I can see I can only use it for backups. What is its relation to the other 2 pools (data and metadata)?
 
So after some investigation, and some help from a really helpful person named IcePic over at the Ceph IRC channel, I think I have some answers.

So Ceph works with block devices. Everything goes to these block devices.
CephFS only exists to have the contents of these block devices "exposed" as a POSIX file system.
In order to do that it needs an additional pool that functions as a store for the extra information needed to comply with the POSIX standards.
So if you are using CephFS, everything actually goes to the "data" pool, and all the additional info needed to have directories etc. goes to the metadata pool.
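You can actually see that split on the command line; for example (pool names taken from the commands above):
Code:
# objects holding the file contents
rados -p cephfs_data ls | head
# objects holding directory/inode information
rados -p cephfs_metadata ls | head
# per-pool usage overview
ceph df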

From what I understand it's not really recommended to use CephFS for VMs because of the poor performance compared to the block device alternative (because of the additional compute tasks needed to comply with POSIX).

So I'll try to answer my own questions:
1. I should not use the pools created for CephFS. I should create a separate pool and use that directly for VMs. CephFS is meant for applications that need to access files and directories, or for shares.
2. The files, or objects, that reside on the block device can be retrieved to a "normal" file system with the rbd export command (see the example after this list).
3. CephFS should not be used for VMs. Block devices are the way to go. It can be used for backups, shares, or anything that needs POSIX access.
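For point 2, a hedged example (the pool vm_pool and the image vm-100-disk-0 are just placeholder names):
Code:
# list the images in the pool
rbd ls vm_pool
# export one image to a raw file on a normal filesystem
rbd export vm_pool/vm-100-disk-0 /tmp/vm-100-disk-0.raw
# and import it into another pool if needed
rbd import /tmp/vm-100-disk-0.raw backup_pool/vm-100-disk-0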
 
Use the CephFS to store your ISOs, backups, etc.

Use the RBD pools for your VMs, e.g. the drives on them. If you have more than one pool you can use the move disk function in Proxmox to move a disk between pools. In fact you can use that to move disks on local storage to the Ceph storage. You can set the content type in the storage config to say what type of things can live on which storage.
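The CLI equivalents, roughly (the VM ID, disk and storage names are examples):
Code:
# move a VM disk from its current storage to an RBD-backed storage
qm move_disk 100 scsi0 ceph-vm
# restrict which content types a storage may hold
pvesm set cephfs --content backup,iso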

BTW, Ceph stores everything as objects; RBD just exposes it as a block device, and CephFS as files.
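You can peek at that object layer directly, e.g. (using the example names from above):
Code:
# show how an RBD image is split into rados objects (object size, name prefix)
rbd info vm_pool/vm-100-disk-0
# the underlying objects themselves
rados -p vm_pool ls | head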
 
So Ceph works with block devices. Everything goes to these block devices.
CephFS only exists to have the contents of these block devices "exposed" as a POSIX file system.
Ceph works with objects (everything is an object). A pool is a pool of objects.
On top of a pool, you can create RBD devices (block) or a CephFS (filesystem).

CephFS has 2 pools: 1 metadata pool, where the filesystem structure is stored, and 1 data pool, where the data of the files is stored.
(But it's all objects; it is not on top of an RBD block device.)
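That layout is easy to check (the output shape may vary a bit between Ceph versions):
Code:
ceph fs ls
# name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]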


1. I should not use the pools created for CephFS. I should create a separate pool and use that directly for VMs. CephFS is meant for applications that need to access files and directories, or for shares.
You could create RBD images inside these pools, but don't do it ;)

2. The files, or objects, that reside on the block device can be retrieved to a "normal" file system with the rbd export command
No (it's not an RBD). It's not currently possible to export a CephFS pool.
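If the goal is just to copy individual files out of a CephFS, you can mount it and use normal tools; a rough sketch with the kernel client (the monitor address and secret file are placeholders):
Code:
# mount the CephFS with the kernel client
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# then copy with ordinary tools
cp -a /mnt/cephfs/some_dir /some/other/place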

3. CephFS should not be used for VMs. Block devices are the way to go. It can be used for backups, shares, or anything that needs POSIX access.

You could do it, but there is no benefit (and the MDS is active/standby, so you could have a few seconds of timeout in case of a failover).
Better to use RBD directly (no MDS, no problems).
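You can check the MDS state on your cluster with:
Code:
ceph mds stat
ceph fs status cephfs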
 
