CephFS storage limitation?

Whatever

Very excited about the Ceph integration in PVE. However, there is one point I would be happy to clarify (I found nothing with the forum search so far): why is CephFS storage in PVE limited to backups, ISO images and templates only? I know I can mount a folder located under the CephFS mount point, but I am very interested in what is behind this limitation.

Thanks in advance
 
Why would you want to store VM images there? RBD has all the features you get with files, but with one layer less (e.g. no filesystem underneath the VM disk).
 
1. File-based storage is much easier to manage in small environments.
2. RBD is almost useless when the VM disk is a linked clone. If you define a linked clone on RBD, the only backup solution is the built-in backup, which creates a full backup, and there is no way to split the image back. With file-based storage there is at least the rsync way to back up the disk differences (not the best way, I know, but at least something; rough example below).
3. Recovery from backups of any kind (again, especially if linked clones are used) is straightforward.
4. From my understanding there is no difference between a VM disk image, an ISO and a template. So if there are no special constraints, why is this limitation in place?
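
A rough example of the rsync fallback mentioned in point 2 (paths and the backup host are hypothetical, assuming the VM disk is a file such as qcow2 on the CephFS mount):

rsync -av --inplace /mnt/cephfs/images/101/vm-101-disk-0.qcow2 backuphost:/backup/101/

Over the network, rsync's delta algorithm only transfers the changed blocks of the file, and --inplace avoids rewriting the whole destination file, so this is at least a crude form of differential backup.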
 
1. File-based storage is much easier to manage in small environments.
What point do you want to make here? The Proxmox tools take care of most management tasks. But I do understand that one might feel more comfortable with a filesystem than with managing a block storage.

2. RBD is almost useless when the VM disk is a linked clone. If you define a linked clone on RBD, the only backup solution is the built-in backup, which creates a full backup, and there is no way to split the image back. With file-based storage there is at least the rsync way to back up the disk differences (not the best way, I know, but at least something).
Kind of the same alternative applies here too: you can always export the linked image (snapshots as well) through Ceph (rbd export ...) and store it away. But as far as management goes, getting a consistent VM image at the click of a button that can be restored on any other supported storage is a win, isn't it?
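
For what it's worth, a minimal sketch of that export route (pool, image and path names are made up here, adjust them to your setup):

rbd snap create vm-pool/vm-101-disk-0@backup1
rbd export vm-pool/vm-101-disk-0@backup1 /mnt/backup/vm-101-disk-0.img

rbd export reads the clone's data including what it shares with the base image, so the exported file is a standalone full image that can be brought back with rbd import on any pool.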

3. Recovery from backups of any kind (again, especially if linked clones are used) is straightforward.
Well, see the argument above; it applies to recovery as well.

4. From my understanding there is no difference between a VM disk image, an ISO and a template. So if there are no special constraints, why is this limitation in place?
Subtle differences: VM disks are read/written randomly and perform better without the filesystem layer. Snapshots are better handled on RBD than on CephFS (for disk images). Containers either have a disk image, in which case the previous statement applies, or they are a directory. As for the latter, CephFS (ATM) doesn't perform well enough with lots of small random reads/writes from multiple clients. Also, its snapshots are not yet as production-grade as we want them to be. ISOs/templates are written once (one big file) and seldom read by a client.
 
1. File-based storage is much easier to manage in small environments.
2. RBD is almost useless when the VM disk is a linked clone. If you define a linked clone on RBD, the only backup solution is the built-in backup, which creates a full backup, and there is no way to split the image back. With file-based storage there is at least the rsync way to back up the disk differences (not the best way, I know, but at least something).
3. Recovery from backups of any kind (again, especially if linked clones are used) is straightforward.
4. From my understanding there is no difference between a VM disk image, an ISO and a template. So if there are no special constraints, why is this limitation in place?
1. I am very doubtful that Ceph storage is suitable for you. Ceph is not meant for small environments; just check the Ceph minimum requirements and you will see.

2. What about rbd export or rbd export-diff?

3. What about rbd import? (Rough sketch below.)

4. For small I/O, CephFS really drops the ball, and this is probably why PVE does not use CephFS for VM disks.
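
To illustrate points 2 and 3, a rough sketch of a diff-based backup and restore cycle (pool, image and snapshot names are only examples):

# initial full export plus a reference snapshot
rbd snap create rbd/vm-101-disk-0@base
rbd export rbd/vm-101-disk-0@base /backup/vm-101-disk-0.full

# later: export only the changes made since the reference snapshot
rbd snap create rbd/vm-101-disk-0@day1
rbd export-diff --from-snap base rbd/vm-101-disk-0@day1 /backup/vm-101-disk-0.day1.diff

# restore: import the full image, recreate the reference snapshot, replay the diff
rbd import /backup/vm-101-disk-0.full rbd/vm-101-disk-0-restored
rbd snap create rbd/vm-101-disk-0-restored@base
rbd import-diff /backup/vm-101-disk-0.day1.diff rbd/vm-101-disk-0-restored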
 
If the OP simply wants to try out running VMs on CephFS in a test environment, I guess you could try manually mounting CephFS outside of Proxmox's management on your hypervisors and then have Proxmox treat the mount point as a local directory.
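
Untested sketch of what I mean (the monitor address, secret file and storage name are placeholders):

# mount CephFS by hand on every hypervisor
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# register the mount point as an ordinary shared directory storage
pvesm add dir cephfs-dir --path /mnt/cephfs --content images,rootdir --shared 1

Whether that performs acceptably for VM disks is a separate question, of course.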

I don't have enough hands-on experience with CephFS yet, but when I first tried adding CephFS as a storage type in Proxmox and could not use it for disk images, I assumed it was some kind of UI problem, since NFS can actually be used for such a workload.

Maybe it'd be a good idea to add a note on the https://pve.proxmox.com/wiki/Storage page explaining why Proxmox doesn't offer the option to use CephFS for running VMs.
 
But I do understand that one might feel more comfortable with a filesystem than with managing a block storage.

Yes, you got my point.

Kind of the same alternative applies here too: you can always export the linked image (snapshots as well) through Ceph (rbd export ...) and store it away. But as far as management goes, getting a consistent VM image at the click of a button that can be restored on any other supported storage is a win, isn't it?

Well, to be honest, it's not obvious to me why both images (base and diff) couldn't be put into the backup archive without merging them together. And as you said, the "VM image can be restored on any other supported storage". But this question is out of scope for this thread.

Subtle differences: VM disks are read/written randomly and perform better without the filesystem layer. Snapshots are better handled on RBD than on CephFS (for disk images). Containers either have a disk image, in which case the previous statement applies, or they are a directory. As for the latter, CephFS (ATM) doesn't perform well enough with lots of small random reads/writes from multiple clients. Also, its snapshots are not yet as production-grade as we want them to be. ISOs/templates are written once (one big file) and seldom read by a client.

Ok. I agree. This makes sense.

1. I am very doubtful that Ceph storage is suitable for you. Ceph is not meant for small environments; just check the Ceph minimum requirements and you will see.

From my perspective, a 3-node cluster (which fits the Ceph requirements) is a small environment. IMHO.

2. What about rbd export or rbd export-diff?
3. What about rbd import?

Thanks. Will take a look.

4. For small I/O, CephFS really drops the ball, and this is probably why PVE does not use CephFS for VM disks.

It mainly depends on the workload in the VMs, doesn't it?
 
For your reference,

.....
Based on those considerations and operational experience, Mirantis recommends no less than nine-node Ceph clusters for production environments. Recommendation for test, development, or PoC environments is a minimum of five nodes. See details in Ceph cluster sizes.

Also, Ceph recommends separating monitor nodes from OSD nodes, with a minimum of 3 monitor nodes.

3 nodes might get your Ceph cluster up and running, but just be very careful.
 
For your reference,

.....
Based on those considerations and operational experience, Mirantis recommends no less than nine-node Ceph clusters for production environments. Recommendation for test, development, or PoC environments is a minimum of five nodes. See details in Ceph cluster sizes.

Also, Ceph recommends separating monitor nodes from OSD nodes, with a minimum of 3 monitor nodes.

3 nodes might get your Ceph cluster up and running, but just be very careful.

Are we talking about "minimal" or "recommended" requirements? In my environment I'm fully satisfied with the Ceph performance (3/2/1024pg, Intel SSD 46xx series disks and Mellanox InfiniBand). Anyway, thanks for sharing your thoughts!
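
For reference, 3/2/1024pg here means roughly the following pool settings (the pool name is just an example):

ceph osd pool create vm-pool 1024 1024
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2

That is 3 replicas, I/O still allowed with 2 replicas available, and 1024 placement groups.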
 
