CephFS storage limitation?

Discussion in 'Proxmox VE: Installation and configuration' started by Whatever, Feb 13, 2019.

  1. Whatever

    Whatever Member

    Joined:
    Nov 19, 2012
    Messages:
    153
    Likes Received:
    4
    Very excited about the Ceph integration in PVE. However, there is one point I would be happy to clarify (I found nothing with the forum search so far): why is CephFS storage in PVE limited to backups, images, and templates only? I know I can mount a folder located under the CephFS mount point, but I'm very curious what is behind this limitation.

    Thanks in advance
     
  2. dcsapak

    dcsapak Proxmox Staff Member
    Staff Member

    Joined:
    Feb 1, 2016
    Messages:
    3,361
    Likes Received:
    304
    Why would you want to store VM images there? RBD has all the features you get with files, but with one layer less (e.g. no filesystem under the VM disk).
     
  3. Whatever

    Whatever Member

    Joined:
    Nov 19, 2012
    Messages:
    153
    Likes Received:
    4
    1. File-based storage is much easier to manage in small environments.
    2. RBD is almost useless when the VM disk is a linked clone. If you define a linked clone on RBD, the only backup solution is the built-in backup, which creates a full backup, and there is no way to split the image back into base and diff. With file-based storage there is at least the rsync way of backing up the disk difference (not the best way, I know, but at least something) - see the sketch after this list.
    3. Recovery from backups of any kind (again, especially if linked clones are used) is straightforward.
    4. From my understanding there is no difference between a VM disk image, an ISO, and a template. So if there are no special constraints, why is this limitation in place?
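
    (Just to illustrate what I mean with the file-based approach - a rough sketch, with hypothetical paths and VMIDs:)

    Code:
    # On a directory-style storage (e.g. a CephFS mount), a linked clone is a small
    # qcow2 overlay referencing the base image, so the diff can be handled on its own:
    qemu-img create -f qcow2 -F qcow2 \
        -b /mnt/pve/cephfs/images/9000/base-9000-disk-0.qcow2 \
        /mnt/pve/cephfs/images/101/vm-101-disk-0.qcow2

    # Back up just the (small) overlay file with rsync:
    rsync -a /mnt/pve/cephfs/images/101/vm-101-disk-0.qcow2 backup-host:/backups/101/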
     
    #3 Whatever, Feb 13, 2019
    Last edited: Feb 13, 2019
  4. Alwin

    Alwin Proxmox Staff Member
    Staff Member

    Joined:
    Aug 1, 2017
    Messages:
    2,027
    Likes Received:
    175
    What point do you want to make here? The Proxmox tools take care of most of the management. But I do understand that one might feel more comfortable with a filesystem than with managing block storage.

    Kind of the same alternative applies here too: you can always export the linked image (also snapshots) through Ceph (rbd export ...) and store it away. But as far as management goes, the click of a button to get a consistent VM image that can be restored on any other supported storage is a win, isn't it?
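
    (A minimal sketch of such an export - pool, image, and snapshot names are just examples:)

    Code:
    # Export a full VM disk image to a file:
    rbd export rbd/vm-101-disk-0 /backup/vm-101-disk-0.raw

    # Or export the state at a specific snapshot:
    rbd snap create rbd/vm-101-disk-0@backup1
    rbd export rbd/vm-101-disk-0@backup1 /backup/vm-101-disk-0-backup1.raw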

    Well, see the argument above; it goes for recovery as well.

    Subtle differences: VM disks are read/written randomly and perform better without the filesystem layer. Snapshots are easier to handle on RBD than on CephFS (for disk images). Containers would either use a disk image, in which case the previous statement applies, or they would be a directory. As for the latter, CephFS (ATM) doesn't perform well enough with lots of small random reads/writes from multiple clients. Also, its snapshots are not yet as production-grade as we want them to be. ISOs/templates are written once (one big file) and seldom read by a client.
     
  5. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
    1. I am very doubtful that Ceph storage is suitable for you. Ceph is not meant for small environments; just check the Ceph minimum requirements and you will see.

    2. What about rbd export or rbd export-diff? (See the sketch after this list.)

    3. What about rbd import?

    4. For small I/O, CephFS really drops the ball, and this is probably why PVE does not use CephFS for VM disks.
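
    (A rough sketch of an incremental backup/restore round-trip with these commands - pool, image, and snapshot names are made up:)

    Code:
    # Take a snapshot and export everything up to it:
    rbd snap create rbd/vm-101-disk-0@snap1
    rbd export-diff rbd/vm-101-disk-0@snap1 /backup/vm-101.snap1.diff

    # Later, take a second snapshot and export only the changes since snap1:
    rbd snap create rbd/vm-101-disk-0@snap2
    rbd export-diff --from-snap snap1 rbd/vm-101-disk-0@snap2 /backup/vm-101.snap1-snap2.diff

    # To restore, create an empty image (same size as the original) and replay the diffs in order:
    rbd create rbd/vm-101-restored --size 32G
    rbd import-diff /backup/vm-101.snap1.diff rbd/vm-101-restored
    rbd import-diff /backup/vm-101.snap1-snap2.diff rbd/vm-101-restored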
     
  6. virtRoo

    virtRoo New Member

    Joined:
    Jan 27, 2019
    Messages:
    22
    Likes Received:
    3
    If the OP simply wants to try out running VMs on CephFS in a test environment, I guess they could try manually mounting CephFS outside of Proxmox's management on the hypervisors and then make Proxmox treat the mount point as a local directory storage.
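
    (An untested sketch of that idea - the monitor address, secret file path, and storage name are placeholders, and I'm assuming the directory plugin's is_mountpoint option works as documented:)

    Code:
    # Mount CephFS with the kernel client, outside of Proxmox's storage management:
    mkdir -p /mnt/cephfs-vmstore
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs-vmstore \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # Add the mount point as a plain directory storage that allows disk images:
    pvesm add dir cephfs-vmstore --path /mnt/cephfs-vmstore \
        --content images,rootdir --is_mountpoint yes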

    I haven't got enough hands-on experience with CephFS yet, but when I first tried adding CephFS as a storage type in Proxmox and could not make it usable for disk images, I assumed it was some kind of UI problem, since NFS can actually be used for such a workload.

    Maybe it'd be a good idea to add a note on the https://pve.proxmox.com/wiki/Storage page indicating why Proxmox doesn't offer the option to use CephFS for running VMs.
     
  7. Whatever

    Whatever Member

    Joined:
    Nov 19, 2012
    Messages:
    153
    Likes Received:
    4
    Yes, you got my point.

    Well, to be honest, it's not obvious to me why not put both images (base and diff) into the backup archive without merging them together. And as you said: "VM image can be restored on any other supported storage". But this question is out of scope for this thread.

    Ok. I agree. This makes sense.

    From my perspective, a 3-node cluster (which fits the Ceph requirements) is a small environment, IMHO.

    Thanks. Will take a look.

    It mainly depends on the workload of the VMs, doesn't it?
     
  8. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
    For your reference,

    .....
    Based on those considerations and operational experience, Mirantis recommends no less than nine-node Ceph clusters for production environments. Recommendation for test, development, or PoC environments is a minimum of five nodes. See details in Ceph cluster sizes.

    And also, Ceph recommends separating monitor nodes from OSD nodes, with a minimum of 3 monitor nodes.

    3 nodes might get your Ceph cluster up and running, but just be very careful.
     
  9. Whatever

    Whatever Member

    Joined:
    Nov 19, 2012
    Messages:
    153
    Likes Received:
    4
    Are we talking about "minimal" or "recommended" requirements? In my environment I'm fully satisfied with Ceph performance (3/2, 1024 PGs, Intel SSD 46xx series disks, and Mellanox InfiniBand). Anyway, thanks for sharing your thoughts!
     