CephFS and LXC .. a voyage of discovery

leex12

Member
Mar 11, 2022
Over the past few days (feels like years) I have been trying to get my containers working correctly with CephFS. Google really has not been my friend during this process because there is a lot of old stuff out there and it doesn't seem to be a popular area (?)

For the past year or so I have been using a Docker plugin (n0r1skcom/docker-volume-cephfs) to mount CephFS within a Docker container. Works OK but seems slower than it should be.
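For anyone curious, the plugin route looks roughly like this. This is only a sketch: the volume name and driver option keys below are assumptions, not taken from the plugin's docs, so check its README for the exact `-o` options it accepts:

```shell
# Install the third-party CephFS volume plugin (real plugin name from above)
docker plugin install n0r1skcom/docker-volume-cephfs

# Create a volume backed by it; driver option names here are hypothetical
docker volume create -d n0r1skcom/docker-volume-cephfs my-ceph-vol

# Use it like any other volume ("my-ceph-vol" and the mount path are examples)
docker run --rm -v my-ceph-vol:/data alpine ls /data
```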

So I thought I would try getting them mounted within the LXC rather than in Docker to see if that performed better. NOTE: as I want to use high availability, a bind-mount mapping from the host into the container isn't an option.

Wasted a whole bunch of time looking around to see if the kernel driver would work for this. It doesn't (the container doesn't have the privileges for a kernel mount), so onwards with ceph-fuse!

Pulled down the latest TurnKey Core template, ran update/upgrade, and then installed the ceph-common and ceph-fuse packages. Fiddling around, I could get it to mount one of my filesystems but not the other; the client_fs= attribute just was not recognized. Pulled down an Ubuntu template, ran update/upgrade, installed common and fuse, and it worked straight away.
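For reference, the steps inside the container were roughly the following. The filesystem names and mount points are examples, and you need the cluster's ceph.conf plus a client keyring copied into /etc/ceph first. Note that `--client_fs` only exists in newer Ceph releases (older ceph-fuse used `--client_mds_namespace` instead), which is exactly what bit me on the old TurnKey packages:

```shell
# Inside the LXC: install the FUSE client
apt-get update && apt-get install -y ceph-common ceph-fuse

# Mount each filesystem by name ("myfs1"/"myfs2" and the paths are examples)
mkdir -p /mnt/myfs1 /mnt/myfs2
ceph-fuse -n client.admin --client_fs myfs1 /mnt/myfs1
ceph-fuse -n client.admin --client_fs myfs2 /mnt/myfs2
```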

Long story short, the Ceph version picked up by Debian TurnKey Core is about two versions behind what Ubuntu pulls down automatically. So I have amended the sources on TurnKey Core to pick up the Ceph repository and updated, and now both are on Quincy 17.2.
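The sources change was basically the standard upstream-repo setup from the Ceph docs. I'm assuming a bullseye-based TurnKey here; adjust the codename to match your template:

```shell
# Trust the Ceph release key
wget -qO- https://download.ceph.com/keys/release.asc \
  | gpg --dearmor -o /usr/share/keyrings/ceph.gpg

# Point apt at the Quincy repo for Debian bullseye (assumed codename)
echo "deb [signed-by=/usr/share/keyrings/ceph.gpg] https://download.ceph.com/debian-quincy/ bullseye main" \
  > /etc/apt/sources.list.d/ceph.list

# Pull the newer client packages
apt-get update && apt-get install -y ceph-common ceph-fuse
```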

Is doing this really stupid?
 
