What is the best way to mount a CephFS inside an LXC?

francoisd

This seems to be a recurring problem, and I have not found a helpful solution so far.

There are hundreds of cases where you might need to mount a CephFS inside your LXCs.
In my case, I need to store PostgreSQL WAL archives for PITR. Performance is not important. I have a 3-node PVE cluster.

I tested:
  • Standard CephFS mount. Of course it does not work, since the CephFS kernel driver cannot be accessed from inside the LXC.
  • FUSE: doesn't work either: `fuse: device not found, try 'modprobe fuse' first`, despite enabling the 'FUSE' feature for the LXC in the Proxmox interface.
  • Mounting the CephFS on the PVE hosts and bind-mounting it into the LXC with a command like:
    pct set 105 -mp0 /mnt/pve/cephfs,mp=/mnt/cephfs
    The CephFS is accessible from the LXC, but snapshots no longer work, and neither do LXC migrations:
    migration aborted (duration 00:00:00): cannot migrate local bind mount point 'mp0'

So I'm back to square one.

How can we access a CephFS from an LXC while keeping LXC snapshots and migrations?
Or is it just not possible ?

Thanks,
 
I use `mp0: /mnt/cephfs,mp=/mnt/cephfs,shared=1` in the config file and it seems to work quite well for me.
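For reference, the same mount point can be added from the CLI instead of editing the config file by hand; a minimal sketch, assuming the OP's container ID 105 and the CephFS mounted at /mnt/pve/cephfs on the host:

Code:
pct set 105 -mp0 /mnt/pve/cephfs,mp=/mnt/cephfs,shared=1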
 
shared=1 might be the magic word. I'll try it when I'm back from holidays. Thanks.
Did `mp0: /mnt/cephfs,mp=/mnt/cephfs,shared=1` work for you? I tried to get two CephFS mounts working in an LXC a while back and eventually gave up. If it did work, could you explain the steps you used to get it working, please?
 
It definitely allows me to migrate the containers between the nodes. I use Proxmox Backup Server, so I don't really use snapshots. I just checked, and indeed snapshots are disabled for configurations with these mount points. I guess you can use backups as a workaround. Note that there is a 'backup=1' parameter for the mount point which includes the mount point in the backup (by default it is not included).
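For example, extending the mount point line quoted above with the backup flag (a sketch; same paths as before):

Code:
mp0: /mnt/cephfs,mp=/mnt/cephfs,shared=1,backup=1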
 
I know it is a lot to ask, but what are the steps and configs you used to get this working? I have two pools, one SSD and one HDD. I need to have both of these pools mounted in each of my containers.
 
So, you mounted both CephFS filesystems on each of your Proxmox nodes?
After that you just modify the container config and add the mp0 and mp1 lines, each with the 'shared=1' option.
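For illustration, a minimal sketch of what the host-side entries in /etc/pve/storage.cfg might look like, assuming the two filesystems are named 'cephfs' and 'cephfs_hdd' (hypothetical storage IDs; PVE mounts each CephFS storage under /mnt/pve/<storage-id> on every node when the storage is defined cluster-wide):

Code:
cephfs: cephfs
    path /mnt/pve/cephfs
    content backup,iso,vztmpl
    fs-name cephfs

cephfs: cephfs_hdd
    path /mnt/pve/cephfs_hdd
    content backup,iso,vztmpl
    fs-name cephfs_hdd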
 
Yes, I have 4 nodes; each has the SSD at /mnt/pve/cephfs/export1/ and the HDD at /mnt/pve/cephfs_hdd/export2/.
So if I wanted to mount these in each container, I would need to make the following entries in the container config files on the node? For instance, /etc/pve/nodes/pve1/lxc/100.conf on pve1?
Code:
mp0: /mnt/cephfs,mp=/mnt/pve/cephfs/export1/,shared=1
mp0: /mnt/cephfs,mp=/mnt/pve/cephfs_hdd/export2/,shared=1
 
I believe it's the host path first, then the mp parameter specifies where it should be mounted within the container, so it should be something like:

Code:
mp0: /mnt/pve/cephfs/export1,mp=/mnt/export1,shared=1
mp1: /mnt/pve/cephfs_hdd/export2,mp=/mnt/export2,shared=1
 
And those entries go in /etc/pve/nodes/pve1/lxc/100.conf if container 100 is the LXC I want the mount points in? Thanks for your answer.
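For what it's worth, the same entries can also be added with pct, which writes to that per-node config file; a sketch assuming container ID 100 and the paths above:

Code:
pct set 100 -mp0 /mnt/pve/cephfs/export1,mp=/mnt/export1,shared=1
pct set 100 -mp1 /mnt/pve/cephfs_hdd/export2,mp=/mnt/export2,shared=1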
 
It turned out I had made a mistake (a non-existent mount point on the PVE host), but after fixing that it worked perfectly. I really appreciate the help.
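In case it helps others, a quick way to verify the bind mounts from inside the container (a sketch assuming container ID 100 and the mount targets above):

Code:
pct exec 100 -- df -h /mnt/export1 /mnt/export2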
 
After adding the mp line to the conf file of my LXC (turnkey-fileshare), the container fails to start. Removing the line lets it start fine.
 

I had tried it both ways. It doesn't matter.


The conf file:
Code:
arch: amd64
cores: 2
features: nesting=1
hostname: Nostromos
memory: 512
mp0: /mnt/Sulaco,mp=/mnt/pve/NostromosNFS,shared=1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.50.1,hwaddr=BC:24:11:26:9D:83,ip=192.168.75.20/16,type=veth
onboot: 1
ostype: debian
rootfs: Data0:vm-200000-disk-0,size=8G
swap: 512

I started the container with --debug.
Code:
root@Immortal:/mnt/pve/NostromosNFS# pct start 200000 --debug
run_buffer: 322 Script exited with status 2
lxc_init: 844 Failed to run lxc.hook.pre-start for container "200000"
__lxc_start: 2027 Failed to initialize container "200000"
DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 200000 lxc pre-start produced output: /dev/rbd0
DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 200000 lxc pre-start produced output: directory '/mnt/Sulaco' does not exist
ERROR conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 2
ERROR start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "200000"
ERROR start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "200000"
INFO conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "200000", config section "lxc"
startup for container '200000' failed
 
/etc/pve/storage.cfg
Code:
dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

rbd: Data0
    content images,rootdir
    krbd 0
    pool Data0

cephfs: bellicoseNAS
    path /mnt/pve/bellicoseNAS
    content vztmpl,iso,backup
    fs-name bellicoseNAS

cephfs: NostromosNFS
    path /mnt/pve/NostromosNFS
    content iso,vztmpl,backup
    fs-name NostromosNFS
 
root@Nostromos:/mnt/Sulaco# — so it does exist. Well, I had created it in the container image; now that I have created it on the host, voila...
Too little sleep last night.
 
Of course it makes a difference because the folder will certainly not be there on the node.

See: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pct_mount_points
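In other words, the bind mount source must exist on the host before the container can start; a minimal sketch of the fix, assuming the conf file and debug output above:

Code:
# On the PVE host, not inside the container
mkdir -p /mnt/Sulaco
pct start 200000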
It didn't matter because the container failed to start either way. I was pretty tired last night and failed to start it with debug to see what was failing. Now, with fresh eyes and some coffee: the core issue was the missing mount point on the host (edit: connecting to the CephFS mount point on the host). I'm a long-time Proxmox user doing basic VMs for various things, but this is my first time starting in with Ceph, CephFS, containers, etc.
Thank you for taking the time to read and post. You can only see what I post, not what is in my head, so I do appreciate it. Most likely, where I left off last night, I would have run into the syntax problem after I got the conf file correct. These forums have been awesome for helping me ever since I put Proxmox on my first R610.
On to the next problem :D
 
