cephfs mount inside LXC

kyriazis
Oct 28, 2019
Austin, TX
I was wondering if it's possible to mount a cephfs volume inside an LXC container.

I have an Ubuntu LXC container; I installed ceph-common and tried a mount, but it failed as shown below:

root@vis-lxc-02:~# mount.ceph vis-mgmt-1:6789:/ /ceph
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.0.21-3-pve/modules.dep.bin'
modprobe: FATAL: Module ceph not found in directory /lib/modules/5.0.21-3-pve
failed to load ceph kernel module (1)
mount error 110 = Connection timed out
root@vis-lxc-02:~# lsmod | grep ceph
ceph 385024 1
libceph 323584 2 ceph,rbd
fscache 368640 3 ceph,nfsv4,nfs
libcrc32c 16384 5 nf_conntrack,nf_nat,dm_persistent_data,btrfs,libceph
root@vis-lxc-02:~#


The ceph kernel module is loaded, because the host node already has ceph installed.
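Since containers share the host kernel, and lsmod above already shows ceph and libceph loaded, the modprobe error is likely cosmetic; the "error 110 = Connection timed out" points at the container not reaching the monitor. A hedged diagnostic sketch (the hostname and the standard v1 MON port are taken from the failed mount above; these are live-system commands, not something you can run outside the cluster):

```shell
# On the Proxmox host: ensure the ceph kernel modules are loaded.
# Containers share the host kernel, so this is where modules live.
modprobe ceph

# Inside the container: verify the monitor is reachable on the
# standard v1 MON port before blaming the missing module files.
nc -zv vis-mgmt-1 6789
```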

Thoughts?

Thanks!

george
 
Hi,

I have never done it myself, but I guess you have to modify the AppArmor profile to grant access to this module.
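For reference, Proxmox ships an NFS-allowing profile under /etc/apparmor.d/lxc/; a cephfs variant might look like the following. The profile name and the `mount fstype=ceph,` rule are my assumption, modeled on the shipped lxc-default-with-nfs profile, not something confirmed in this thread:

```
# /etc/apparmor.d/lxc/lxc-default-with-ceph  (assumed name, modeled on
# the shipped lxc-default-with-nfs profile)
profile lxc-container-default-with-ceph flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=ceph,
}
```

After reloading it with `apparmor_parser -r`, you would point the container at it with a raw lxc line in /etc/pve/lxc/&lt;vmid&gt;.conf, e.g. `lxc.apparmor.profile: lxc-container-default-with-ceph`.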
 
Revisiting this after a while. :)

The OS in the LXC container is different from the host machine's. When mount.ceph inside the container tries to load the module, it uses a /lib/modules path that is present in neither the container nor the host.

Any ideas?

Thank you!

george
 
We already have a large number of CTs with NFSv4 mounts that we want to convert to ceph. Converting them to VMs is a lot more trouble, and VMs use up more resources.
 
Did you find any solution?
Using a mount point (mp0, only settable via the command line) does the job, BUT it is then impossible to migrate the container to another host.
Both the kernel driver and the FUSE driver fail because of missing modules...

1. Did you find any solution to mount a cephfs inside an LXC container?
2. Why does Proxmox prevent migrating an LXC container whose mount point is present on all the hosts? Is this a bug or a feature request?
 
Did you find any solution to mount a cephfs inside an lxc ?
Never tried, but it should be doable as long as:
1. the container has a connection bridged to the ceph public subnet;
2. it's either unconfined (privileged) or you make an AppArmor profile that allows cephfs (you can use the NFS-allowed profile as a guide);
3. the guest OS supports at least the minimum ceph client version your cluster is set to. Stick to current releases.
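Assuming those three conditions are met, the mount inside a privileged container would look something like this (hostname and target directory are from the original post; the cephx user and keyring path are assumptions, and this obviously only runs against a live cluster):

```shell
# Inside the container: ceph-common provides mount.ceph, which this
# invokes. name= and secretfile= are the usual cephx auth options;
# /etc/ceph/admin.secret is an assumed keyring path.
mount -t ceph vis-mgmt-1:6789:/ /ceph \
  -o name=admin,secretfile=/etc/ceph/admin.secret
```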
 
You can use shared=1 when creating the bind mount point (it has to be shared storage, obviously) and you will then be able to migrate the container. E.g.:
Bash:
pct set 100 -mp0 /mnt/pve/mymp,mp=/mnt/mymp,shared=1
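For completeness, a command like the one above ends up as a line in the container's config (here /etc/pve/lxc/100.conf, matching the example VMID); the shared=1 flag is what tells Proxmox the path exists on every node, so migration is allowed:

```
mp0: /mnt/pve/mymp,mp=/mnt/mymp,shared=1
```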
 
