Cannot mount xfs raw partition inside (privileged) lxc container

Kyle

Greetings,

I'd appreciate feedback on whether my objective is possible, and whether I'm doing something wrong.

NB: I have done a fair amount of reading/research to find a solution myself, but so far nothing has worked; I've tried several approaches without success.

As per the topic/subject: my objective is to mount a raw XFS disk image inside a privileged LXC container.

As I understand things right now, this *might* be possible inside a privileged CT, but not in an unprivileged CT?

Code:
pveversion
pve-manager/8.0.3/bbf3993334bfa916 (running kernel: 6.2.16-3-pve)

The CT is a new privileged CT, created with the debian-12-standard_12.0-1_amd64.tar.zst template.

Code:
pct config 102
arch: amd64
cores: 4
features: mount=xfs,nesting=1
hostname: lab
memory: 7629
mp0: /storage/data/mptest/102,mp=/rawtest
net0: name=eth0,bridge=vmbr1,gw=192.168.170.1,hwaddr=06:11:30:F8:DF:58,ip=192.168.170.60/24,type=veth
ostype: debian
parent: clean
rootfs: local-zfs:subvol-102-disk-0,size=16G
swap: 512

The steps to create the raw image, make the XFS filesystem, and add the mount point to the CT were roughly as follows:

Code:
# performed on the host

zfs create storage/data/mptest

mkdir /storage/data/mptest/102

cd $_

qemu-img create -f raw test.raw 1G

mkfs.xfs test.raw

pct set 102 -mp0 /storage/data/mptest/102,mp=/rawtest

Then I set the following features:

Code:
pct stop 102

pct set 102 --features mount=xfs,nesting=1

pct start 102

Then inside the CT I can confirm that I can read the raw xfs partition:

Code:
root@lab:~# fdisk -l /rawtest/test.raw
Disk /rawtest/test.raw: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

BUT alas I cannot mount the xfs raw partition:

Code:
root@lab:~# mount /rawtest/test.raw /mnt/test/
mount: /mnt/test/: mount failed: Operation not permitted.

I did some research on the issue and tried a few things, including modifying the various LXC AppArmor profiles in /etc/apparmor.d/lxc to append mount fstype=xfs, but this seemed to have no impact. I made sure to run systemctl reload apparmor.service after each change.

There was even lxc-default-with-mounting, which already includes mount fstype=xfs, but this didn't seem to help either, even when I specified the override to this profile in the CT config file: lxc.apparmor.profile: lxc-container-default-with-mounting.
The only thing the override seemed to do was disable nesting, which is not desirable :confused:
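
For reference, the mounting profile I was experimenting with looks roughly like this (quoting from memory, so check your own copy under /etc/apparmor.d/lxc/):

Code:
# /etc/apparmor.d/lxc/lxc-default-with-mounting (sketch, may not be verbatim)
profile lxc-container-default-with-mounting flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # allow mounting common block filesystems inside the container
  mount fstype=ext*,
  mount fstype=xfs,
  mount fstype=btrfs,
}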

There don't appear to be any meaningful entries in journalctl. I've included the entries that appear in the journal during a pct stop/start (see post footer):

I've hit a dead end and need some help/advice.

One promising Stack Exchange Q&A can be found here, but alas my attempts based on it aren't working so far.
link: https://unix.stackexchange.com/q/450308/19406 / title: How to allow specific Proxmox LXC containers to mount NFS shares on the network?

Thanks for reading

Code:
Sep 01 18:32:26 cobra pct[1314683]: <root@pam> starting task UPID:cobra:00140F7D:1C708875:64F22E3A:vzstop:102:root@pam:
Sep 01 18:32:26 cobra pct[1314685]: stopping CT 102: UPID:cobra:00140F7D:1C708875:64F22E3A:vzstop:102:root@pam:
Sep 01 18:32:26 cobra kernel: vmbr1: port 3(veth102i0) entered disabled state
Sep 01 18:32:26 cobra kernel: device veth102i0 left promiscuous mode
Sep 01 18:32:26 cobra kernel: vmbr1: port 3(veth102i0) entered disabled state
Sep 01 18:32:26 cobra audit[1314695]: AVC apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=1314695 comm="apparmor_parser"
Sep 01 18:32:26 cobra kernel: audit: type=1400 audit(1693593146.475:490): apparmor="STATUS" operation="profile_remove" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=1314695 comm="apparmor_parser"
Sep 01 18:32:27 cobra pct[1314683]: <root@pam> end task UPID:cobra:00140F7D:1C708875:64F22E3A:vzstop:102:root@pam: OK
Sep 01 18:32:27 cobra systemd[1]: pve-container@102.service: Deactivated successfully.


Sep 01 18:32:28 cobra pct[1314707]: <root@pam> starting task UPID:cobra:00140F94:1C70892F:64F22E3C:vzstart:102:root@pam:
Sep 01 18:32:28 cobra pct[1314708]: starting CT 102: UPID:cobra:00140F94:1C70892F:64F22E3C:vzstart:102:root@pam:
Sep 01 18:32:28 cobra systemd[1]: Started pve-container@102.service - PVE LXC Container: 102.
Sep 01 18:32:28 cobra audit[1314823]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=1314823 comm="apparmor_parser"
Sep 01 18:32:28 cobra kernel: audit: type=1400 audit(1693593148.707:491): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=1314823 comm="apparmor_parser"
Sep 01 18:32:29 cobra kernel: vmbr1: port 3(veth102i0) entered blocking state
Sep 01 18:32:29 cobra kernel: vmbr1: port 3(veth102i0) entered disabled state
Sep 01 18:32:29 cobra kernel: device veth102i0 entered promiscuous mode
Sep 01 18:32:29 cobra kernel: eth0: renamed from veth6yrrxh
Sep 01 18:32:29 cobra cgroup-network[1314887]: Cannot open pid_from_cgroup() file '/sys/fs/cgroup/lxc/102/tasks'.
Sep 01 18:32:29 cobra cgroup-network[1314887]: running: exec /usr/libexec/netdata/plugins.d/cgroup-network-helper.sh --cgroup '/sys/fs/cgroup/lxc/102'
Sep 01 18:32:29 cobra cgroup-network[1314887]: child pid 1314888 exited with code 1.
Sep 01 18:32:29 cobra cgroup-network[1314896]: Cannot read '/sys/class/net/eth0/ifindex'.
Sep 01 18:32:29 cobra cgroup-network[1314896]: Cannot read '/sys/class/net/eth0/iflink'.
Sep 01 18:32:29 cobra cgroup-network[1314896]: there are not double-linked cgroup interfaces available.
Sep 01 18:32:29 cobra pct[1314707]: <root@pam> end task UPID:cobra:00140F94:1C70892F:64F22E3C:vzstart:102:root@pam: OK
Sep 01 18:32:29 cobra pvestatd[1865]: modified cpu set for lxc/101: 0-2,5
Sep 01 18:32:29 cobra kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 01 18:32:29 cobra kernel: vmbr1: port 3(veth102i0) entered blocking state
Sep 01 18:32:29 cobra kernel: vmbr1: port 3(veth102i0) entered forwarding state

AFAIK the cgroup-network errors are related to netdata and aren't a major cause for concern in relation to this post/topic.
 
Note to self: could re-testing after an aa-teardown at least prove that XFS mounts work and that it's an AppArmor issue?
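
If I try it, the experiment would look something like this (aa-teardown unloads every AppArmor profile on the host, so test machines only):

Code:
# on the host -- WARNING: disables ALL AppArmor confinement until AppArmor is restarted
aa-teardown
pct stop 102 && pct start 102

# then inside the CT, retry the mount:
mkdir -p /mnt/test
mount -o loop /rawtest/test.raw /mnt/test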

It would still be nice to get some feedback from others on this topic.
 
I cannot solve your problem, but have you considered mounting the raw image on the host and bind-mounting that mount point into the container instead of the ZFS dataset? This also works with unprivileged containers. Another idea is to check whether you have permission to use loopback devices: you want to mount a disk image (not a partition) stored in a regular file (not a block device), so you need a loopback device to mount the filesystem (which requires block access).
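
A rough, untested sketch of that bind-mount idea (paths are illustrative):

Code:
# on the host: loop-mount the raw image, then bind-mount the mount point into the CT
mkdir -p /mnt/ct102-xfs
mount -o loop /storage/data/mptest/102/test.raw /mnt/ct102-xfs
pct set 102 -mp0 /mnt/ct102-xfs,mp=/rawtest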
 
Another idea is to check whether you have permission to use loopback devices: you want to mount a disk image (not a partition) stored in a regular file (not a block device), so you need a loopback device to mount the filesystem (which requires block access).

Thanks @LnxBil, your info gave me some alternative ideas on how to solve this and pushed me towards a solution.

As per post #1, I'm still not able to figure out how to mount XFS within CTs, so I searched for alternatives. Maybe it does have to do with permissions/visibility of the /dev/loop* devices. See this thread for more details.
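
For anyone who wants to dig into that loop-device angle, a hypothetical, untested sketch of exposing loop devices to a CT via /etc/pve/lxc/102.conf (major 7 is the standard loop block major, char 10:237 is /dev/loop-control):

Code:
# /etc/pve/lxc/102.conf -- sketch, untested
lxc.cgroup2.devices.allow: b 7:* rwm
lxc.cgroup2.devices.allow: c 10:237 rwm
lxc.mount.entry: /dev/loop-control dev/loop-control none bind,create=file 0 0
lxc.mount.entry: /dev/loop0 dev/loop0 none bind,create=file 0 0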

----------------------

Coming back to the topic in #1, there was a relatively straightforward solution in the end, at least for my use case.
My solution came from a web search with keywords: proxmox create container with raw disks

That led me to the thread "container importdisk": https://forum.proxmox.com/threads/container-importdisk.107938/.

From there I was able to figure out a solution.

Caveat: I think this approach will only work with filesystems that start at byte 0 of the raw image. If the raw image contains a partition table, so that the filesystem starts at some offset, this solution is unlikely to work. Why not? You'd need to be able to specify the partition offset when mounting the loop device, e.g. mount -o loop,offset=1048576, which would mount a partition starting 1 MiB into the raw image.
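
For completeness, a hypothetical example of computing that offset from the partition table:

Code:
# hypothetical: fdisk reports the first partition starting at sector 2048, 512-byte sectors
fdisk -l test.raw

# byte offset = start sector * sector size = 2048 * 512 = 1048576 (1 MiB)
mount -o loop,offset=$((2048 * 512)) test.raw /mnt/test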

As of writing I'm using pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-7-pve)

Solution
This worked with a privileged OR unprivileged lxc container:
  1. Datacenter -> Storage - make sure the storage you want to use has Content: Containers enabled.

  2. create a raw image file: qemu-img create -f raw test.raw 1G
    The file must be located within the correct relative structure of the Storage you are using.
    My storage was named storage-vm-raw and was located: /storage/data/vm/raw
    The absolute path to the raw file: /storage/data/vm/raw/images/102/test.raw

  3. Format the filesystem: mkfs.xfs -m crc=1,reflink=1 test.raw

  4. add the raw mp: pct set <vmid> -mp0 storage-vm-raw:<vmid>/test.raw,mp=/rawtest,size=1G
    My <vmid> was 102. If you are already using -mp0 then choose the next free mp. (The consolidated commands are shown below.)
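
Put together, the host-side sequence in my case was roughly (storage name, paths, and vmid from my setup):

Code:
# steps 2-4 consolidated
mkdir -p /storage/data/vm/raw/images/102
cd /storage/data/vm/raw/images/102
qemu-img create -f raw test.raw 1G
mkfs.xfs -m crc=1,reflink=1 test.raw
pct set 102 -mp0 storage-vm-raw:102/test.raw,mp=/rawtest,size=1G
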
If you check losetup -l on the hypervisor, you'll see that the solution uses loopback devices to make the raw file available to the CT.

Inside the CT the mount info looks like this:
Code:
mount|grep xfs
/storage/data/vm/raw/images/102/vm-102-disk-1.raw on /rawtest type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Here is an example CT config file:
Code:
arch: amd64
cores: 4
features: nesting=1
hostname: lab
memory: 7629
mp0: storage-vm-raw:102/vm-102-disk-1.raw,mp=/rawtest,size=1G
net0: name=eth0,bridge=vmbr1,gw=192.168.170.1,hwaddr=06:11:30:__:__:__,ip=192.168.170.60/24,type=veth
ostype: debian
rootfs: local-zfs:subvol-102-disk-0,size=16G
swap: 512
unprivileged: 1
 
If I ever figure out how to mount an XFS partition from within a CT I'll update this thread.
 
