Read-only bind mount of a subvolume from another container prevents container start

h0a

Member
Sep 28, 2021
I have two containers A and B.

Container A #101 has only a rootfs:
Code:
arch: amd64
cores: 1
hostname: A
memory: 2048
nameserver: 127.0.0.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=x.x.x.1,hwaddr=xx:xx:xx:xx:xx:xx,ip=x.x.x.7/24,type=veth
onboot: 1
ostype: alpine
rootfs: data-lvmthin:vm-101-disk-0,size=8G
startup: order=2
swap: 2048
unprivileged: 1

Container B #102 has its volumes spread over three different storages (some subvolumes are still named 103 because they were taken over from a former container 103 some time ago):

1) the rootfs on an SSD RAID10 LVM-thin storage
2) two mount points on a single M.2 SSD LVM-thin storage
3) the remaining mount points on a ZFS HDD RAID

Code:
arch: amd64
cores: 8
features: nesting=1
hostname: B
memory: 24576
mp0: m2ssd-lvmthin:vm-102-disk-0,mp=/mnt/subdirs/12,backup=1,size=1400G
mp1: DATA-zfs:subvol-103-disk-0,mp=/mnt/subdirs/11,backup=1,size=6000G
mp2: m2ssd-lvmthin:vm-102-disk-1,mp=/mnt/subdirs/7,backup=1,size=700G
mp3: DATA-zfs:subvol-103-disk-2,mp=/mnt/subdirs/13,backup=1,size=6000G
mp4: DATA-zfs:subvol-103-disk-3,mp=/mnt/subdirs/14,backup=1,size=6000G
mp5: DATA-zfs:subvol-103-disk-4,mp=/mnt/subdirs/16,backup=1,size=6000G
mp6: DATA-zfs:subvol-102-disk-0,mp=/mnt/subdirs/15,backup=1,size=200G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=x.x.x.1,gw6=xx:...:xx,hwaddr=xx:xx:xx:xx:xx:xx,ip=x.x.x.2/24,ip6=xx:...:xx/64,type=veth
onboot: 1
ostype: ubuntu
rootfs: data-lvmthin:vm-102-disk-0,size=789G
startup: order=5
swap: 24576
unprivileged: 1
unused0: DATA-zfs:subvol-103-disk-1
unused1: DATA-zfs:subvol-102-disk-1

Now I am trying to bind mount one of CT 102's volumes into CT 101, but read-only.
This used to work when the mount point was still on the ZFS RAID: just adding the parameter ro=1, as per the documentation, did the job.
The new volume m2ssd-lvmthin:vm-102-disk-0 is on a single M.2 SSD drive.
Now, with ro=1, container 101 will not start.
Without the flag it starts without problems.
Here is the conf line for the mount point:
Code:
mp0: m2ssd-lvmthin:vm-102-disk-0,mp=/mnt/ct102-bindmount,size=1400G,ro=1

Starting the container via the GUI or via pct fails.

The logs are not very informative: other than "mount point busy" (see below), I cannot find anything about why it fails.

Bind mounting read-only used to work until recently.

Does anybody have any idea how to get the subvolume bind-mounted read-only?

Here is the error output when starting with lxc-start:

Code:
root@server:~# lxc-start -F -n 101 -l TRACE -o 101.lxc.log 
lxc-start: 101: ../src/lxc/conf.c: run_buffer: 322 Script exited with status 32 
lxc-start: 101: ../src/lxc/start.c: lxc_init: 844 Failed to run lxc.hook.pre-start for container "101" 
lxc-start: 101: ../src/lxc/start.c: __lxc_start: 2027 Failed to initialize container "101" 
lxc-start: 101: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start 
lxc-start: 101: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

The log file contains:

Code:
lxc-start 101 20240306XXXXX8.358 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 101 20240306XXXXX8.358 INFO     confile - ../src/lxc/confile.c:set_config_idmaps:2273 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 101 20240306XXXXX8.358 TRACE    commands - ../src/lxc/commands.c:lxc_cmd:514 - Connection refused - Command "get_init_pid" failed to connect command socket
lxc-start 101 20240306XXXXX8.358 TRACE    commands - ../src/lxc/commands.c:lxc_cmd:514 - Connection refused - Command "get_state" failed to connect command socket
lxc-start 101 20240306XXXXX8.358 TRACE    commands - ../src/lxc/commands.c:lxc_server_init:2129 - Created abstract unix socket "/var/lib/lxc/101/command"
lxc-start 101 20240306XXXXX8.358 TRACE    start - ../src/lxc/start.c:lxc_init_handler:754 - Unix domain socket 4 for command server is ready
lxc-start 101 20240306XXXXX8.359 TRACE    start - ../src/lxc/start.c:lxc_start:2221 - Doing lxc_start
lxc-start 101 20240306XXXXX8.359 INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
lxc-start 101 20240306XXXXX8.359 TRACE    start - ../src/lxc/start.c:lxc_init:778 - Initialized LSM
lxc-start 101 20240306XXXXX8.359 TRACE    start - ../src/lxc/start.c:lxc_serve_state_clients:483 - Set container state to STARTING
lxc-start 101 20240306XXXXX8.359 TRACE    start - ../src/lxc/start.c:lxc_serve_state_clients:486 - No state clients registered
lxc-start 101 20240306XXXXX8.359 TRACE    start - ../src/lxc/start.c:lxc_init:784 - Set container state to "STARTING"
lxc-start 101 20240306XXXXX8.359 TRACE    start - ../src/lxc/start.c:lxc_init:840 - Set environment variables
lxc-start 101 20240306XXXXX8.359 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
lxc-start 101 20240306XXXXX8.764 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/mp0: /dev/mapper/m2ssd--vg-vm--102--disk--0 already mounted or mount point busy.

lxc-start 101 20240306XXXXX8.764 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: command 'mount -o ro /dev/dm-XY /var/lib/lxc/.pve-staged-mounts/mp0' failed: exit code 32

lxc-start 101 20240306XXXXX8.775 ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 32
lxc-start 101 20240306XXXXX8.775 ERROR    start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "101"
lxc-start 101 20240306XXXXX8.775 ERROR    start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "101"
lxc-start 101 20240306XXXXX8.775 TRACE    start - ../src/lxc/start.c:lxc_serve_state_clients:483 - Set container state to ABORTING
lxc-start 101 20240306XXXXX8.775 TRACE    start - ../src/lxc/start.c:lxc_serve_state_clients:486 - No state clients registered
lxc-start 101 20240306XXXXX8.775 TRACE    start - ../src/lxc/start.c:lxc_serve_state_clients:483 - Set container state to STOPPING
lxc-start 101 20240306XXXXX8.775 TRACE    start - ../src/lxc/start.c:lxc_serve_state_clients:486 - No state clients registered
lxc-start 101 20240306XXXXX8.775 TRACE    start - ../src/lxc/start.c:lxc_end:963 - Closed command socket
lxc-start 101 20240306XXXXX8.775 TRACE    start - ../src/lxc/start.c:lxc_end:974 - Set container state to "STOPPED"
lxc-start 101 20240306XXXXX8.775 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "101", config section "lxc"
lxc-start 101 20240306XXXXX9.164 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount: /var/lib/lxc/.pve-staged-mounts/mp3: not mounted.

lxc-start 101 20240306XXXXX9.164 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount -- /var/lib/lxc/.pve-staged-mounts/mp3' failed: exit code 32

lxc-start 101 20240306XXXXX9.167 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount: /var/lib/lxc/.pve-staged-mounts/mp4: not mounted.

lxc-start 101 20240306XXXXX9.167 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount -- /var/lib/lxc/.pve-staged-mounts/mp4' failed: exit code 32

lxc-start 101 20240306XXXXX9.170 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount: /var/lib/lxc/.pve-staged-mounts/mp0: not mounted.

lxc-start 101 20240306XXXXX9.170 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount -- /var/lib/lxc/.pve-staged-mounts/mp0' failed: exit code 32

lxc-start 101 20240306XXXXX9.173 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount: /var/lib/lxc/.pve-staged-mounts/mp1: not mounted.

lxc-start 101 20240306XXXXX9.173 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount -- /var/lib/lxc/.pve-staged-mounts/mp1' failed: exit code 32

lxc-start 101 20240306XXXXX9.175 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount: /var/lib/lxc/.pve-staged-mounts/mp6: not mounted.

lxc-start 101 20240306XXXXX9.176 DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount -- /var/lib/lxc/.pve-staged-mounts/mp6' failed: exit code 32

lxc-start 101 20240306XXXXX9.214 INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "101", config section "lxc"
lxc-start 101 20240306XXXXX9.716 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:306 - The container failed to start
lxc-start 101 20240306XXXXX9.716 ERROR    lxc_start - ../src/lxc/tools/lxc_start.c:main:311 - Additional information can be obtained by setting the --logfile and --logpriority options

PVE version information:
Code:
root@srv:~# pveversion --verbose
proxmox-ve: 7.4-1 (running kernel: 5.15.136-1-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-10
pve-kernel-5.15.136-1-pve: 5.15.136-1
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.6-1
proxmox-backup-file-restore: 2.4.6-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+2
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.14-pve1
 
You are trying to share a block device between two containers. This can lead to conflicts if both containers access the block device at the same time.
The reason the container cannot start when the mount point is read-only can be seen with dmesg: "Can't mount, would change RO state"; the filesystem is already mounted read-write for CT 102, so the kernel refuses to mount it a second time with a different read-only state.
It worked with ZFS because you were mounting a subvolume (a directory tree) instead of a raw block device.
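If CT 102 is running, you can see this directly on the host: the thin LV is already mounted read-write for CT 102, so the second, read-only mount attempt for CT 101 is refused. A quick check (device name taken from your log, adjust as needed):
Code:
# kernel message from the failed read-only mount attempt
dmesg | tail
# show where the LV is already mounted (read-write, for CT 102)
findmnt /dev/mapper/m2ssd--vg-vm--102--disk--0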

Instead, you could mount the disk on the host and then bind mount the directory containing the files into the containers.
Code:
pct set 101 -mp0 /mnt/shared,mp=/mnt/ct102-bindmount,ro=1
pct set 102 -mp0 /mnt/shared,mp=/mnt/ct102-bindmount
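For the host side, a minimal sketch (assuming CT 102 is stopped first, that you detach the volume from its config so the host owns the mount, and that /mnt/shared is the host path; adjust names to your setup):
Code:
# detach the LV from CT 102; it stays in the config as an unusedX entry
pct set 102 --delete mp0
# mount the thin LV on the host (device name as seen in your log)
mkdir -p /mnt/shared
mount /dev/mapper/m2ssd--vg-vm--102--disk--0 /mnt/shared
For a permanent setup, add the mount to /etc/fstab or a systemd mount unit so it is in place before the containers start on boot.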
 
Thank you for your answer!
I will try the other approach with host-based bind mounts.
 
