read-only LXC mount-point fails

Republicus

It's impossible to mount a read-only mount point on NFS storage.

The workaround is to remove the Read-only option, which allows the container to boot.

I really want to mount it read-only, and this used to work in PVE 5.x.
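
For illustration, the mount-point entry in question looks roughly like the line below (storage, volume and container ID are placeholders, not my actual values), and the workaround amounts to re-adding the same mount point without the `ro=1` flag; with the flag set, the container fails to start as shown in the log further down:

Code:
# /etc/pve/lxc/<vmid>.conf -- read-only mount point that blocks the start
mp0: <storage>:<volume>,mp=/mnt/letsencrypt,ro=1,size=2G

# workaround from the shell: set the same mount point without ro=1
pct set <vmid> -mp0 <storage>:<volume>,mp=/mnt/letsencrypt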


Code:
● pve-container@20005.service - PVE LXC Container: 20005
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2019-11-05 18:43:30 EST; 37s ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 10126 ExecStart=/usr/bin/lxc-start -n 20005 (code=exited, status=1/FAILURE)

Nov 05 18:43:29 node02 systemd[1]: Starting PVE LXC Container: 20005...
Nov 05 18:43:30 node02 lxc-start[10126]: lxc-start: 20005: lxccontainer.c: wait_on_daemonized_start: 865 No such file or directory - Failed to receive the container state
Nov 05 18:43:30 node02 lxc-start[10126]: lxc-start: 20005: tools/lxc_start.c: main: 329 The container failed to start
Nov 05 18:43:30 node02 lxc-start[10126]: lxc-start: 20005: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode
Nov 05 18:43:30 node02 lxc-start[10126]: lxc-start: 20005: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority option
Nov 05 18:43:30 node02 systemd[1]: pve-container@20005.service: Control process exited, code=exited, status=1/FAILURE
Nov 05 18:43:30 node02 systemd[1]: pve-container@20005.service: Failed with result 'exit-code'.
Nov 05 18:43:30 node02 systemd[1]: Failed to start PVE LXC Container: 20005.
 
Cannot reproduce. More details, please: the working and non-working configs from `# pct config $vmid`.
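
For reference, you can dump both from the node like this (the vmid placeholders stand for your two containers):

Code:
pct config <vmid_of_working_container>
pct config <vmid_of_failing_container>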
 
I was trying to mount an LXC raw disk from another container as read-only.
In this case, the originating container acts as a Let's Encrypt SSL generator with RW access to a raw container disk. I want to share that disk as a read-only mount point across several containers.

I believe this used to work properly.

I've noticed a couple of conditions under which this does work.
It depends on the order in which the containers are started: as long as the LAST container to start is the one mounting the raw disk RW, all is well. But if the RW container starts before the secondary RO containers, only the originating RW container starts. A rough command-line sketch of the two orderings follows the list below.


Orig LXC mounts RW => Second LXC mounts RW: SUCCESS, both containers start.
Orig LXC mounts RW => Second LXC mounts RO: FAILS, only the orig container starts.
Orig LXC mounts RO => Second LXC mounts RO: SUCCESS, both containers start.
Second LXC mounts RO => Orig LXC mounts RW: SUCCESS, both containers start.
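
Here's a rough shell sketch of the two orderings, assuming 71150 is the originating container that owns the raw disk and 20005 is a secondary container mounting it with ro=1 (that's my reading of the config below, so treat the IDs as illustrative):

Code:
# failing order: RW owner first, RO consumer second
pct start 71150    # originating container, mounts its raw disk RW
pct start 20005    # secondary container with ro=1 -- fails to start

# working order: RO consumer first, RW owner last
pct start 20005    # secondary RO container starts fine
pct start 71150    # originating RW container starts afterwards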


Code:
arch: amd64
cores: 2
features: keyctl=1,nesting=1
hostname: multistream
memory: 2048
mp0: nfs-async:71150/vm-71150-disk-1.raw,mp=/mnt/letsencrypt,ro=1,size=2G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.20.1,hwaddr=77:B3:1F:E0:F4:A6,ip=192.168.20.5/29,tag=20,type=veth
onboot: 1
ostype: debian
rootfs: nfs-async:20005/vm-20005-disk-0.raw,size=8G
swap: 2048
unprivileged: 1
 
It looks like you're sharing raw image files with multiple containers (mp0's vmid differs from that of the rootfs entry)? That will most certainly result in unexpected breakage sooner or later. You should use a bind-mounted directory instead; that should also get around the ordering issue...
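
For illustration, a bind mount of a plain directory would look something like this (the directory path is made up; it just needs to exist on every node that can run the containers, e.g. on an NFS mount of the host):

Code:
# directory on the host / shared storage (placeholder path)
mkdir -p /mnt/pve/nfs-async/letsencrypt

# originating container gets it read-write
pct set 71150 -mp0 /mnt/pve/nfs-async/letsencrypt,mp=/mnt/letsencrypt

# consumers get the same directory read-only
pct set 20005 -mp0 /mnt/pve/nfs-async/letsencrypt,mp=/mnt/letsencrypt,ro=1

Keep in mind that with unprivileged containers you may also need to sort out ownership/permissions of that directory, since the container's uids are shifted on the host.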
 
