Mountpoints for LXC containers broken after update

Bash:
# apt changelog pve-container
pve-container (6.1.1) trixie; urgency=medium
  * setup: plugin interface: add missing check_systemd_nesting stub.
  * fix #7270: setup: add no-op check_systemd_nesting implementation for
    unmanaged containers.
  * setup: add no-op detect_architecture for unmanaged CTs.
  * fix #7271: exclude non-volume mount points, like e.g. bind mounts, from
    attribute preservation.
 -- Proxmox Support Team <support@proxmox.com>  Fri, 06 Feb 2026 15:40:44 +0100
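To confirm whether the fixed version has actually landed on your node after upgrading, a quick check with standard apt/Proxmox tooling (nothing specific to this fix) should be enough:

```shell
# installed vs. candidate version of the package
apt policy pve-container

# or the Proxmox-wide version overview, filtered to this package
pveversion -v | grep pve-container
```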

Just installed it now, and everything appears to be working fine for my LXC containers that had NFS mounts. So fix #7271 was what was required.

Yep, I updated this morning and the read-only NFS shares from my TrueNAS are working in my containers again. The read-only attribute of the NFS shares is set on TrueNAS.

I used apt-mark unhold pve-container to get the package updated.
 
I wish the Proxmox VE developers would test this kind of breaking change BEFORE deploying it.

Well, using the non-enterprise PVE repos, we ARE the beta testers. If they had deployed this on the enterprise repos, then yes, you'd have a valid point. This is the risk we take using non-enterprise repos. These kinds of breakages are very rare and quickly fixed. I use the non-enterprise repos for my home lab, and it's not critical to me if things break. I do have a couple of production clusters at work that use the enterprise repos.
 
I updated this evening, and indeed it seems to have fixed this specific issue.

Seems to work for me as well, and I don't have any write access to that mountpoint (from the host, I mean) :).
 
You have a point. Isn't pve-test where changes land first, though, before they get to pve-no-subscription?
 
Per the documentation, it does state: "It’s not recommended to use this on production servers, as these packages are not always as heavily tested and validated." Source: https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo

Without knowing the development cycle, my only guess is that it was missed in testing before it got to the no-subscription repo. It's worth noting, however, that this is probably the first bug I've personally encountered since I started using Proxmox around 2017.
 
Obviously you weren't hit by these :)

- https://forum.proxmox.com/threads/debian-13-1-lxc-template-fails-to-create-start-fix.171435/
- https://forum.proxmox.com/threads/upgrading-pve-tries-to-remove-proxmox-ve-package.149101/
- https://forum.proxmox.com/threads/proxmox-kernel-6-8-12-2-freezes-again.154875/

And many others (I intentionally left out those that were specific to me and/or due to (mis)configuration issues).

The first one in particular is very similar to this one: on host reboot, none of the affected containers start up.
 
pve-container 6.1.1 still breaks my unprivileged LXC with a cross-LXC ZFS mount point, whereas it works when I downgrade/lock to 6.0.18.

Code:
run_buffer: 571 Script exited with status 30
lxc_init: 845 Failed to run lxc.hook.pre-start for container "1102"
__lxc_start: 2046 Failed to initialize container "1102"
TASK ERROR: startup for container '1102' failed
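For anyone still hitting this, running the failing start in the foreground usually shows exactly which pre-start hook step exits with status 30. These are standard Proxmox/LXC debugging commands (CT id 1102 taken from the log above; the log path is just an example):

```shell
# start via Proxmox tooling with debug output
pct start 1102 --debug

# or go through lxc directly for a full trace of the pre-start hook
lxc-start -n 1102 -F -l DEBUG -o /tmp/lxc-1102.log
```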
 
I also think that pve-container 6.1.1 is still broken. I am running a privileged LXC into which I mount a ZFS subvol from my main ZFS pool. A user (UID 1050) inside the LXC needs to be able to write to a folder inside this subvol, so I made user 1050 the owner via chown. However, since pve-container 6.1.1, the ownership of the subvol and all folders inside it is reset to root every time the container or the host is rebooted.
With pve-container 6.0.18, the ownership is kept as set (which I think is how it should be). I will wait until pve-container 6.1.2 is released, as I believe it will address my specific issue.
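Until a fixed release shows up, downgrading and pinning the package is the usual workaround. This is a sketch using standard apt commands; the version string is the one reported in this thread, so check `apt policy pve-container` for what your repo actually offers:

```shell
# install the known-good version and hold it so apt won't upgrade it again
apt install pve-container=6.0.18
apt-mark hold pve-container

# later, once a fixed release is out:
apt-mark unhold pve-container
```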
 
Maybe it works fine for me because it's supposed to be read-only anyway.
It's basically all the scripts I have on my NAS, mounted as read-only for security reasons:
Code:
mp0: /tools_nfs,mp=/tools_nfs,mountoptions=discard;noatime,ro=1

These show up as owned by nobody:nobody inside the LXC container (the share is read-only and owned by root:root on the Proxmox VE host).
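As an aside, the nobody:nobody ownership is the normal unprivileged-container behaviour: host uids/gids that fall outside the container's id mapping are shown as the overflow user. If you ever needed real ownership to carry through, a custom idmap in the CT config would do it; the following is only an illustrative sketch (uid 1050 chosen as an example, and the host ranges must also be allowed in /etc/subuid and /etc/subgid):

```
# /etc/pve/lxc/<ctid>.conf -- map container uid/gid 1050 straight to host 1050,
# keep the default 100000 offset for everything else (illustrative values)
lxc.idmap: u 0 100000 1050
lxc.idmap: g 0 100000 1050
lxc.idmap: u 1050 1050 1
lxc.idmap: g 1050 1050 1
lxc.idmap: u 1051 101051 64485
lxc.idmap: g 1051 101051 64485
```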

On the other hand, for read-write access, this (still) works with correct read-write permissions & ownership (for Podman containers, see this thread):
Code:
lxc.mount.entry: /zdata/<application> home/podman/containers/data/<application> none bind,create=dir 0 0

(Yes, the path is correct; there is no leading / in home/podman in the way these mount points are specified ;) )
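For comparison, if the same bind mount were expressed as a regular Proxmox mount point (the mpX style that the attribute-preservation fix in 6.1.1 actually touches), it would look roughly like this; the source path and target are taken from the lxc.mount.entry above, and the index mp1 is arbitrary:

```
mp1: /zdata/<application>,mp=/home/podman/containers/data/<application>
```

Raw lxc.mount.entry lines bypass the pve-container mount-point handling entirely, which may be why that setup was never affected by this bug.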