Late reply, but one option might be to mark the file as something that Proxmox VE shouldn't touch.
I do this for /etc/resolv.conf inside a Pi-hole LXC container, but I guess with the right name it would work for anything.
File...
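As far as I remember from the pct docs, the marker convention is an empty `.pve-ignore.<name>` file placed next to the managed file inside the container. A minimal sketch (a scratch directory stands in for the container's /etc here, so it can be tried anywhere):

```shell
# Proxmox VE skips regenerating any managed file that has a matching
# .pve-ignore.<name> marker, e.g. /etc/.pve-ignore.resolv.conf protects
# /etc/resolv.conf. Scratch directory used below instead of the real /etc:
etc=$(mktemp -d)
touch "$etc/.pve-ignore.resolv.conf"
ls -A "$etc"
# prints: .pve-ignore.resolv.conf
```

Inside the real container you would simply run `touch /etc/.pve-ignore.resolv.conf` once; the file's contents don't matter, only its name.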
I just had this happen to me.
It wasn't very straightforward, but this seems to work:
Kill the lxc-start process that started the container
Manually remove the lock file
Use lxc-stop with the --kill and --nolock arguments to (try to) stop the...
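Put together as a rough helper (the lxc-start match pattern and the lock-file path are assumptions about a typical PVE setup, so double-check both on your host before using it):

```shell
# Sketch of the three recovery steps above; pattern and lock path are
# assumptions, verify them on your system first.
force_stop_ct() {
  ctid="$1"
  # 1. Kill the lxc-start process that launched the container
  #    (the -f pattern may need adjusting to your exact lxc-start invocation)
  pkill -9 -f "lxc-start.*${ctid}" || true
  # 2. Manually remove the per-container lock file (path is an assumption)
  rm -f "/run/lock/lxc/pve-config-${ctid}.lock"
  # 3. Try to stop the container, skipping the lock and sending SIGKILL
  lxc-stop --name "$ctid" --kill --nolock || true
}
# Usage (on the PVE host, as root): force_stop_ct 101
```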
Actually my script (which I improved quite a bit in the version I have locally) works OK.
Now the only issue is with the NFS read-only share mount that was bind-mounted inside the container (group lxc_shares or 100000), which now is not...
Thanks for your in-depth explanation :).
Maybe to add yet another attack surface related to mount points: what about the case of a GPU shared via one or more of the following
dev0: /dev/dri/card0,mode=0660
dev1...
By default all unprivileged containers map to the same host range (typically 100000:65536). The actual isolation relies on multiple kernel layers, not just UIDs:
1. Mount namespaces: each container has its own filesystem view. A process inside...
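The UID mapping itself is visible in /proc. On the host it is the identity map; inside a default unprivileged container it reads roughly "0 100000 65536" (the exact range depends on /etc/subuid, so treat the numbers as an example):

```shell
# Each line of uid_map is: <start inside ns> <start outside ns> <length>.
# Run on the host this prints the identity map; run inside a default
# unprivileged container it would show "0 100000 65536", i.e. container
# UID u appears on the host as 100000 + u (for u < 65536).
cat /proc/self/uid_map
```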
Most probably: https://bugzilla.proxmox.com/show_bug.cgi?id=7271
Please try downgrading pve-container like this: apt install pve-container=6.0.18. That should fix the problem. A patch is already on the way.
Maybe for me it works fine because it's supposed to be read-only anyway.
It's basically all the scripts I have on my NAS, mounted read-only for security reasons:
mp0: /tools_nfs,mp=/tools_nfs,mountoptions=discard;noatime,ro=1
These show up as...
A reboot of the LXC container does NOT work for me.
I need to first stop the LXC container, wait a few seconds, and then start the LXC container again.
In a normal situation, the share is already mounted on the host.
The only "fix" I could find...
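As a stopgap, the stop/wait/start cycle can be wrapped in a tiny helper (the CTID and the delay are arbitrary placeholders; pct is the standard PVE CLI):

```shell
# Workaround sketch: a reboot is NOT enough, a full stop/start cycle is.
restart_ct() {
  ctid="$1"
  pct stop "$ctid"
  sleep 5           # give the host a few seconds before starting again
  pct start "$ctid"
}
# Usage (on the PVE host): restart_ct 101
```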
I am trying to understand a bit better the security architecture of unprivileged LXC containers.
I am familiar with virtual machines (KVM) and also Podman containers (similar to Docker), but relatively recently I've been deploying quite a...
Thanks for the explanation.
Yeah, I guess the variable name is just very confusing. Why have 2 settings if they have to be the same :)?
I should have read the description on kernel.org, but I just thought it was something like:
Obviously that...
You need to either use 2 MB pages or 1 GB pages, not mix them like you're trying to do.
Huge pages will only be used by VMs you have configured to use huge pages; otherwise they will use standard 4k pages.
Huge page allocation also needs to take into account NUMA...
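For reference, enabling huge pages is a per-VM option. A hypothetical fragment of /etc/pve/qemu-server/<vmid>.conf (values are placeholders, and hugepages generally goes together with NUMA being enabled on the VM):

```
# Hypothetical fragment of /etc/pve/qemu-server/<vmid>.conf:
# use 1 GiB pages for this VM ("2" = 2 MiB, "any" = either size)
hugepages: 1024
# hugepage allocation is NUMA-aware, so NUMA should be enabled
numa: 1
```

The same can be set from the CLI with qm set, if I remember correctly.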
In my case it wasn't working at all.
I used to have this in /etc/default/grub.d/hugepages.cfg:
GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} default_hugepagesz=2M hugepagesz=1G hugepages=64 transparent_hugepage=never"...
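For comparison, a consistent 1 GiB variant of that line (just a sketch: 64 x 1 GiB = 64 GiB reserved at boot, so adjust the count to your RAM):

```shell
# Consistent sizes: default_hugepagesz matches hugepagesz, so hugepages=64
# reserves 64 x 1 GiB pages at boot (adjust the count to your RAM).
GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} default_hugepagesz=1G hugepagesz=1G hugepages=64 transparent_hugepage=never"
```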
Obviously you weren't hit by these :)
- https://forum.proxmox.com/threads/debian-13-1-lxc-template-fails-to-create-start-fix.171435/
- https://forum.proxmox.com/threads/upgrading-pve-tries-to-remove-proxmox-ve-package.149101/
-...
I updated this evening and indeed it seems to have fixed this specific issue.
It seems to work for me as well, even though I don't have any write access to that mountpoint (from the host, I mean) :).
Keep in mind that this only works if your host system can actually write to that directory ;).
It doesn't work in my case with a read-only NFS share exported by a remote NFS server.
Thus a more permanent fix is needed. Currently the only "fix"...
Well, maybe so, but a breaking change such as this one (which is NOT the first by a long shot) that causes containers to not start should be treated more carefully and not just forced upon users.
It clearly doesn't work. The system is up to date...