Search results

  1. [SOLVED] Docker inside LXC (net.ipv4.ip_unprivileged_port_start error)

    For me it's not so much about RAM as it's about Disk Space. RAM wise I agree that with Dynamic Memory Management (VM Memory using Balloon Feature) it works already much better. I use podman (rootless) instead of docker. As a Reference Point, although of course that's highly subjective, I have...
  2. [SOLVED] Debian 13 LXC networking.service failed

    I lost approx. 1h this evening with the Error dhcpcd[139]: eth0: ipv6_start: Cannot assign requested address. Nothing worked. I tried Managed Host Configuration (checked), Unmanaged Host configuration (unchecked), DHCPv6, SLAAC, Static. Nothing worked. In desperation and seeing this Post...
  3. Trying to set IPv6 token, adding LXC options in container config file

    Late Reply but one Option might be to mark the File as something that Proxmox VE shouldn't touch. I do this for /etc/resolv.conf inside a PiHole LXC Container, but I guess with the right Name it would work for anything. File /etc/.pve-ignore.resolv.conf: #...
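    The marker-file trick in this snippet is a documented pct mechanism: for certain files that Proxmox VE regenerates inside the container on start (such as /etc/resolv.conf), an empty /etc/.pve-ignore.<filename> file disables that. A minimal sketch of the Pi-hole example from the post, to be run inside the container:

    ```sh
    # Run inside the LXC container (root shell).
    # An empty marker file named .pve-ignore.<filename> tells Proxmox VE
    # not to overwrite the corresponding file when the container starts.
    touch /etc/.pve-ignore.resolv.conf
    ```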
  4. Nothing works anymore "can't lock file '/run/lock/lxc/pve-config-xxx.lock"

    I just had this happen to me. It wasn't very straightforward, but this seems to work: kill the lxc-start Process that started the Container; manually remove the Lock File; use lxc-stop with the --kill and --nolock Arguments to (try to) stop the Container (most likely it already stopped); use pct...
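    The recovery steps in this snippet can be sketched as follows; the CTID 123 is a placeholder, and the lock path should match the one named in the error message:

    ```sh
    # On the Proxmox VE host; CTID 123 is a placeholder.
    pkill -f 'lxc-start.*123'                  # kill the stuck lxc-start process
    rm -f /run/lock/lxc/pve-config-123.lock    # manually remove the stale lock file
    lxc-stop -n 123 --kill --nolock            # force-stop, bypassing the lock
    pct start 123                              # start the container again
    ```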
  5. LXC Unprivileged Container Isolation

    Actually my Script (which I improved quite a bit in the Version I have locally) works OK. Now the only Issue is about the NFS read-only Share Mount that was bind-mounted inside the Container (Group lxc_shares or 100000) which now is not accessible anymore :(. I might need to setup an...
  6. LXC Unprivileged Container Isolation

    Thanks for your in-depth Explanation :) . Maybe to add yet another Attack Surface related to Mountpoints: what about the Case of a shared GPU via one or more of the following dev0: /dev/dri/card0,mode=0660 dev1: /dev/dri/renderD128,gid=992,mode=0666 lxc.mount.entry: /dev/net dev/net none...
  7. Mountpoints for LXC containers broken after update

    Maybe for me it works fine because it's supposed to be read-only anyways. It's basically all my Scripts I have on my NAS, mounted as read-only for Security Reasons: mp0: /tools_nfs,mp=/tools_nfs,mountoptions=discard;noatime,ro=1 These show up as owned by nobody:nobody in the LXC Container (it's...
  8. LXC - Remount mountpoint without rebooting Container

    A reboot of the LXC Container does NOT work for me. I need to first stop the LXC Container, wait a few Seconds, then start the LXC Container again. In a normal Situation, the Share is already mounted on the Host. The only "Fix" I could find for the initial Mount (after Host Boot) is: pvenode config...
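    A sketch of the stop/wait/start cycle described above, assuming pct on the host and a placeholder CTID of 123:

    ```sh
    # On the Proxmox VE host; a reboot from inside the container is not enough.
    pct stop 123     # fully stop the container
    sleep 5          # give the bind mount a few seconds to settle
    pct start 123    # start it again so the mountpoint is picked up
    ```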
  9. LXC Unprivileged Container Isolation

    I am trying to understand the Security Architecture of LXC Unprivileged Containers a bit better. I am familiar with Virtual Machines (KVM) and also Podman Containers (similar to Docker), but relatively recently I've been deploying quite a few LXC Unprivileged Containers, I can...
  10. How to enable 1G hugepages?

    Thanks for the Explanation. Yeah, I guess the Variable Name is just very confusing. Why have 2 Settings if they have to be the same :) ? I should have read the Description on Kernel.org, but I just thought it was something like: Obviously that wasn't the Case.
  11. How to enable 1G hugepages?

    In my case it wasn't working at all. I used to have this in /etc/default/grub.d/hugepages.cfg: GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} default_hugepagesz=2M hugepagesz=1G hugepages=64 transparent_hugepage=never" GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} default_hugepagesz=2M...
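    For comparison, a sketch of the same fragment with the two sizes matching, which (as the follow-up reply suggests) is apparently what makes the 1G pages usable; the page count of 64 is taken from the snippet above, and update-grub plus a reboot are still required afterwards:

    ```sh
    # /etc/default/grub.d/hugepages.cfg - sketch, assuming (per the thread)
    # that default_hugepagesz must match hugepagesz for 1G pages to work
    GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} default_hugepagesz=1G hugepagesz=1G hugepages=64 transparent_hugepage=never"
    GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} default_hugepagesz=1G hugepagesz=1G hugepages=64 transparent_hugepage=never"
    ```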
  12. Mountpoints for LXC containers broken after update

    Obviously you weren't hit by these :) - https://forum.proxmox.com/threads/debian-13-1-lxc-template-fails-to-create-start-fix.171435/ - https://forum.proxmox.com/threads/upgrading-pve-tries-to-remove-proxmox-ve-package.149101/ -...
  13. Mountpoints for LXC containers broken after update

    You have a Point. Isn't it pve-test where the modifications land first, though, before they get to pve-no-subscription?
  14. Mountpoints for LXC containers broken after update

    I updated this Evening and indeed it seems to have fixed this specific Issue. It also seems to work for me, even though I don't have any write Access to that Mountpoint (from the Host, I mean) :).
  15. Mountpoints for LXC containers broken after update

    Keep in Mind that that only works if your Host System can actually write to that Directory ;) . It doesn't work in my Case with a read-only NFS Share exported by a remote NFS Server. Thus a more permanent Fix is needed. Currently the only "fix" (besides downgrading & apt Pinning) is to comment...
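    The temporary workaround of commenting out the mountpoint might look like this in the container's config file; the CTID 123 is a placeholder, and the mp0 line is the read-only NFS example from the other replies in this thread:

    ```
    # /etc/pve/lxc/123.conf (123 is a placeholder CTID)
    # temporarily disable the read-only mountpoint so the container can start:
    #mp0: /tools_nfs,mp=/tools_nfs,mountoptions=discard;noatime,ro=1
    ```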
  16. Mountpoints for LXC containers broken after update

    Well, maybe so, but a breaking Change such as this one (which is NOT the first by a long Shot), which causes Containers to not start, should be treated more carefully and not just forced upon Users. It clearly doesn't work. The System is up to date with the latest Updates as of 2026-02-08 14h36 GMT+1...
  17. Mountpoints for LXC containers broken after update

    I confirm I was also just affected by this. I opened another Issue a few Minutes ago about it: https://forum.proxmox.com/threads/lxc-fails-to-start-when-using-read-only-mountpoint.180440/
  18. LXC Fails to start when using read-only Mountpoint

    I have a Mountpoint /tools_nfs that is mounted read-only on most of my Systems, and I want to be able to pass it to each Container. It used to work OK until now. I suspect one of the last System Updates messed it up :( . Attached is the Debug Log obtained by lxc-start -F --logpriority=DEBUG...