Well, for some weird reason the -I include directive didn't seem to work.
Editing the file /var/lib/dkms/kernel-mft-dkms/4.22.1/build/mst_backward_compatibility/mst_pci/mst_pci_bc.c manually and prefixing the paths with ../../nnt_driver/ did the...
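That manual edit can also be scripted. A minimal sketch, demonstrated on a scratch copy rather than the real DKMS tree; the header name mst_pci.h is a placeholder, so match the pattern against the actual #include lines in mst_pci_bc.c:

```shell
# Sketch: rewrite quoted #include paths to point at ../../nnt_driver/.
# The header name below is a placeholder for illustration only.
src=$(mktemp --suffix=.c)
printf '#include "mst_pci.h"\n' > "$src"
# Prefix every quoted, non-absolute include path with ../../nnt_driver/
sed -i 's|#include "\([^/"][^"]*\)"|#include "../../nnt_driver/\1"|' "$src"
cat "$src"
```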
Just install Proxmox VE on top of Debian Trixie :).
I've been doing that for several years now, since I also had VGA issues back in the day.
The script set is by no means perfect, but it works well enough for me.
Configure based on the example...
For me it's not so much about RAM as it is about disk space.
RAM-wise I agree that with dynamic memory management (VM memory using the balloon feature) it already works much better.
I use podman (rootless) instead of docker.
As a reference point...
Late reply, but one option might be to mark the file as something that Proxmox VE shouldn't touch.
I do this for /etc/resolv.conf inside a PiHole LXC container, but I guess with the right name it would work for anything.
File...
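If the mechanism meant here is the .pve-ignore marker, a sketch of it might look like this. The marker is documented for files like /etc/resolv.conf; that it works for arbitrary names is an assumption. Demonstrated against a scratch directory instead of a real container rootfs:

```shell
# Sketch: creating /etc/.pve-ignore.<name> inside the container tells
# Proxmox VE not to regenerate <name> on start.
# On a real container the command would simply be:
#   touch /etc/.pve-ignore.resolv.conf
CT_ROOT=$(mktemp -d)
mkdir -p "$CT_ROOT/etc"
touch "$CT_ROOT/etc/.pve-ignore.resolv.conf"
ls -A "$CT_ROOT/etc"
```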
I just had this happen to me.
It wasn't very straightforward, but this seems to work:
- Kill the lxc-start process that started the container
- Manually remove the lock file
- Use lxc-stop with the --kill and --nolock arguments to (try to) stop the...
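The steps above might be sketched like this; the container ID 123 and the lock file path are placeholders/assumptions, so adjust them for your setup:

```shell
# Sketch of the recovery steps; 123 is a placeholder container ID.
# Guarded with `command -v` so it is a no-op on machines without the lxc tools.
CTID=123
if command -v lxc-stop >/dev/null 2>&1; then
    pkill -f "lxc-start.*$CTID" || true               # 1. kill the stuck lxc-start
    rm -f "/run/lock/lxc/var/lib/lxc/$CTID"           # 2. remove the lock file (path is an assumption)
    lxc-stop --name "$CTID" --kill --nolock || true   # 3. force-stop, bypassing the lock
fi
```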
Actually my script (which I improved quite a bit in the version I have locally) works OK.
Now the only issue is the NFS read-only share mount that was bind-mounted inside the container (group lxc_shares, or 100000), which now is not...
Thanks for your in-depth explanation :).
Maybe to add yet another attack surface related to mount points: what about the case of a GPU shared via one or more of the following
dev0: /dev/dri/card0,mode=0660
dev1...
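For context, a full pair of such device entries in /etc/pve/lxc/&lt;vmid&gt;.conf could look like the following; the render node path is an illustrative placeholder, not taken from the post:

```
dev0: /dev/dri/card0,mode=0660
dev1: /dev/dri/renderD128,mode=0660
```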
By default all unprivileged containers map to the same host range (typically 100000:65536). The actual isolation relies on multiple kernel layers, not just UIDs:
1. Mount namespaces: each container has its own filesystem view. A process inside...
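One quick way to see the UID mapping in effect is /proc; inside a default unprivileged Proxmox container the expected line is `0 100000 65536` (container UID 0 maps to host UID 100000, with a range of 65536 IDs):

```shell
# Sketch: print the UID mapping of the current process.
# Inside a default unprivileged LXC container this typically shows:
#   0     100000      65536
if [ -r /proc/self/uid_map ]; then
    cat /proc/self/uid_map
fi
```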
Most probably: https://bugzilla.proxmox.com/show_bug.cgi?id=7271
Please try downgrading pve-container like this: apt install pve-container=6.0.18, which should fix the problem. A patch is already on the way.
Maybe for me it works fine because it's supposed to be read-only anyway.
It's basically all the scripts I have on my NAS, mounted read-only for security reasons:
mp0: /tools_nfs,mp=/tools_nfs,mountoptions=discard;noatime,ro=1
These show up as...
A reboot of the LXC container does NOT work for me.
I need to first stop the LXC container, wait a few seconds, then start it again.
In a normal situation, the share is already mounted on the host.
The only "fix" I could find...
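The stop/wait/start sequence described above, as a sketch; 100 is a placeholder VMID and the 5-second pause is a guess at "a few seconds":

```shell
# Sketch: full stop, short pause, then start; 100 is a placeholder VMID.
# Guarded so it is a no-op on machines without Proxmox's pct tool.
CTID=100
if command -v pct >/dev/null 2>&1; then
    pct stop "$CTID"
    sleep 5            # give the host time to settle the bind mounts
    pct start "$CTID"
fi
```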
I am trying to understand a bit better the security architecture of unprivileged LXC containers.
I am familiar with virtual machines (KVM) and also Podman containers (similar to Docker), but relatively recently I've been deploying quite a...
Thanks for the explanation.
Yeah, I guess the variable name is just very confusing. Why have 2 settings if they have to be the same :)?
I should have read the description on kernel.org, but I just thought it was something like:
Obviously that...
You need to either use 2 MB pages or 1 GB pages, not a mix like you're trying to do.
Huge pages will only be used by VMs you have configured with huge pages; otherwise they will use standard 4 KB pages.
Huge page allocation also needs to take into account NUMA...
In my case it wasn't working at all.
I used to have this in /etc/default/grub.d/hugepages.cfg:
GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} default_hugepagesz=2M hugepagesz=1G hugepages=64 transparent_hugepage=never"...
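Following the "don't mix sizes" advice above, a consistent 1 GB-only variant might look like this; the page count 64 is illustrative and this is a guess at the intended setup, not a tested config:

```shell
GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX} default_hugepagesz=1G hugepagesz=1G hugepages=64 transparent_hugepage=never"
```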
Obviously you weren't hit by these :)
- https://forum.proxmox.com/threads/debian-13-1-lxc-template-fails-to-create-start-fix.171435/
- https://forum.proxmox.com/threads/upgrading-pve-tries-to-remove-proxmox-ve-package.149101/
-...