Well, maybe so, but a breaking change such as this one (which is NOT the first by a long shot) that causes containers to not start should be treated more carefully, not just forced upon users.
It clearly doesn't work. The system is up to date...
I tried everything suggested above; nothing helped. Only adding this line worked for me.
Proxmox 9.1.4, 3090 GPU passthrough to a Win11 VM, for Star Citizen EAC.
I can confirm I was also just affected by this.
I opened another issue a few minutes ago about it:
https://forum.proxmox.com/threads/lxc-fails-to-start-when-using-read-only-mountpoint.180440/
I have a mount point /tools_nfs that is mounted read-only on most of my systems, and I want to be able to pass it to each container.
It used to work fine until now.
I suspect one of the last system updates broke it :( .
Attached is the debug...
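For context, a read-only bind mount like this is usually expressed as a mount-point line in the container's config (a sketch; the VMID and the in-container path are placeholders, and the original poster's exact options are not shown):

```
# /etc/pve/lxc/<vmid>.conf — bind-mount the host's /tools_nfs
# read-only into the container at the same path
mp0: /tools_nfs,mp=/tools_nfs,ro=1
```

The same line can be set via `pct set <vmid> -mp0 /tools_nfs,mp=/tools_nfs,ro=1`.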
I have had the names of the VMs disappear several times now in Server View, but the names show up in Options for each VM. Several VMs are still running. I can change the view, but none will show the names.
This is a standalone server - not...
After the update, Proxmox now handles container mount points more strictly. This mainly affects unprivileged containers.
Mounted folders now show up as root:root inside the container, so services like MySQL can’t access their files.
If a mount...
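The root:root ownership is a symptom of unprivileged containers: container UIDs are shifted to high host UIDs, so host-owned files no longer map to the service's user inside the container. One common workaround (a sketch, not taken from this thread; the UID 1000 pass-through and the ranges shown are illustrative, based on the common 100000 offset) is a custom idmap in the container config:

```
# /etc/pve/lxc/<vmid>.conf — pass host UID 1000 through unshifted,
# keep the usual 100000-offset mapping for everything else
lxc.idmap: u 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 0 100000 65536
```

Note that the host's /etc/subuid and /etc/subgid must also allow root to map the passed-through ID, and file ownership on the host must match.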
You're confusing LXC (Linux containers) with LXD (software for managing VMs and LXCs). Both LXD (and its fork, Incus) and Proxmox VE manage VMs and LXCs. LXCs are more lightweight than VMs but only work for Linux applications, since they run directly...
The pve-container project and its tooling (e.g., pct, the Proxmox Container Toolkit) are already doing all that and more. We're using LXC similarly to how LXD does: as a low-level toolkit that we control with higher-level management, things like a syscall...
These really aren't containers; rather, LXD spins up a regular virtual machine.
All of which Proxmox VE can already do fine.
(On a technical level, containerization of Windows on e.g. Linux isn't even possible, as containers share the host...
OP mentioned TRIM… in ZFS it's only enabled for NVMe disks by default. See here and the later posts that link to the reasons.
https://forum.proxmox.com/threads/server-disk-i-o-delay-100-during-cloning-and-backup.173051/post-835149
ZFS has a...
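If you want to check or change this per pool (a sketch; `rpool` is a placeholder for the actual pool name), the relevant ZFS knobs are:

```
# Show whether continuous TRIM is enabled (ZFS default is 'off')
zpool get autotrim rpool

# Either enable continuous TRIM...
zpool set autotrim=on rpool

# ...or run a one-off/periodic manual TRIM instead
zpool trim rpool
```

Manual periodic `zpool trim` (e.g. via a timer) is often preferred over `autotrim=on` to avoid constant small TRIM activity.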
It helps, but it is not the main culprit with PVE. PVE writes constantly to its sqlite3 database, which is the storage backend behind /etc/pve, and also stores metrics via rrdtool.
Most of the time, those are very small writes that get write-amplified a...
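To illustrate the arithmetic behind that amplification (hypothetical numbers, not measured PVE figures): a synchronous write of a few hundred bytes still costs at least one full block on the underlying storage, so the lower bound on the amplification factor is block size over payload size:

```python
import math

# Rough write-amplification arithmetic with hypothetical numbers.
# A tiny database commit of `payload_bytes` still forces at least one
# full `block_bytes` write on disk (journal and metadata writes,
# which make it worse, are ignored here for simplicity).

def write_amplification(payload_bytes: int, block_bytes: int = 16 * 1024) -> float:
    """Lower bound on amplification for a single small synchronous write."""
    blocks = math.ceil(payload_bytes / block_bytes)
    return (blocks * block_bytes) / payload_bytes

# A 200-byte status update against a 16 KiB block: ~82x amplification
print(round(write_amplification(200), 1))
```

The 16 KiB block size here is only an example; the real factor depends on the filesystem's block/record size and on how often the writes are flushed.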
Hello,
Thank you for the wonderful software; it fits very well in the place where Zimbra would be (with extra steps, and only as a cog in a bigger machine).
I move messages based on user mailbox interaction with the /Spam folder. If the user marks a mail as spam, the...
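The rest of the setup is truncated, but on a Dovecot-backed mailbox this "learn from the /Spam folder" pattern is commonly wired up with the imapsieve plugin (a sketch under that assumption; the folder name `Spam` and the script path are placeholders, and the sieve script itself would call the actual learning tool):

```
# /etc/dovecot/conf.d/90-sieve.conf (excerpt) — trigger a sieve script
# whenever a message is copied or moved into the Spam folder
plugin {
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/usr/lib/dovecot/sieve/report-spam.sieve
}
```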
What? Pruning is actually quite fast. But garbage collection and verify jobs will always take a long time on HDDs, because PBS splits the stored data into countless small files (chunks), and every (!) chunk has to be read for that...
Some notes on this:
You don't need the "host" CPU type.
Memory Isolation uses Hyper-V to run critical processes inside a virtualized environment. Basically, right now it requires nested virtualization.
As you may know, nested virtualization can be...
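On the Proxmox host side, nested virtualization is controlled by a KVM module parameter (a sketch; shown for an Intel host, use kvm-amd / kvm_amd on AMD):

```
# /etc/modprobe.d/kvm.conf — enable nested virtualization
# (reload the kvm-intel module or reboot afterwards)
options kvm-intel nested=Y

# Verify after reloading:
#   cat /sys/module/kvm_intel/parameters/nested   -> Y
```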