I am trying to get an NFS share accessible from an unprivileged LXC container. I've been following the guide here, mounting the NFS share on my host, then bind mounting the host directory into my container.
The problem I am having is that when the mount point...
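For reference, a minimal sketch of that setup, assuming a hypothetical NFS server at 192.168.1.10 exporting /export/data and container ID 101 (all names and paths are placeholders):

  # On the Proxmox host: mount the NFS export
  mkdir -p /mnt/nfs-data
  mount -t nfs 192.168.1.10:/export/data /mnt/nfs-data

  # Bind mount the host directory into the unprivileged container
  # (mount point index mp0 and both paths are examples)
  pct set 101 -mp0 /mnt/nfs-data,mp=/mnt/data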
@news
Profix?
Thanks for the link, I'll take a look at it!
So you first make a local backup and then sync it to the external drive with rsync? Encrypting with LUKS sounds like a good idea..
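A minimal sketch of how that workflow might look, assuming a hypothetical LUKS partition on /dev/sdb1 and a local backup directory /backup (all paths and names are placeholders):

  # Unlock the LUKS container and mount it
  cryptsetup open /dev/sdb1 backup_crypt
  mount /dev/mapper/backup_crypt /mnt/external

  # Mirror the local backup to the external drive (-a preserves attributes,
  # --delete removes files on the target that no longer exist locally)
  rsync -a --delete /backup/ /mnt/external/backup/

  # Unmount and lock the drive again
  umount /mnt/external
  cryptsetup close backup_crypt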
I have seen others report problems with importing from time to time, and I thought I would share one cause of the issue that I don't think has been widely mentioned... (mainly because it happened today... down to about 6 VMs out of almost...
Oh cra*, I should not have looked into this thread :)
I welcome myself to this issue with the Dell Pro Max T2 onboard piece of...
I think I was downloading 500 Mbit/s tops (500 Mbit/s from my ISP) when this happened.
Funny thing is, I went to MeshCentral (other...
Hi SinistersPisces,
Can you elaborate a little on how you set up the VM, and which services are used for network interface management?
The Debian cloud images ship with netplan and the corresponding network management...
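For context, a typical netplan configuration on such an image looks roughly like this (the file name and interface name are assumptions; cloud-init often generates its own file):

  # /etc/netplan/50-example.yaml (hypothetical file name)
  network:
    version: 2
    ethernets:
      eth0:            # interface name depends on the image/VM
        dhcp4: true
  # Apply with: netplan apply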
Thanks for the reply.
I agree that it seems storage-related, but I don't really think it is.
Let me know what you think given the following info:
- I have a total of 3 identical hosts: same hardware, same PVE version, and more or less the same usage. They are...
I ran into the same problem; it took me a while to figure out what was going on until I found this thread. Downgrading pve-container to 6.0.18 resolved it for me, thanks. I will hold off on updating my Proxmox nodes until an updated version...
Hello,
I'm troubleshooting this now, but wanted to start a thread to document what happened. Hopefully, someone will find this in the future if they're troubleshooting something similar.
I've got a Debian 13 VM. I set it up with NIC 1, on VLAN...
Unfortunately, I also got caught by this. It's affecting all unprivileged containers with bind mounts. I caught mine by trying to migrate a container after updating.
It doesn't matter whether the mount is RO or not; my NFS share is RW. Current workaround...
The resolving commit for the mentioned vioscsi (and viostor) bug was merged into virtio master on 21 Jan 2026 (commit cade4cb, corresponding tag mm315).
So if the to-be-released version is tagged >= mm315, the patch will be included.
As for me...
It looks like you ran into the bug reported here: https://bugzilla.proxmox.com/show_bug.cgi?id=7271
Feel free to chime in there!
To temporarily downgrade pve-container, you can run: apt install pve-container=6.0.18
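As a sketch, the downgrade plus an optional hold so the next upgrade doesn't immediately pull the broken version back in (the hold step is my addition, not from the bug report):

  # Pin pve-container to the last known-good version
  apt install pve-container=6.0.18

  # Optionally keep apt from upgrading it until a fixed release lands
  apt-mark hold pve-container
  # Once a fixed version is out:
  apt-mark unhold pve-container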
I ran into this same issue today. This thread fixed it for me: https://forum.proxmox.com/threads/proxmox-9-1-5-breaks-lxc-mount-points.180161/
tldr: downgrade pve-container to 6.0.18
davfs2 has stopped functioning in both privileged and unprivileged LXC containers with FUSE enabled, as well as on the Proxmox host itself. The issue occurs when mounting WebDAV resources (tested with a cloud storage service), where the mount...
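For anyone trying to reproduce this, a minimal sketch of the kind of mount that fails, with a placeholder WebDAV URL and mount point:

  # Install davfs2, then mount a WebDAV resource (URL and path are placeholders;
  # credentials come from /etc/davfs2/secrets or an interactive prompt)
  apt install davfs2
  mkdir -p /mnt/webdav
  mount -t davfs https://example.com/webdav /mnt/webdav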
So just like my other guide, this is more for my own records so I can come back and refer to it.
I know there are scripts for these things, but I prefer doing them manually so I can learn along the way.
Maybe it can help someone else in the...
Fundamentally, that's not a wrong assumption, since strictly speaking ZFS is not true shared storage (precisely because there is always some data loss involved). But if it's sufficient for your own purposes, that doesn't hurt ;)
The cherry on top is that the Veeam plugin uses the number of sockets instead of the number of cores on a full VM restore. This happened with Veeam v13.0.1 and Proxmox 9.1.1. Sloppiness cubed.
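If a restore comes back with the wrong CPU topology, it can at least be corrected afterwards; a sketch, assuming VM ID 100 and a 1-socket/4-core target (both placeholders):

  # Fix the CPU topology on the restored VM (ID and counts are examples)
  qm set 100 --sockets 1 --cores 4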