Oops, my bad - I just hadn't properly woken up this morning ;-)
I've only just noticed that the WARNing (above) is now referring to a different container, so it looks like upgrading systemd did in fact fix the problem in the original container and I just have to roll out this update to the...
I've just updated systemd in one of my CentOS-7 containers (as described above) to systemd-234, but when I reboot the container and re-run pve6to7 --full it (sadly) still reports the same problem:
"WARN: Found at least one CT (xxx) which does not support running in a unified cgroup v2...
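For anyone following along, a quick sketch of the check involved - the CT ID (100) and the minimum version (232) are my assumptions based on the warning text, not something confirmed by the pve6to7 output; on the PVE host you'd read the real version with something like `pct exec 100 -- systemctl --version`:

```shell
# Hypothetical values -- substitute the version reported inside your CT.
have=219      # stock CentOS 7 ships systemd-219
need=232      # assumed minimum for unified cgroup v2 support

if [ "$have" -ge "$need" ]; then
  echo "cgroup v2 OK"
else
  echo "systemd too old: $have < $need"
fi
```

With the stock CentOS 7 value this prints "systemd too old: 219 < 232", which is why the upgrade to systemd-234 was needed in the first place.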
Thanks for the tip! It was driving me nuts ...but after downgrading pve-container as suggested ("apt install pve-container=3.2-2") my systems are now happy again. Phew!
Hi,
Just wondering if you ever resolved this problem? I have 3 x (almost identical) Proxmox servers. All were running well, but after an update ONE of them is showing exactly the problem you describe. In my case the servers don't boot from ZFS - they just mount it as zfs-local storage ...so at...
Hi,
I always run an automated Proxmox backup to NFS storage on Friday nights ...but this morning, I'm seeing multiple backup failures in the form:
"ERROR: Backup of VM xxx failed - CT is locked (snapshot-delete)"
It's interesting to note that this error has only affected my 3 largest CTs...
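In case it helps anyone hitting the same error: a minimal sketch of how I'd inspect the lock, assuming CT 101 as a stand-in for the affected container (the real commands on the host would be `grep ^lock: /etc/pve/lxc/101.conf` followed by `pct unlock 101` once you're sure no backup task is still running) - here demonstrated against a saved copy of the config:

```shell
# Demo against a scratch copy of a CT config (the real file lives
# under /etc/pve/lxc/ on the host; 101 is a hypothetical CT ID).
printf 'arch: amd64\nlock: snapshot-delete\n' > /tmp/101.conf

# A leftover lock line is what makes vzdump bail out:
grep -q '^lock:' /tmp/101.conf && echo "CT is locked"
```

Only unlock once you've confirmed the snapshot-delete task is actually dead, otherwise you risk a half-removed snapshot.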
No, it's still an issue... but it's just been overtaken by other (more urgent) issues, so I haven't looked at it for a while. Interestingly though, one of the most annoying things is that after a reboot the server seems to arbitrarily reassign ens5 to the other of the 2 interfaces on the card...
Hi Fabian,
Thanks for reaching out... what extra info can I provide?
I'm running Proxmox Virtual Environment 5.0-31 (Debian 9)
I have installed 3 x Chelsio N320E T320 dual-port SFP+ 10Gb cards. One card has been installed in each of my two Proxmox servers and one in my OpenNAS server. The...
+1 ... I'm having exactly the same issue, but can't shed any more light I'm afraid! I wonder if we should report it as a "bug" and hope for a kernel update?
This looks like the problem I'm currently having.
Does anyone have more news on this problem - or any ideas where to start looking?
/var/log/messages:Apr 19 15:00:01 srv60 kernel: [334093.652828] audit: type=1400 audit(1492578001.389:2832): apparmor="DENIED" operation="mount" info="failed type...
+1 - This fixed the issue for me too. It was driving me nutty - thank you Ruben!
Just edit /etc/rc.d/rc.sysinit and comment out the line that says: /sbin/start_udev - then re-boot the container.
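If you'd rather script that edit than do it by hand, something like the sed below should work - shown here against a scratch copy so it's safe to try; inside the CT you'd point it at /etc/rc.d/rc.sysinit (and keep a backup first):

```shell
# Demo on a scratch file; in the real CT the target is /etc/rc.d/rc.sysinit.
f=$(mktemp)
printf '/sbin/start_udev\n' > "$f"

# Comment out the start_udev call, as described above:
sed -i 's|^/sbin/start_udev|#/sbin/start_udev|' "$f"

grep '^#/sbin/start_udev' "$f" && echo "commented out"
```

Then reboot the container as Ruben said.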
Bingo!