Unified cgroup v2 layout upgrade warning (PVE 6.4 to 7.0)

Well, this is almost a solution, except that nothing runs; you get "permission denied". Oi. I think my only hope is to move it to a host I have not upgraded yet. I can't reset the root password, can't change passwords. I entered this CT via pct enter <ctid>
 

Attachments

  • Screenshot_3.png (27.4 KB)

yswery

Member
May 6, 2018
Well, this is almost a solution, except that nothing runs; you get "permission denied". Oi. I think my only hope is to move it to a host I have not upgraded yet. I can't reset the root password, can't change passwords. I entered this CT via pct enter <ctid>
Remove the unprivileged variable from the conf and instead try adding the following to your LXC conf:

Code:
lxc.cgroup.devices.allow =
lxc.cgroup.devices.deny =

Let us know how that goes?
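For anyone unsure where these lines go: on the PVE host, each container's config lives at /etc/pve/lxc/<CTID>.conf, so the appended lines would look something like this (CT 101 is purely an example ID):

Code:
# /etc/pve/lxc/101.conf -- 101 is an example CTID
# empty values clear the device cgroup restrictions
lxc.cgroup.devices.allow =
lxc.cgroup.devices.deny =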
 

ness1602

Well-Known Member
Oct 28, 2014
242
31
48
Serbia
There isn't a newer version of systemd in Ubuntu 16.04.
Even with ESM; I have that on one machine.
 

ape_sinklair

Member
Jul 23, 2012
Remove the unprivileged variable from the conf and instead try adding the following to your LXC conf:

Code:
lxc.cgroup.devices.allow =
lxc.cgroup.devices.deny =

Let us know how that goes?
That worked for me. Thanks for your help.

I had to set it for all containers and not only for the old 16.04 one. Time to get rid of the Ubuntu 16.04 Container and remove all the settings again...
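Since the workaround apparently has to go into every container's config, here is a sketch of scripting that. It is demonstrated against a scratch directory; on a real host you would point CONF_DIR at /etc/pve/lxc and back it up first:

Bash:
# demo against a scratch copy; on a PVE host CONF_DIR would be /etc/pve/lxc
CONF_DIR="$(mktemp -d)"
touch "$CONF_DIR/101.conf" "$CONF_DIR/102.conf"  # stand-ins for real CT configs
for f in "$CONF_DIR"/*.conf; do
    printf 'lxc.cgroup.devices.allow =\nlxc.cgroup.devices.deny =\n' >> "$f"
done
grep -c 'lxc.cgroup.devices' "$CONF_DIR/102.conf"  # prints 2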
 

Stoiko Ivanov

Proxmox Staff Member
May 2, 2018
Remove the unprivileged variable from the conf and instead try adding the following to your LXC conf:

Code:
lxc.cgroup.devices.allow =
lxc.cgroup.devices.deny =

Let us know how that goes?

A patch for privileged containers running on legacy cgroup layouts was sent to the pve-devel list for discussion:
https://lists.proxmox.com/pipermail/pve-devel/2021-July/049452.html

I'd suggest removing those lines again once a working version becomes available, because this essentially allows a privileged container (= root on the host) to access arbitrary devices; you lose quite a bit of the container's isolation.
 

Clive Austin

Member
Mar 14, 2017
I've just updated systemd in one of my CentOS-7 containers (as described above) to systemd-234, but when I reboot the container and re-run pve6to7 --full ... (sadly) it still reports the same problem:

"WARN: Found at least one CT (xxx) which does not support running in a unified cgroup v2 layout. Either upgrade the Container distro or set systemd.unified_cgroup_hierarchy=0 in the Proxmox VE hosts' kernel cmdline! Skipping further CT compat checks"

Do you think this diagnostic might just be a "false positive"? Any further ideas would be very welcome... thanks people!
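For completeness, the other option that warning mentions - pinning the host to the legacy cgroup layout instead of upgrading systemd in every CT - is a one-line kernel cmdline change. A sketch for a GRUB-booted PVE host (hosts booting via systemd-boot edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead, if I recall correctly):

Code:
# /etc/default/grub on the PVE host (GRUB-booted systems):
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
# then run update-grub and reboot the host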
 

Clive Austin

Oops, my bad - I just haven't properly woken up this morning ;-)

I've only just noticed that the WARNing (above) is now referring to a different container, so it looks like upgrading systemd did in fact fix the problem in the original container and I just have to roll out this update to the other CentOS-7 containers. Apologies for the noise!
 

atari

Member
Mar 4, 2016
I had the same problem with a CentOS 7 CT on Proxmox 5.4-3 moving to 7.1-7; after updating systemd it works on the new Proxmox, thanks!!!
 

gardar

Member
May 21, 2014
Have a similar issue:
I tried to do it, but the CentOS LXC doesn't have internet access.

You can bring the network interface up with dhclient or by setting a static IP, or you can download the RPMs on your host and then transfer them over to the container.

For example:

Bash:
$ pct enter YOURCONTAINERID
$ dhclient
$ wget https://copr.fedorainfracloud.org/coprs/jsynacek/systemd-backports-for-centos-7/repo/epel-7/jsynacek-systemd-backports-for-centos-7-epel-7.repo -O /etc/yum.repos.d/jsynacek-systemd-centos-7.repo
$ yum update systemd

or

Bash:
# On your pve host
$ mkdir /root/systemd-backports && cd /root/systemd-backports
$ wget -r https://download.copr.fedorainfracloud.org/results/jsynacek/systemd-backports-for-centos-7/epel-7-x86_64/00580867-systemd/ --no-parent
$ cd download.copr.fedorainfracloud.org/results/jsynacek/systemd-backports-for-centos-7/epel-7-x86_64/00580867-systemd
$ for x in *.rpm; do pct push YOURCONTAINERID $x /root/$x; done
$ pct enter YOURCONTAINERID
$ cd /root   # the RPMs were pushed to the container's /root
$ yum localinstall *.rpm
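Either way, it's worth confirming the container actually picked up the backported systemd before re-running the checker - stock CentOS 7 ships systemd 219, and the copr backport is the 234 mentioned above. A sketch from the host:

Bash:
# from the PVE host; prints the container's systemd version
$ pct exec YOURCONTAINERID -- systemctl --version
# should now report systemd 234 rather than the stock 219
$ pve6to7 --full
# the WARN for this CT should be gone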
 
