Proxmox VE 7.0 (beta) released!

DEBUG conf - conf.c:run_buffer:305 - Script exec /usr/share/lxcfs/lxc.mount.hook 100 lxc mount produced output: missing /var/lib/lxcfs/proc/ - lxcfs not running?

Is lxcfs running?
* `systemctl status -l lxcfs`
* `journalctl -u lxcfs.service`
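If it turns out not to be running, starting the service again (and then starting the container) is worth a try:
Code:
systemctl start lxcfs
systemctl status -l lxcfs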
 
lxcfs shows as running:
Code:
root@vhost07 ~ # systemctl status -l lxcfs
● lxcfs.service - FUSE filesystem for LXC
     Loaded: loaded (/lib/systemd/system/lxcfs.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Mon 2021-07-12 11:48:38 CEST; 38min ago
       Docs: man:lxcfs(1)
   Main PID: 655 (code=exited, status=0/SUCCESS)

Jun 06 08:44:29 vhost07 lxcfs[655]: - cpuview_daemon
Jun 06 08:44:29 vhost07 lxcfs[655]: - loadavg_daemon
Jun 06 08:44:29 vhost07 lxcfs[655]: - pidfds
Jul 12 11:40:33 vhost07 systemd[1]: Reloading FUSE filesystem for LXC.
Jul 12 11:40:33 vhost07 systemd[1]: Reloaded FUSE filesystem for LXC.
Jul 12 11:48:38 vhost07 systemd[1]: Stopping FUSE filesystem for LXC...
Jul 12 11:48:38 vhost07 lxcfs[655]: Running destructor lxcfs_exit
Jul 12 11:48:38 vhost07 fusermount[2285342]: /bin/fusermount: failed to unmount /var/lib/lxcfs: Invalid argument
Jul 12 11:48:38 vhost07 systemd[1]: lxcfs.service: Succeeded.
Jul 12 11:48:38 vhost07 systemd[1]: Stopped FUSE filesystem for LXC

Output from journalctl -u lxcfs.service:
Code:
-- Boot cb9439213de34d469d5d96be11c18e40 --
Jun 06 08:44:28 vhost07 systemd[1]: Started FUSE filesystem for LXC.
Jun 06 08:44:29 vhost07 lxcfs[655]: Running constructor lxcfs_init to reload liblxcfs
Jun 06 08:44:29 vhost07 lxcfs[655]: mount namespace: 4
Jun 06 08:44:29 vhost07 lxcfs[655]: hierarchies:
Jun 06 08:44:29 vhost07 lxcfs[655]:   0: fd:   5:
Jun 06 08:44:29 vhost07 lxcfs[655]:   1: fd:   6: name=systemd
Jun 06 08:44:29 vhost07 lxcfs[655]:   2: fd:   7: memory
Jun 06 08:44:29 vhost07 lxcfs[655]:   3: fd:   8: cpu,cpuacct
Jun 06 08:44:29 vhost07 lxcfs[655]:   4: fd:   9: net_cls,net_prio
Jun 06 08:44:29 vhost07 lxcfs[655]:   5: fd:  10: devices
Jun 06 08:44:29 vhost07 lxcfs[655]:   6: fd:  11: hugetlb
Jun 06 08:44:29 vhost07 lxcfs[655]:   7: fd:  12: rdma
Jun 06 08:44:29 vhost07 lxcfs[655]:   8: fd:  13: perf_event
Jun 06 08:44:29 vhost07 lxcfs[655]:   9: fd:  14: pids
Jun 06 08:44:29 vhost07 lxcfs[655]:  10: fd:  15: blkio
Jun 06 08:44:29 vhost07 lxcfs[655]:  11: fd:  16: cpuset
Jun 06 08:44:29 vhost07 lxcfs[655]:  12: fd:  17: freezer
Jun 06 08:44:29 vhost07 lxcfs[655]: Kernel supports pidfds
Jun 06 08:44:29 vhost07 lxcfs[655]: Kernel supports swap accounting
Jun 06 08:44:29 vhost07 lxcfs[655]: api_extensions:
Jun 06 08:44:29 vhost07 lxcfs[655]: - cgroups
Jun 06 08:44:29 vhost07 lxcfs[655]: - sys_cpu_online
Jun 06 08:44:29 vhost07 lxcfs[655]: - proc_cpuinfo
Jun 06 08:44:29 vhost07 lxcfs[655]: - proc_diskstats
Jun 06 08:44:29 vhost07 lxcfs[655]: - proc_loadavg
Jun 06 08:44:29 vhost07 lxcfs[655]: - proc_meminfo
Jun 06 08:44:29 vhost07 lxcfs[655]: - proc_stat
Jun 06 08:44:29 vhost07 lxcfs[655]: - proc_swaps
Jun 06 08:44:29 vhost07 lxcfs[655]: - proc_uptime
Jun 06 08:44:29 vhost07 lxcfs[655]: - shared_pidns
Jun 06 08:44:29 vhost07 lxcfs[655]: - cpuview_daemon
Jun 06 08:44:29 vhost07 lxcfs[655]: - loadavg_daemon
Jun 06 08:44:29 vhost07 lxcfs[655]: - pidfds
Jul 12 11:40:33 vhost07 systemd[1]: Reloading FUSE filesystem for LXC.
Jul 12 11:40:33 vhost07 systemd[1]: Reloaded FUSE filesystem for LXC.
Jul 12 11:48:38 vhost07 systemd[1]: Stopping FUSE filesystem for LXC...
Jul 12 11:48:38 vhost07 lxcfs[655]: Running destructor lxcfs_exit
Jul 12 11:48:38 vhost07 fusermount[2285342]: /bin/fusermount: failed to unmount /var/lib/lxcfs: Invalid argument
Jul 12 11:48:38 vhost07 systemd[1]: lxcfs.service: Succeeded.
Jul 12 11:48:38 vhost07 systemd[1]: Stopped FUSE filesystem for LXC.
 
Is there a way to downgrade back to Proxmox 6? After I did the upgrade, my LXC container can't run Docker anymore.
Error:
Code:
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.695603476Z" level=warning msg="Your kernel does not support cgroup memory limit"
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.695683015Z" level=warning msg="Unable to find cpu cgroup in mounts"
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.695764428Z" level=warning msg="Unable to find blkio cgroup in mounts"
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.695845461Z" level=warning msg="Unable to find cpuset cgroup in mounts"
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.696277343Z" level=warning msg="mountpoint for pids not found"
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.697084420Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.697200348Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.699779568Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000047650, TRANSIENT_FAILURE" module=grpc
Jul 12 13:42:21 turnkey-docker dockerd[969]: time="2021-07-12T13:42:21.700182876Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000047650, CONNECTING" module=grpc
Jul 12 13:42:22 turnkey-docker dockerd[969]: Error starting daemon: Devices cgroup isn't mounted

I tried to set "systemd.unified_cgroup_hierarchy=0" in GRUB_CMDLINE_LINUX, but that prevents the container from starting, as mentioned earlier.
From what I can tell, it probably has to do with the change in Proxmox 7 where cgroup v2 is used instead of cgroup v1.
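For anyone trying the same, the change looks roughly like this (assuming a standard GRUB boot; keep whatever options are already inside the quotes):
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"

# regenerate the GRUB config, then reboot
update-grub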

Also, running "cgroupfs-umount" followed by "cgroupfs-mount" didn't help, neither on the host nor in the guest.
 
This was my issue too. Tried many different troubleshooting ideas with no success yet.
 
My current workaround: make a backup, install Proxmox 6.4, and restore all VMs and containers.
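Sketched out (the guest IDs, storage name, and archive names below are only placeholders), that amounts to something like:
Code:
# on the current host: back up each guest
vzdump 101 --storage local --mode stop

# after reinstalling Proxmox VE 6.4: restore from the dump files
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-<timestamp>.tar.zst
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100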
 
What's weird is that sometimes Docker will install, but then when I try to run an image it throws an AppArmor error.
 
This is an issue with the new cgroupv2 feature affecting "old" distros like CentOS 7 and Ubuntu 16.04. We're working on improving the handling there.
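As a quick way to check whether a given container is affected, you can query its systemd version from the host (container ID 101 is just an example here):
Code:
pct exec 101 -- systemctl --version
# CentOS 7 ships systemd 219, which is too old for a pure cgroupv2 (unified) hierarchy
 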
Please mention this as a big warning at the beginning of the release notes: folks running CentOS 7 (and we are a lot) should not upgrade. I just upgraded my whole cluster, only to discover after reboot that all my containers (90% CentOS 7 and 10% Debian 10) have no network interfaces up and no services running, and issuing an ifconfig ethX up brings the interface up with no IP address, ..., and no routing. Is there a way to go back?
 
Yes, please read the upgrade documentation!

https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Old_Container_and_CGroupv2

You confirmed that you read those instructions before proceeding with the upgrade, and those instructions prominently mention running the check script, at least once with --full, which also checks all containers for incompatible distros.
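(For anyone following along: the check script in question is pve6to7 from the upgrade guide; a full run looks like this.)
Code:
# runs the complete set of checks, including the per-container distro check
pve6to7 --full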
 
Thanks fabian, I ran the check and got a warning for only one container, and thus proceeded with the upgrade. Not understanding the implications of the incompatibility between cgroupv2 and cgroupv1, I admit I had not read the upgrade documentation carefully. My apologies for my previous comment.
 
Have you by any chance been using systemd from backports?
If the container's systemd could be upgraded to 231 through a backport, would CentOS 7 LXCs work correctly in a cgroupv2 environment?
Update: Attempting to upgrade systemd to 234 in CentOS 7 ended in tears, with several exceptions and errors. The backport utilised was https://copr.fedorainfracloud.org/coprs/jsynacek/systemd-backports-for-centos-7/ .

Also, quoting from PVE docs:
CGroup Version Compatibility: Another important difference is that the devices controller is configured in a completely different way. Because of this, file system quotas are currently not supported in a pure cgroupv2 environment.
By file system quotas, do you mean file system quotas for users inside the LXC?

Are quotas likely to be supported in cgroupv2 in the future?
 
Inside the container, yes, via quota-tools (quotacheck/edquota & friends). Supporting this will require some bigger changes to how we start up containers.
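(For context, this is roughly the in-container workflow those tools provide where quotas are supported; the filesystem and username below are just examples.)
Code:
quotacheck -cugm /     # create/refresh aquota.user and aquota.group on the root filesystem
quotaon /              # enable quotas
edquota -u someuser    # interactively edit limits for a user
repquota /             # report current usage and limits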
 
Do you think those changes to container startup in 7.x, needed to enable cgroupv2 user quotas, will or could be done before 6.4 reaches EoL?
 

Is there any progress or approximate ETA on supporting user quotas in containers in PVE 7.x with cgroupv2 enabled?
 
Are there any updates on this issue?
Which issue exactly? As a general announcement thread this is a relatively big one with many topics in it. FS quota support inside cgv2 containers isn't actively being worked on IIRC; you could use VMs for such use cases, as they provide better isolation in general, which often goes along with the cases where user quotas are a requirement.
 
I'm sorry, I was talking about the quotas in LXC containers.
 
