Docker in LXC problem after PVE kernel update.

vanes

Yesterday I updated PVE 6.0 to the latest kernel, and afterwards Docker inside my LXC container stopped working. I need some help.
When I run docker run hello-world I get this:
Code:
root@Docker-LXC:~# docker run hello-world
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"proc\\\" to rootfs \\\"/var/lib/docker/vfs/dir/051e79b9cbe59a624ddd067b07168309886dce2c29368368aef6960fc319796d\\\" at \\\"/proc\\\" caused \\\"permission denied\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled
Before yesterday's update everything worked fine.

Container conf.
Code:
arch: amd64
cores: 2
features: keyctl=1,nesting=1
hostname: Docker-LXC
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=6E:3E:16:7E:9C:B9,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-zfs:subvol-100-disk-0,size=512G
swap: 512
unprivileged: 1
Package version:
Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve2
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-63
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
 
This seems to be a bug related to some changes in lxc-pve regarding apparmor. You can file a bug in our bugtracker here.

A workaround is to add the following line to your '<vmid>.conf':
Code:
lxc.apparmor.raw: mount,
(note the comma ',' at the end)

Keep in mind that this will tell apparmor to allow all mounts.
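For example, assuming the container from the config above has VMID 100 (an assumption, adjust to your own VMID), applying the workaround on the PVE host would look roughly like this:
Code:
# append the apparmor rule to the container config (VMID 100 is assumed)
echo 'lxc.apparmor.raw: mount,' >> /etc/pve/lxc/100.conf
# restart the container so the changed profile takes effect
pct stop 100
pct start 100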
 
@vanes, thanks for posting your issue, and thank you @Stefan_R for the fix. This was kinda troublesome for me as well, and I had a hard time finding the cause.

Do we have any option to run PVE 6 with a stable kernel instead of a kernel that is still kinda in development?
 
This bug is not related to the kernel itself; it is an issue with apparmor blocking access. The 5.0 series kernel shipped with PVE 6.0 is considered stable; running custom kernels is possible, but not supported by us.
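If you want to confirm on your own host that apparmor is what blocks the mount, checking the kernel log while reproducing the error is a quick, non-invasive test (a rough sketch, the exact message format may vary):
Code:
# on the PVE host, while running 'docker run hello-world' inside the container
dmesg | grep -i 'apparmor.*DENIED'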
 
Hi, my experience is that since lxc-pve 3.1.0-62 nested LXC no longer works.

If you want nested LXC with Docker, you need to go back to lxc-pve 3.1.0-61 and wait for a bug fix in lxc-pve.

Solution (full sequence below):
  • apt-get install lxc-pve=3.1.0-61
  • apt-mark hold lxc-pve (once the bug is fixed: apt-mark unhold lxc-pve)
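On the PVE host the whole sequence would look something like this (version string as named above; only unhold once a fixed lxc-pve is actually released):
Code:
# downgrade to the last known-good lxc-pve and pin it
apt-get install lxc-pve=3.1.0-61
apt-mark hold lxc-pve
# later, once the bug is fixed, release the pin and upgrade again
apt-mark unhold lxc-pve
apt-get update && apt-get dist-upgrade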

Best, Tim
 
 
No way. Docker and ZFS with LXC: is it really worth going down such a fragile path until it gets rock solid?
 
1. ZFS as a subvol (instead of a raw/block image) spares you the Docker overlay2 file-system stuff. ;-)
2. Very easy with the LXC mount point (MP) concept of PVE, no comparison to the VM workflow (see the sketch below).
2.1 For each MP, PVE creates a ZFS subvolume with the same name (ID).
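As a sketch of point 2: adding such a mount point to an existing container can be done with pct. The VMID 100, the storage name local-zfs, the 32G size and the target path are assumptions for illustration only:
Code:
# create a new 32G subvolume on local-zfs and mount it at /var/lib/docker in CT 100
pct set 100 -mp0 local-zfs:32,mp=/var/lib/docker
# PVE allocates a matching ZFS subvolume (e.g. subvol-100-disk-1) behind the scenes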

We are looking forward to testing it in a cluster environment in 2020, step by step.

Maybe on a single node, but certainly not in a cluster.



You can't even use ZFS with Docker inside of an LX(C) container the way Docker on "real" ZFS can, can you? This alone is a total no-go in production. How do you handle this?
 
(1) is clear and exactly what I want to know. LXC abstracts the ZFS completely for security reasons so that you cannot access your ZFS inside of your container. Have you weakened the security to do so (e.g. allowing access to /dev/zfs) or how do you use ZFS inside of LXC for Docker?

(2) yes, but it does not work in a cluster. You need external storage for this, and we're back to NFS or Ceph. If you use Kubernetes, there is a distributed storage provider that does ZFS-over-NFS (similar to ZFS-over-iSCSI) if you want to have ZFS. I really like this idea, but I could not convince the PVE developers that ZFS-over-NFS is a great idea for cluster LXC.
 
1. The answer is ZFS subvolumes: the LXC container has no access to ZFS itself (no zfs list or zpool), but it gets exactly the dataset I want to give it.

2. I'm looking forward to this. I believe we can use a continuous snapshot (zfs send) concept for the LXC ZFS datasets on different nodes, far away from Ceph (roughly as sketched below).
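A minimal sketch of that zfs send idea, assuming local-zfs sits on the default rpool/data and a second node called pve2 (both assumptions):
Code:
# on the source node: snapshot the container's subvolume and replicate it
zfs snapshot rpool/data/subvol-100-disk-0@repl1
zfs send rpool/data/subvol-100-disk-0@repl1 | ssh pve2 zfs receive -F rpool/data/subvol-100-disk-0
PVE also ships pve-zsync, which automates exactly this kind of snapshot replication between nodes.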
 
Hmm, I still don't get how this can be ZFS, so that Docker sees it as ZFS. Could you please give the output of

Code:
docker info 2>/dev/null | grep 'Storage Driver'

from inside of your Docker LXC on ZFS container?
 
Code:
docker info 2>/dev/null | grep 'Storage Driver'
Storage Driver: vfs
The vfs storage is based on the ZFS subvolumes.

There is a difference between a ZFS dataset and a subvolume.
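That distinction is easy to see on the PVE host: a subvol-backed rootfs shows up as a plain filesystem dataset, while VM disks show up as zvols. A quick way to check (standard zfs tooling, no PVE specifics):
Code:
# list datasets with their type; subvol-100-disk-0 should show up as 'filesystem'
zfs list -t filesystem,volume -o name,type,mountpoint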
 

Thank you for reporting back. As far as the documentation of vfs goes, that driver is inferior to all the others: it is slower and uses more space, but it is the only driver that works on any backing filesystem. I don't see why this should be preferred over the real ZFS driver for Docker, which is CoW-based, very fast, and has quota and snapshot support (for cloning), but which obviously does not work in LXC. The ZFS driver is even superior to any overlay filesystem thanks to its internal CoW design.
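For completeness: the driver can also be pinned explicitly in daemon.json instead of relying on auto-detection. Inside the container, vfs is effectively the only option here, since the zfs driver needs direct access to the pool, which an unprivileged LXC does not have. A minimal sketch:
Code:
# inside the Docker LXC: pin the storage driver explicitly, then restart the daemon
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "vfs"
}
EOF
systemctl restart docker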
 
Of course, if it were finally pure native ZFS, that would be best.
 
