I've migrated a number of hosts from PVE 3.4 to PVE 4.1, following the instructions (stop CT, back up CT, copy the backup, restore, reconfigure the network).
Most of my hosts use an internal init script to start an application server. That application server creates a socket, to which an internal Nginx web server tries to connect. This worked before.
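For context, the nginx side is just a plain unix-socket upstream, roughly like this (the socket path and upstream name here are only placeholders, not my real config):

    upstream appserver {
        # unix socket created by the application server on startup
        server unix:/var/run/app/app.sock;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://appserver;
        }
    }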
With PVE 4.1 the web server is no longer able to read from/write to that socket. The socket is created with permissions root:root 0755, whereas it should be root:root 0777. When I run pct exec <vid> chmod 777 <socket_path>, it works again. I've tried every way I could think of to set the umask and/or file permissions, but I wasn't able to get the socket created with the right permissions from the start.
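For reference, this is roughly the manual workaround (VMID and socket path are placeholders):

    # fix up the socket permissions from the host after the application server has started
    pct exec 101 -- chmod 777 /var/run/app/app.sock

and this is the kind of thing I tried in the init script, without success (again just a sketch, not the exact script):

    # set a permissive umask before starting the application server
    umask 0000
    start-stop-daemon --start --quiet --exec /usr/local/bin/appserver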
This happens both with my own CTs, which were based on a Debian 7 OpenVZ template, and with a GitLab installation that uses different init scripts.
Since I've seen loads of AppArmor messages in /var/log/messages reporting denied mounts from the CTs, I've added lxc.aa_profile: unconfined to the LXC configs, but this did not help. I've also noticed that, unlike with my OpenVZ setup, entering an LXC container does not create a fresh login environment, e.g. paths set from .bash_profile.
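This is what the container config looks like with that line added (/etc/pve/lxc/<vid>.conf, trimmed to the relevant part):

    # disable the AppArmor profile for this container
    lxc.aa_profile: unconfined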
- Do I need to have those lxc.aa_profile entries in my LXC config?
- Why are my UNIX sockets created with different permissions than before?
- Is it intended that LXC won't create a fresh environment when entering a container?
Christian