Had a situation where constraints from AppArmor were causing problems with cPanel's Dovecot. The container is NOT unprivileged and not protected.
The cPanel support guy said I need
lxc.aa_profile = unconfined
But from what I...
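For reference, that key goes into the container's raw config file; note the key was renamed in LXC 3.x, so which spelling applies depends on your LXC version (a sketch, with the CTID placeholder assumed):

```
# /etc/pve/lxc/<CTID>.conf -- raw LXC keys appended at the end
# LXC 2.x spelling:
lxc.aa_profile = unconfined
# LXC 3.x renamed the key:
lxc.apparmor.profile = unconfined
```

Running a container unconfined removes the AppArmor layer entirely, so it is only appropriate for trusted, privileged containers like the one described above.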
Something funky with pct enter -- this just started happening; it wasn't occurring before. Something's changed (no packages have been updated on the container that I know of... but obviously something changed while I wasn't looking...)
root@arch:/etc/pve/nodes/arch/lxc# pct enter 909
website:/# ps...
Which version is the minimal fixed version #?
pve-kernel-4.15.18-16-pve amd64 4.15.18-41 [52.5 MB]
pve-kernel-4.15.18-12-pve amd64 4.15.18-36 [52.5 MB]
These came in during a single update; I want to be sure which of my other hosts need upgrading.
Update: this of course doesn't dynamically generate the
lxc.cgroup.devices.allow = b 230:16 rwm
entry, which should extend to all 230:* device nodes. If you have a trusted environment, you could add entries for as many volumes as you think you'll ever need (i.e. :32, :48, :64, etc. on up), seems to...
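Pre-generating those entries can be sketched with a short loop (assuming, as in the 230:16 example above, that zvols use block major 230 and minors step by 16 per volume -- check `ls -l /dev/zd*` on your host; trusted environments only):

```shell
# Emit devices.allow lines for the first few zd minors (16, 32, 48, 64).
# Paste the output into the container config; adjust the range as needed.
entries=$(for minor in $(seq 16 16 64); do
    echo "lxc.cgroup.devices.allow = b 230:$minor rwm"
done)
echo "$entries"
```

This just prints the lines; it does not touch any config file, so you can review before appending.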
Some more helpful details - I guess I hadn't rebooted since tuning - and /dev/zd## devices can renumber randomly if you've created/removed other zvols. At any rate, for whatever reason, they changed on me.
So instead of using rootfs:/dev/zd16, e.g., in your rootfs options in lxc/$CTID.conf...
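The truncated suggestion presumably points at the stable ZFS device path: ZFS maintains /dev/zvol/<pool>/<dataset> symlinks that follow the zvol even when the /dev/zd## number changes. A hypothetical fragment (dataset path assumed from the mount shown elsewhere in this thread):

```
# hypothetical lxc/$CTID.conf fragment -- dataset path assumed;
# the /dev/zvol symlink tracks the zvol even when /dev/zd## renumbers
rootfs: /dev/zvol/rpool/data/subvol-202-disk-1
```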
Aha, that was it. I don't specifically remember doing anything to rsyslog, but /dev/log was not there.
This helped:
https://unix.stackexchange.com/questions/317064/how-do-i-restore-dev-log-in-systemdrsyslog-host
Had to use the symlink solution at the end after restarting the systemd socket...
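That symlink fix can be sketched as a small helper (the `root` argument exists only so the logic can be exercised safely; the socket path is the one systemd-journald provides on systemd hosts, per the linked answer):

```shell
restore_dev_log() {
    # $1: optional root prefix (for dry-run testing); empty on a real host
    local root="${1:-}"
    local src="$root/run/systemd/journal/dev-log"
    local dst="$root/dev/log"
    # only create the symlink if /dev/log is actually missing
    [ -e "$dst" ] || ln -s "$src" "$dst"
}
# On the real host, restart the socket first, then restore the link:
#   systemctl restart systemd-journald.socket && restore_dev_log
```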
Note that the container command (and container) seem to behave properly; I just get this warning.
pveversion:
proxmox-ve: 5.3-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.3-12 (running version: 5.3-12/5fbbbaf6)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35...
Whenever I issue a pct command I get
setlogsock(): type='unix': path not available at /usr/share/perl5/PVE/SafeSyslog.pm line 38.
Is there a path missing somewhere? This was after a recent upgrade to latest.
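That setlogsock() warning generally means the local syslog socket is absent. A quick diagnostic (assuming a systemd host, where journald normally provides /dev/log):

```shell
check_syslog_socket() {
    # report whether the syslog socket and journald's backing socket exist
    for p in /dev/log /run/systemd/journal/dev-log; do
        if [ -e "$p" ]; then
            echo "present: $p"
        else
            echo "MISSING: $p"
        fi
    done
}
check_syslog_socket
```

If /dev/log is missing, see the rsyslog/systemd-socket discussion elsewhere in this thread.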
Not a simple fix, unfortunately.
Is there a way to list specific device nodes as available to all unprivileged containers? I can't imagine a major risk in exposing a read-only /dev/random or /dev/urandom to containers.
How are /dev/null and /dev/zero allowed?
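For comparison, /dev/null and /dev/zero get through via the device-cgroup whitelist in the stock LXC common config; a sketch of those entries (major/minor numbers are the kernel's fixed assignments for these char devices):

```
# from the default LXC device whitelist (lxc.cgroup.devices.allow)
lxc.cgroup.devices.allow = c 1:3 rwm   # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm   # /dev/zero
lxc.cgroup.devices.allow = c 1:8 rwm   # /dev/random
lxc.cgroup.devices.allow = c 1:9 rwm   # /dev/urandom
```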
Seems LXC is susceptible to a container-escape problem. Just wondering about updates for this issue.
https://seclists.org/oss-sec/2019/q1/119
At this point in time, Debian has no patches yet.
https://security-tracker.debian.org/tracker/CVE-2019-5736
Why isn't CentOS 5.8 supported? I had to edit this code for the CentOS setup in /usr/share/perl5/PVE/LXC/Setup/CentOS.pm.
Changed the 6 to a 5, seems to run OK:
if ($release =~ m/release\s+(\d+\.\d+)(\.\d+)?/) {
    if ($1 >= 5 && $1 < 8) {
        $version = $1;
    }
}
Just ran up against this issue myself. Terrible there's no easy solution from LXC.
Yeah, OpenVZ was far superior in accounting in many, many ways -- you could get your own vmstat, your own load counter, your own IP list off each container immediately and easily - and centrally reported. Figuring...
Need more details - did you move your VPS container to an ext4 partition on a zvol? Creating zvols, mounting them, and copying to them is general Linux/ZFS, not specific to Proxmox. Lots of help on Stack Exchange or in the Oracle ZFS docs on how.
Figured it out. Here's how:
my container has a zvol on /dev/zd16:
/dev/zd16 76G 5.2G 67G 8% /rpool/data/subvol-202-disk-1
added some lxc permissions to all containers (since I'm just running cPanel here on this node):
since zd16 is
brw-rw---- 1 root disk 230, 16...
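The step from that `ls -l` output to a devices.allow line can be sketched with a small helper (hypothetical function name; assumes GNU coreutils `stat`, whose %t/%T print major/minor in hex):

```shell
devices_allow_entry() {
    # $1: a block or char device path; prints the matching
    #     lxc.cgroup.devices.allow line
    local dev="$1" type major minor
    major=$((0x$(stat -c %t "$dev")))   # %t = major number, hex
    minor=$((0x$(stat -c %T "$dev")))   # %T = minor number, hex
    case "$(stat -c %F "$dev")" in      # %F = file type
        block*)     type=b ;;
        character*) type=c ;;
    esac
    echo "lxc.cgroup.devices.allow = $type $major:$minor rwm"
}
# e.g. devices_allow_entry /dev/zd16  ->  lxc.cgroup.devices.allow = b 230:16 rwm
```

Keeping the lookup scripted avoids hand-copying major:minor pairs that, as noted above, can renumber after zvol changes.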